[GH-ISSUE #13118] qwen3-vl fails to offload layers to Radeon iGPU on AI MAX 395 on Windows #70743

Closed
opened 2026-05-04 22:48:18 -05:00 by GiteaMirror · 11 comments

Originally created by @lihaofd on GitHub (Nov 17, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13118

What is the issue?

I have tried ollama 0.12.10 on an AI MAX 395 on Windows:

set OLLAMA_GPU_LAYER=directml
ollama run qwen3-vl:8b

Even with /set parameter num_gpu 99, it always fails to offload any layers to the GPU.

load_backend: loaded CPU backend from C:\Users\amd\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
Device 0: AMD Radeon(TM) 8060S Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32, ID: 0
load_backend: loaded ROCm backend from C:\Users\amd\AppData\Local\Programs\Ollama\lib\ollama\rocm\ggml-hip.dll
time=2025-11-17T21:11:04.830+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.NO_PEER_COPY=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-11-17T21:11:05.690+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:16 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-17T21:11:05.876+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:16 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-17T21:11:06.122+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:16 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-17T21:11:06.122+08:00 level=INFO source=device.go:217 msg="model weights" device=CPU size="5.7 GiB"
time=2025-11-17T21:11:06.122+08:00 level=INFO source=ggml.go:482 msg="offloading 0 repeating layers to GPU"
time=2025-11-17T21:11:06.123+08:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2025-11-17T21:11:06.123+08:00 level=INFO source=ggml.go:494 msg="offloaded 0/37 layers to GPU"
time=2025-11-17T21:11:06.124+08:00 level=INFO source=device.go:228 msg="kv cache" device=CPU size="576.0 MiB"
time=2025-11-17T21:11:06.124+08:00 level=INFO source=device.go:239 msg="compute graph" device=CPU size="4.2 GiB"
time=2025-11-17T21:11:06.124+08:00 level=INFO source=device.go:244 msg="total memory" size="10.5 GiB"
time=2025-11-17T21:11:06.124+08:00 level=INFO source=sched.go:500 msg="loaded runners" count=1
time=2025-11-17T21:11:06.125+08:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
time=2025-11-17T21:11:06.125+08:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
time=2025-11-17T21:11:06.627+08:00 level=INFO source=server.go:1289 msg="llama runner started in 1.97 seconds"

Trying other models with ollama in the same environment, all of them offload to the iGPU:
qwen3:8b
qwen2.5vl:7b
gemma3n:e4b
gpt-oss:20b

I also tried Qwen3VL-8B-Thinking-Q4_K_M.gguf with ollama 0.12.10 on the same machine, and it looks like it can offload to the GPU.

I guess this issue is caused by ollama 0.12.10 not having the latest llama.cpp backend?

Relevant log output


OS

Windows

GPU

AMD

CPU

AMD

Ollama version

0.12.10

GiteaMirror added the bug label 2026-05-04 22:48:18 -05:00

@rick-github commented on GitHub (Nov 17, 2025):

OLLAMA_GPU_LAYER is not an ollama configuration variable.

It works fine on my AI MAX 395.

$ ollama -v
ollama version is 0.12.10
$ ollama run qwen3-vl:8b hello
Thinking...
Okay, the user said "hello". That's pretty straightforward. I need to respond in a friendly and welcoming manner.
...
I think that's it. A friendly hello, offer assistance, keep it simple.
...done thinking.

Hello! 😊 How can I assist you today? Let me know if you have any questions or need help with anything.

$ ollama ps
NAME           ID              SIZE     PROCESSOR    CONTEXT    UNTIL   
qwen3-vl:8b    901cae732162    10 GB    100% GPU     8192       Forever    

Post the full log. For extra debugging information, set OLLAMA_DEBUG=1 in the server environment.
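
For Windows readers following along, a minimal sketch of getting such a variable into the server environment (the setx approach and the log location are the usual Ollama-on-Windows convention, not something confirmed in this thread):

REM Quit Ollama from the system tray first, then persist the variable
REM for future processes:
setx OLLAMA_DEBUG 1
REM Relaunch Ollama and reproduce the problem; the server log is
REM typically found at %LOCALAPPDATA%\Ollama\server.log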


@lihaofd commented on GitHub (Nov 17, 2025):

@rick-github
I also tried unsetting OLLAMA_GPU_LAYER.

When running gemma3:4b, it can offload to the GPU:

C:\Users\amd>ollama ps
NAME         ID              SIZE      PROCESSOR    CONTEXT    UNTIL
gemma3:4b    a2af6cc3eb7f    5.4 GB    100% GPU     4096       4 minutes from now

But when running qwen3-vl:8b, it shows everything on the CPU:

C:\Users\amd>ollama ps
NAME           ID              SIZE     PROCESSOR    CONTEXT    UNTIL
qwen3-vl:8b    901cae732162    11 GB    100% CPU     4096       4 minutes from now

Attached is the full log with OLLAMA_DEBUG=1: full.log (https://github.com/user-attachments/files/23587169/full.log)

It looks like:

time=2025-11-17T23:50:45.611+08:00 level=DEBUG source=server.go:971 msg="insufficient VRAM to load any model layers" model=C:\Users\amd\.ollama\models\blobs\sha256-ed12a4674d727a74ac4816c906094ea9d3119fbea46ca93288c3ce4ffbe38c55
time=2025-11-17T23:50:44.557+08:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32

I have tried ollama rm qwen3-vl:8b and then ollama pull qwen3-vl:8b again just now; it shows the same issue.
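
A side note for anyone repeating this triage on Windows: DEBUG lines like the two above can be pulled straight out of the server log; a hypothetical one-liner, assuming the default install's log path:

REM Search the server log for the scheduler's fit decision and the
REM detected GPU memory in one pass (findstr ORs multiple /C: literals)
findstr /C:"insufficient VRAM" /C:"inference compute" "%LOCALAPPDATA%\Ollama\server.log"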


@rick-github commented on GitHub (Nov 17, 2025):

time=2025-11-17T23:50:41.341+08:00 level=INFO source=types.go:42 msg="inference compute" id=0 filter_id=0
 library=ROCm compute=gfx1151 name=ROCm0 description="AMD Radeon(TM) 8060S Graphics"
 libdirs=ollama,rocm driver=60450.10 pci_id=0000:c4:00.0 type=iGPU total="4.0 GiB" available="3.1 GiB"

Your 8060S is only configured with 4GB of RAM. There is a configuration option in the BIOS that allows you to increase this up to 96GB. An alternative is to use the Vulkan backend (starting from version 0.12.11, set OLLAMA_VULKAN=1 in the server environment), which will be able to access more RAM.
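
A back-of-the-envelope reading of the two logs together (an interpretation, not something spelled out in the thread; it assumes the scheduler needs the compute graph to fit on the GPU alongside any offloaded layers):

available VRAM (ROCm0)    3.1 GiB
compute graph             4.2 GiB   <- exceeds available VRAM on its own
model weights             5.7 GiB   (~160 MiB per layer across 37 layers)
KV cache                  576 MiB
total needed              ~10.5 GiB

The unusually large compute graph, plausibly due to the vision projector, would explain why qwen3-vl in particular fails to fit, and why raising the UMA carve-out (or using a backend that can see more memory) would resolve it.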


@lihaofd commented on GitHub (Nov 17, 2025):

@rick-github I have changed the UMA size in the BIOS to a larger value (32GB), but it looks like it still runs entirely on the CPU. Is a special BIOS version needed?


@rick-github commented on GitHub (Nov 17, 2025):

My machine:

BIOS Version EVO-X2 1.05. Advanced > GFX Configuration > iGPU Configuration == UMA_SPECIFIED, UMA Frame buffer size == 96G.

Post the log.


@lihaofd commented on GitHub (Nov 17, 2025):

My machine is a ROG Flow Z13 GZ302EA with 64GB of memory.
BIOS Vendor: American Megatrends
Version: 308
Any suggested URL for a BIOS upgrade?


@rick-github commented on GitHub (Nov 17, 2025):

> Any suggested URL for a BIOS upgrade?

Manufacturer website?

Try the Vulkan support, it might work better.


@lihaofd commented on GitHub (Nov 17, 2025):

> > Any suggested URL for a BIOS upgrade?
>
> Manufacturer website?
>
> Try the Vulkan support, it might work better.

I tried setting OLLAMA_VULKAN=1 with version 0.12.11; it still runs entirely on the CPU.


@rick-github commented on GitHub (Nov 17, 2025):

Post the log.


@lihaofd commented on GitHub (Nov 17, 2025):

> Post the log.

full_vulkan.log: https://github.com/user-attachments/files/23588824/full_vulkan.log


@lihaofd commented on GitHub (Nov 18, 2025):

I have upgraded the BIOS to the latest version and set UMA to 48GB, then installed the latest ollama 0.12.11. It looks like qwen3-vl:8b can now offload everything to the GPU with the ROCm backend. Thanks!
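
For anyone landing here later, the healthy end state matches rick-github's output earlier in the thread, with the PROCESSOR column reading 100% GPU (values reproduced from his comment above):

$ ollama ps
NAME           ID              SIZE     PROCESSOR    CONTEXT    UNTIL
qwen3-vl:8b    901cae732162    10 GB    100% GPU     8192       Forever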

Reference: github-starred/ollama#70743