[GH-ISSUE #12913] Vulkan backend performance is slow (7.73 tokens/s) #70620

Closed
opened 2026-05-04 22:18:02 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @vt-alt on GitHub (Nov 2, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12913

What is the issue?

(I understand that the Vulkan backend is experimental.) Ollama is noticeably slower than llama.cpp with the Vulkan backend compiled against the same libraries, on the same system with an RTX 4090.

vulkan$ ollama run --verbose gpt-oss:20b hi

  • 1st run:
    total duration: 12.015367362s
    load duration: 4.303715301s
    prompt eval count: 68 token(s)
    prompt eval duration: 4.481882633s
    prompt eval rate: 15.17 tokens/s
    eval count: 32 token(s)
    eval duration: 3.18689393s
    eval rate: 10.04 tokens/s
  • 2nd run:
    total duration: 5.245348617s
    load duration: 165.261292ms
    prompt eval count: 68 token(s)
    prompt eval duration: 133.71568ms
    prompt eval rate: 508.54 tokens/s
    eval count: 38 token(s)
    eval duration: 4.917218578s
    eval rate: 7.73 tokens/s

In comparison llama.cpp:

vulkan$ llama-cli -hf ggml-org/gpt-oss-20b-GGUF  --ctx-size 0 --jinja -ub 2048 -b 2048 -st -p 'introduce yourself'
llama_perf_sampler_print:    sampling time =      12.53 ms /   212 runs   (    0.06 ms per token, 16926.15 tokens per second)
llama_perf_context_print:        load time =    3047.07 ms
llama_perf_context_print: prompt eval time =      55.87 ms /    70 tokens (    0.80 ms per token,  1252.82 tokens per second)
llama_perf_context_print:        eval time =     641.19 ms /   141 runs   (    4.55 ms per token,   219.90 tokens per second)
llama_perf_context_print:       total time =     906.90 ms /   211 tokens
llama_perf_context_print:    graphs reused =        140
llama_memory_breakdown_print: | memory breakdown [MiB] | total   free     self   model   context   compute    unaccounted |
llama_memory_breakdown_print: |   - Vulkan0 (RTX 4090) | 24564 = 6586 + (17313 = 10949 +    3123 +    3241) +         663 |
llama_memory_breakdown_print: |   - Host               |                  1650 =   586 +       0 +    1063                |

Relevant log output

ollama[1291103]: ggml_vulkan: Found 1 Vulkan devices:
ollama[1291103]: ggml_vulkan: 0 = NVIDIA GeForce RTX 4090 (NVIDIA) | uma: 0 | fp16: 1 | bf16: 1 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: NV_coopmat2
ollama[1291103]: load_backend: loaded Vulkan backend from /usr/lib/ollama/libggml-vulkan.so
ollama[1291103]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
ollama[1291103]: time=2025-11-02T17:14:49.713Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
ollama[1291103]: ggml_backend_vk_get_device_memory device ff90d373-e63e-427f-2f30-73348e89e4bd utilizing NVML memory reporting free: 25105989632 total: 25757220864
ollama[1291103]: time=2025-11-02T17:14:49.780Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0
ollama[1291103]: time=2025-11-02T17:14:49.933Z level=DEBUG source=ggml.go:857 msg="compute graph" nodes=1445 splits=2
ollama[1291103]: time=2025-11-02T17:14:49.935Z level=DEBUG source=ggml.go:857 msg="compute graph" nodes=1445 splits=2
ollama[1291103]: time=2025-11-02T17:14:49.936Z level=DEBUG source=device.go:212 msg="model weights" device=Vulkan0 size="11.8 GiB"
ollama[1291103]: time=2025-11-02T17:14:49.936Z level=DEBUG source=device.go:217 msg="model weights" device=CPU size="1.1 GiB"
ollama[1291103]: time=2025-11-02T17:14:49.936Z level=DEBUG source=device.go:223 msg="kv cache" device=Vulkan0 size="3.1 GiB"
ollama[1291103]: time=2025-11-02T17:14:49.936Z level=DEBUG source=device.go:234 msg="compute graph" device=Vulkan0 size="16.8 GiB"
ollama[1291103]: time=2025-11-02T17:14:49.936Z level=DEBUG source=device.go:239 msg="compute graph" device=CPU size="5.6 MiB"
ollama[1291103]: time=2025-11-02T17:14:49.936Z level=DEBUG source=device.go:244 msg="total memory" size="32.8 GiB"
ollama[1291103]: time=2025-11-02T17:14:49.936Z level=DEBUG source=server.go:695 msg=memory success=true required.InputWeights=1158266880 required.CPU.Graph=5898240 required.Vulkan0.ID=ff90d373-e63e-427f-2f30-73348e89e4bd required.Vulkan0.Weights="[477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 1158278400]" required.Vulkan0.Cache="[9437184 268435456 9437184 268435456 9437184 268435456 9437184 268435456 9437184 268435456 9437184 268435456 9437184 268435456 9437184 268435456 9437184 268435456 9437184 268435456 9437184 268435456 9437184 268435456 0]" required.Vulkan0.Graph=18090297360
ollama[1291103]: time=2025-11-02T17:14:49.936Z level=DEBUG source=server.go:892 msg="available gpu" id=ff90d373-e63e-427f-2f30-73348e89e4bd library=Vulkan "available layer vram"="6.1 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="16.8 GiB"
ollama[1291103]: time=2025-11-02T17:14:49.936Z level=DEBUG source=server.go:706 msg="new layout created" layers="10[ID:ff90d373-e63e-427f-2f30-73348e89e4bd Layers:10(14..23)]"
ollama[1291103]: time=2025-11-02T17:14:49.936Z level=INFO source=runner.go:1222 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:131072 KvCacheType: NumThreads:10 GPULayers:10[ID:ff90d373-e63e-427f-2f30-73348e89e4bd Layers:10(14..23)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ollama[1291103]: time=2025-11-02T17:14:49.974Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
ollama[1291103]: ggml_backend_vk_get_device_memory device ff90d373-e63e-427f-2f30-73348e89e4bd utilizing NVML memory reporting free: 25105989632 total: 25757220864
ollama[1291103]: time=2025-11-02T17:14:49.986Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0
ollama[1291103]: time=2025-11-02T17:14:50.140Z level=DEBUG source=ggml.go:857 msg="compute graph" nodes=1445 splits=325
ollama[1291103]: time=2025-11-02T17:14:50.142Z level=DEBUG source=ggml.go:857 msg="compute graph" nodes=1445 splits=3
ollama[1291103]: time=2025-11-02T17:14:50.142Z level=DEBUG source=device.go:212 msg="model weights" device=Vulkan0 size="4.4 GiB"
ollama[1291103]: time=2025-11-02T17:14:50.142Z level=DEBUG source=device.go:217 msg="model weights" device=CPU size="8.4 GiB"
ollama[1291103]: time=2025-11-02T17:14:50.142Z level=DEBUG source=device.go:223 msg="kv cache" device=Vulkan0 size="1.3 GiB"
ollama[1291103]: time=2025-11-02T17:14:50.142Z level=DEBUG source=device.go:228 msg="kv cache" device=CPU size="1.8 GiB"
ollama[1291103]: time=2025-11-02T17:14:50.142Z level=DEBUG source=device.go:234 msg="compute graph" device=Vulkan0 size="16.6 GiB"
ollama[1291103]: time=2025-11-02T17:14:50.142Z level=DEBUG source=device.go:239 msg="compute graph" device=CPU size="45.1 MiB"
ollama[1291103]: time=2025-11-02T17:14:50.142Z level=DEBUG source=device.go:244 msg="total memory" size="32.6 GiB"
ollama[1291103]: time=2025-11-02T17:14:50.142Z level=DEBUG source=server.go:695 msg=memory success=true required.InputWeights=1158266880 required.CPU.Weights="[477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 0 0 0 0 0 0 0 0 0 0 1158278400]" required.CPU.Cache="[9437184 268435456 9437184 268435456 9437184 268435456 9437184 268435456 9437184 268435456 9437184 268435456 9437184 268435456 0 0 0 0 0 0 0 0 0 0 0]" required.CPU.Graph=47251360 required.Vulkan0.ID=ff90d373-e63e-427f-2f30-73348e89e4bd required.Vulkan0.Weights="[0 0 0 0 0 0 0 0 0 0 0 0 0 0 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 0]" required.Vulkan0.Cache="[0 0 0 0 0 0 0 0 0 0 0 0 0 0 9437184 268435456 9437184 268435456 9437184 268435456 9437184 268435456 9437184 268435456 0]" required.Vulkan0.Graph=17813489664
ollama[1291103]: time=2025-11-02T17:14:50.142Z level=DEBUG source=server.go:892 msg="available gpu" id=ff90d373-e63e-427f-2f30-73348e89e4bd library=Vulkan "available layer vram"="6.3 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="16.6 GiB"
ollama[1291103]: time=2025-11-02T17:14:50.142Z level=DEBUG source=server.go:706 msg="new layout created" layers="10[ID:ff90d373-e63e-427f-2f30-73348e89e4bd Layers:10(14..23)]"
ollama[1291103]: time=2025-11-02T17:14:50.143Z level=INFO source=runner.go:1222 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:131072 KvCacheType: NumThreads:10 GPULayers:10[ID:ff90d373-e63e-427f-2f30-73348e89e4bd Layers:10(14..23)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ollama[1291103]: time=2025-11-02T17:14:50.181Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
ollama[1291103]: ggml_backend_vk_get_device_memory device ff90d373-e63e-427f-2f30-73348e89e4bd utilizing NVML memory reporting free: 25105989632 total: 25757220864
ollama[1291103]: time=2025-11-02T17:14:50.246Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0
ollama[1291103]: time=2025-11-02T17:14:50.892Z level=DEBUG source=ggml.go:857 msg="compute graph" nodes=1445 splits=325
ollama[1291103]: time=2025-11-02T17:14:50.894Z level=DEBUG source=ggml.go:857 msg="compute graph" nodes=1445 splits=3
ollama[1291103]: time=2025-11-02T17:14:50.894Z level=DEBUG source=device.go:212 msg="model weights" device=Vulkan0 size="4.4 GiB"
ollama[1291103]: time=2025-11-02T17:14:50.894Z level=DEBUG source=device.go:217 msg="model weights" device=CPU size="8.4 GiB"
ollama[1291103]: time=2025-11-02T17:14:50.894Z level=DEBUG source=device.go:223 msg="kv cache" device=Vulkan0 size="1.3 GiB"
ollama[1291103]: time=2025-11-02T17:14:50.894Z level=DEBUG source=device.go:228 msg="kv cache" device=CPU size="1.8 GiB"
ollama[1291103]: time=2025-11-02T17:14:50.894Z level=DEBUG source=device.go:234 msg="compute graph" device=Vulkan0 size="16.6 GiB"
ollama[1291103]: time=2025-11-02T17:14:50.894Z level=DEBUG source=device.go:239 msg="compute graph" device=CPU size="45.1 MiB"
ollama[1291103]: time=2025-11-02T17:14:50.894Z level=DEBUG source=device.go:244 msg="total memory" size="32.6 GiB"
ollama[1291103]: time=2025-11-02T17:14:50.894Z level=DEBUG source=server.go:695 msg=memory success=true required.InputWeights=1158266880 required.CPU.Weights="[477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 0 0 0 0 0 0 0 0 0 0 1158278400]" required.CPU.Cache="[9437184 268435456 9437184 268435456 9437184 268435456 9437184 268435456 9437184 268435456 9437184 268435456 9437184 268435456 0 0 0 0 0 0 0 0 0 0 0]" required.CPU.Graph=47251360 required.Vulkan0.ID=ff90d373-e63e-427f-2f30-73348e89e4bd required.Vulkan0.Weights="[0 0 0 0 0 0 0 0 0 0 0 0 0 0 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 477628544 0]" required.Vulkan0.Cache="[0 0 0 0 0 0 0 0 0 0 0 0 0 0 9437184 268435456 9437184 268435456 9437184 268435456 9437184 268435456 9437184 268435456 0]" required.Vulkan0.Graph=17813489664
ollama[1291103]: time=2025-11-02T17:14:50.895Z level=DEBUG source=server.go:892 msg="available gpu" id=ff90d373-e63e-427f-2f30-73348e89e4bd library=Vulkan "available layer vram"="6.3 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="16.6 GiB"
ollama[1291103]: time=2025-11-02T17:14:50.895Z level=DEBUG source=server.go:706 msg="new layout created" layers="10[ID:ff90d373-e63e-427f-2f30-73348e89e4bd Layers:10(14..23)]"
ollama[1291103]: time=2025-11-02T17:14:50.895Z level=INFO source=runner.go:1222 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:131072 KvCacheType: NumThreads:10 GPULayers:10[ID:ff90d373-e63e-427f-2f30-73348e89e4bd Layers:10(14..23)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ollama[1291103]: time=2025-11-02T17:14:50.895Z level=INFO source=ggml.go:482 msg="offloading 10 repeating layers to GPU"
ollama[1291103]: time=2025-11-02T17:14:50.895Z level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
ollama[1291103]: time=2025-11-02T17:14:50.895Z level=INFO source=ggml.go:494 msg="offloaded 10/25 layers to GPU"

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.12.9

GiteaMirror added the bug label 2026-05-04 22:18:02 -05:00

@jessegross commented on GitHub (Nov 3, 2025):

This is likely caused by not having flash attention enabled - https://github.com/ollama/ollama/issues/12928
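
A minimal way to test that hypothesis, assuming the server is started by hand (OLLAMA_FLASH_ATTENTION is a documented Ollama setting):

```sh
# Restart the server with flash attention requested, then re-run the
# benchmark above and compare the eval rate.
OLLAMA_FLASH_ATTENTION=1 ollama serve
```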


@vt-alt commented on GitHub (Nov 3, 2025):

Yeah, even though we have OLLAMA_FLASH_ATTENTION=1 set, the log says FlashAttention:false. But I also wonder why only 10 of 25 layers go to the GPU and not all of them.
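
One common cause, sketched here as an assumption rather than a confirmed diagnosis: when Ollama runs as a systemd service, variables exported in an interactive shell never reach the daemon and have to be set on the unit itself.

```sh
# Set the variable on the service, restart, and confirm it shows up in the
# load request logged by the runner.
sudo systemctl edit ollama
# then add, in the override file:
#   [Service]
#   Environment="OLLAMA_FLASH_ATTENTION=1"
sudo systemctl restart ollama
journalctl -u ollama | grep FlashAttention   # expect FlashAttention:true
```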


@jessegross commented on GitHub (Nov 3, 2025):

Flash attention is more memory efficient, so without it there isn't enough space to load all of the layers.
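
For scale, a back-of-envelope estimate, assuming gpt-oss:20b uses 64 attention heads and that the non-flash-attention path materializes an fp32 score matrix for a full 512-token batch against the 131072-token context:

```sh
# batch (512) x context (131072) x heads (64, assumed) x 4 bytes (fp32)
echo $((512 * 131072 * 64 * 4))   # 17179869184 bytes = 16 GiB
```

That lands in the same ballpark as the 16.6-16.8 GiB "compute graph" in the log above; flash attention avoids materializing this matrix, which is where the memory saving comes from.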


@vt-alt commented on GitHub (Nov 3, 2025):

Ah, that explains it! I had OLLAMA_CONTEXT_LENGTH=131072 set, which presumably produced a large KV cache, so only 10 layers fit into VRAM. After removing that environment variable, it runs much faster (~101.36 tokens/s), with

time=2025-11-03T23:16:17.723Z level=INFO source=ggml.go:494 msg="offloaded 25/25 layers to GPU"

But for a model like this we really want the long context to make full use of its capabilities.
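
A possible way to keep the long context on 24 GiB of VRAM, assuming flash attention works on this setup (both variables are documented Ollama settings, and KV cache quantization requires flash attention to be enabled):

```sh
# Enable flash attention and quantize the KV cache to shrink the
# 131072-token cache, then restart the server.
OLLAMA_FLASH_ATTENTION=1 \
OLLAMA_KV_CACHE_TYPE=q8_0 \
OLLAMA_CONTEXT_LENGTH=131072 \
ollama serve
```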

Reference: github-starred/ollama#70620