[GH-ISSUE #9683] Gemma 3 4B & 12B is very slow when KV Cache quantization is enabled #68376

Closed
opened 2026-05-04 13:40:34 -05:00 by GiteaMirror · 10 comments

Originally created by @vYLQs6 on GitHub (Mar 12, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9683

Originally assigned to: @jessegross on GitHub.

What is the issue?

This issue is very similar to #8158

Gemma 3 4B & 12B run extremely slowly when KV cache quantization is enabled. There doesn't seem to be any hit on the models' response quality, just speed, which is kind of strange.

I'm using Windows 11 + RTX 4090

Here is an example using the model: `ollama run gemma3:4b`

set OLLAMA_FLASH_ATTENTION=1 && set OLLAMA_KV_CACHE_TYPE=q8_0 && ollama serve
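
For reference, a minimal PowerShell equivalent of the cmd-style `set` chain above (the variable names are taken from the server config dump in the log below; this is just one way to set them):

```
# PowerShell equivalent of the cmd-style command above
$env:OLLAMA_FLASH_ATTENTION = "1"
$env:OLLAMA_KV_CACHE_TYPE   = "q8_0"   # omit or set to "f16" to leave the KV cache unquantized
ollama serve
```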

PS S:\> ollama run gemma3:4b --verbose
>>> Help me study vocabulary: write a sentence for me to fill in the blank, and I'll try to pick the correct option.
Okay, let’s do it! Here’s your sentence:

The speaker’s ______ delivery captivated the entire audience, leaving them spellbound by his passionate words.

Choose the best word to fill in the blank:

a) monotonous
b) articulate
c) hesitant
d) rambling

Let me know your choice!

total duration:       4.7324228s
load duration:        40.2864ms
prompt eval count:    36 token(s)
prompt eval duration: 391ms
prompt eval rate:     92.07 tokens/s
eval count:           70 token(s)
eval duration:        4.297s
eval rate:            16.29 tokens/s

This is obviously very slow for a 4090; I can run a 14B model at Q8 at 40+ tok/s.
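
For anyone reproducing this without the CLI, the same decode rate can be derived from the `/api/generate` response stats (a sketch from a POSIX-style shell with `jq`; `eval_count` and `eval_duration` are the nanosecond fields Ollama reports, and quoting would need adjusting for PowerShell):

```
# Query the running server directly and derive tokens/second from the response stats.
curl -s http://127.0.0.1:11434/api/generate -d '{
  "model": "gemma3:4b",
  "prompt": "Write one sentence about vocabulary study.",
  "stream": false
}' | jq '{eval_count, eval_duration, tok_per_s: (.eval_count / (.eval_duration / 1e9))}'
```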

Relevant log output

D:\LLM>set OLLAMA_FLASH_ATTENTION=1   && set OLLAMA_KV_CACHE_TYPE=q8_0   && ollama serve
2025/03/12 18:24:00 routes.go:1225: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE:q8_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\\LLM\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-03-12T18:24:00.776+08:00 level=INFO source=images.go:432 msg="total blobs: 507"
time=2025-03-12T18:24:00.787+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-12T18:24:00.796+08:00 level=INFO source=routes.go:1292 msg="Listening on 127.0.0.1:11434 (version 0.6.0)"
time=2025-03-12T18:24:00.796+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-12T18:24:00.796+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-03-12T18:24:00.796+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=16 efficiency=0 threads=32
time=2025-03-12T18:24:00.909+08:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-f47e9117-13d8-d21e-7b80-735c8d31444d library=cuda compute=8.9 driver=12.7 name="NVIDIA GeForce RTX 4090" overhead="365.8 MiB"
time=2025-03-12T18:24:00.914+08:00 level=INFO source=amd_hip_windows.go:103 msg="AMD ROCm reports no devices found"
time=2025-03-12T18:24:00.914+08:00 level=INFO source=amd_windows.go:49 msg="no compatible amdgpu devices detected"
time=2025-03-12T18:24:00.915+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-f47e9117-13d8-d21e-7b80-735c8d31444d library=cuda variant=v12 compute=8.9 driver=12.7 name="NVIDIA GeForce RTX 4090" total="24.0 GiB" available="22.5 GiB"
[GIN] 2025/03/12 - 18:24:15 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/03/12 - 18:24:22 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/12 - 18:24:22 | 200 |     39.8898ms |       127.0.0.1 | POST     "/api/show"
time=2025-03-12T18:24:23.000+08:00 level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=D:\LLM\.ollama\models\blobs\sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada gpu=GPU-f47e9117-13d8-d21e-7b80-735c8d31444d parallel=4 available=24111435776 required="3.9 GiB"
time=2025-03-12T18:24:23.015+08:00 level=INFO source=server.go:105 msg="system memory" total="63.6 GiB" free="53.7 GiB" free_swap="107.1 GiB"
time=2025-03-12T18:24:23.031+08:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=35 layers.offload=35 layers.split="" memory.available="[22.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="3.9 GiB" memory.required.partial="3.9 GiB" memory.required.kv="544.0 MiB" memory.required.allocations="[3.9 GiB]" memory.weights.total="2.3 GiB" memory.weights.repeating="1.8 GiB" memory.weights.nonrepeating="525.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="1.0 GiB"
time=2025-03-12T18:24:23.031+08:00 level=INFO source=server.go:185 msg="enabling flash attention"
time=2025-03-12T18:24:23.093+08:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-12T18:24:23.096+08:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-03-12T18:24:23.097+08:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-12T18:24:23.101+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-03-12T18:24:23.102+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-03-12T18:24:23.102+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-03-12T18:24:23.102+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-03-12T18:24:23.102+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.final_logit_softcapping default=30
time=2025-03-12T18:24:23.102+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-03-12T18:24:23.106+08:00 level=INFO source=server.go:405 msg="starting llama server" cmd="C:\\Users\\***\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model D:\\LLM\\.ollama\\models\\blobs\\sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada --ctx-size 8192 --batch-size 512 --n-gpu-layers 35 --threads 16 --flash-attn --kv-cache-type q8_0 --no-mmap --parallel 4 --port 57962"
time=2025-03-12T18:24:23.110+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-12T18:24:23.110+08:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
time=2025-03-12T18:24:23.110+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-03-12T18:24:23.129+08:00 level=INFO source=runner.go:882 msg="starting ollama engine"
time=2025-03-12T18:24:23.133+08:00 level=INFO source=runner.go:938 msg="Server listening on 127.0.0.1:57962"
time=2025-03-12T18:24:23.195+08:00 level=WARN source=ggml.go:149 msg="key not found" key=general.name default=""
time=2025-03-12T18:24:23.195+08:00 level=WARN source=ggml.go:149 msg="key not found" key=general.description default=""
time=2025-03-12T18:24:23.195+08:00 level=INFO source=ggml.go:67 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=883 num_key_values=35
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from C:\Users\***\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\***\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
time=2025-03-12T18:24:23.268+08:00 level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-03-12T18:24:23.348+08:00 level=INFO source=ggml.go:289 msg="model weights" buffer=CUDA0 size="3.1 GiB"
time=2025-03-12T18:24:23.348+08:00 level=INFO source=ggml.go:289 msg="model weights" buffer=CPU size="525.0 MiB"
time=2025-03-12T18:24:23.364+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model"
time=2025-03-12T18:24:24.255+08:00 level=INFO source=ggml.go:356 msg="compute graph" backend=CUDA0 buffer_type=CUDA0
time=2025-03-12T18:24:24.255+08:00 level=INFO source=ggml.go:356 msg="compute graph" backend=CPU buffer_type=CUDA_Host
time=2025-03-12T18:24:24.269+08:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-12T18:24:24.271+08:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-03-12T18:24:24.273+08:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-12T18:24:24.277+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-03-12T18:24:24.277+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-03-12T18:24:24.277+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-03-12T18:24:24.277+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-03-12T18:24:24.277+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.final_logit_softcapping default=30
time=2025-03-12T18:24:24.277+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-03-12T18:24:24.370+08:00 level=INFO source=server.go:624 msg="llama runner started in 1.26 seconds"
[GIN] 2025/03/12 - 18:24:24 | 200 |     1.488709s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/03/12 - 18:24:36 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/03/12 - 18:24:38 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/03/12 - 18:24:47 | 200 |    4.7324228s |       127.0.0.1 | POST     "/api/chat"

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

v0.6.0

GiteaMirror added the bug label 2026-05-04 13:40:34 -05:00

@StailGot commented on GitHub (Mar 12, 2025):

Same with an AMD 7900 XTX on Windows. gemma3:4b is slower than gemma3:27b:

ollama run --verbose gemma3:27b

total duration:       3.8002812s
load duration:        24.0505ms
prompt eval count:    36 token(s)
prompt eval duration: 126ms
prompt eval rate:     285.71 tokens/s
eval count:           102 token(s)
eval duration:        3.649s
eval rate:            27.95 tokens/s

ollama run --verbose gemma3:4b

total duration:       6.1007555s
load duration:        3.0743006s
prompt eval count:    36 token(s)
prompt eval duration: 196ms
prompt eval rate:     183.67 tokens/s
eval count:           53 token(s)
eval duration:        2.829s
eval rate:            18.73 tokens/s

@vYLQs6 commented on GitHub (Mar 12, 2025):

Just wanted to also mention that `ollama run gemma3:12B` seems impacted by the same issue. Would it be possible to explore a fix? A working KV cache is crucial for me given my 24 GB of VRAM, as it's essential for running larger models comfortably. Thank you so much for looking into this!


@3unnycheung commented on GitHub (Mar 12, 2025):

true


@jujaga commented on GitHub (Mar 12, 2025):

Looks like my comment here also applies to this issue: https://github.com/ollama/ollama/issues/9678#issuecomment-2718887628


@adradr commented on GitHub (Mar 16, 2025):

Same here on an M1 Max with 64 GB.


@jessegross commented on GitHub (Mar 17, 2025):

Thanks for the issue. I can confirm that this is caused by the fact that flash attention is being executed on the CPU for some models and cache quantizations, causing us to bounce back and forth between GPU and CPU. Likely we are not compiling the CUDA kernels for the set of inputs needed in those cases.
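
One rough way to observe the GPU/CPU bounce described above (my assumption about how it would show up, not part of the report): watch GPU utilization while a generation is running; if attention is falling back to the CPU, the GPU should sit well below full utilization during decode.

```
# Poll NVIDIA GPU utilization and memory once per second while the model is generating
nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 1
```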


@jessegross commented on GitHub (Mar 25, 2025):

The issue is indeed a missing kernel but it's not a simple matter of compiling it in. See here for further information:
https://github.com/ggml-org/llama.cpp/issues/12352#issuecomment-2727452955

Unfortunately, there isn't much we're going to be able to do in the near term.


@ProjectMoon commented on GitHub (Mar 31, 2025):

It's kind of nuts how big the drop is. It goes from 31 tokens/second with an FP16 KV cache to 10 tokens/second with a Q8_0 KV cache on my hardware (RX 6800 XT). The solution I'm going with for now is to run two separate Ollama instances and proxy them: one for Gemma 3, one for everything else. The one for Gemma 3 has KV cache quantization turned off.
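
A minimal sketch of that two-instance workaround (the second port and the f16 default are assumptions; shown with POSIX-style env-var prefixes, adapt for Windows):

```
# Instance 1: default port, quantized KV cache for everything except Gemma 3
OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 ollama serve &

# Instance 2: separate port for Gemma 3, KV cache left at the f16 default
OLLAMA_HOST=127.0.0.1:11435 OLLAMA_FLASH_ATTENTION=1 ollama serve &

# Point Gemma 3 requests (or a reverse-proxy route) at the second instance
OLLAMA_HOST=127.0.0.1:11435 ollama run gemma3:4b --verbose
```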


@Mondonno commented on GitHub (Aug 28, 2025):

I seem to have similar problems on an M3 Pro MacBook Pro with 36 GB of unified RAM. The model takes very long to start writing, but once it starts it holds a solid 15 tok/s.


@jessegross commented on GitHub (Oct 3, 2025):

Fixed with #12245
