[GH-ISSUE #12579] Ollama v0.12.5 is slow compared to v0.12.3 (43 s vs. 3 s); in v0.12.5 the model did not fit in GPU memory #54860

Closed
opened 2026-04-29 07:40:45 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @DanoPTT on GitHub (Oct 11, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12579

Originally assigned to: @jessegross on GitHub.

What is the issue?

After updating to v0.12.5, the model gemma3:12b no longer fully fits in GPU memory on a GeForce RTX 5060 Ti (GPU utilization is only 40%) and CPU usage has gone up.
In v0.12.3 the model fit in the GPU and GPU utilization was 89% (Windows 11).
Here is the log from v0.12.3:
time=2025-10-11T20:41:19.852+02:00 level=INFO source=runner.go:1171 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:49[ID:GPU-29d9387c-984f-b168-0ba2-2b7477a4fc3a Layers:49(0..48)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-11T20:41:19.926+02:00 level=INFO source=ggml.go:131 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1065 num_key_values=37
load_backend: loaded CPU backend from C:\ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 5060 Ti, compute capability 12.0, VMM: yes, ID: GPU-29d9387c-984f-b168-0ba2-2b7477a4fc3a
load_backend: loaded CUDA backend from C:\ollama\lib\ollama\cuda_v13\ggml-cuda.dll
time=2025-10-11T20:41:20.043+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-10-11T20:41:20.343+02:00 level=INFO source=runner.go:1171 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:49[ID:GPU-29d9387c-984f-b168-0ba2-2b7477a4fc3a Layers:49(0..48)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-11T20:41:20.828+02:00 level=INFO source=runner.go:1171 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:49[ID:GPU-29d9387c-984f-b168-0ba2-2b7477a4fc3a Layers:49(0..48)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-11T20:41:20.828+02:00 level=INFO source=ggml.go:487 msg="offloading 48 repeating layers to GPU"
time=2025-10-11T20:41:20.828+02:00 level=INFO source=ggml.go:493 msg="offloading output layer to GPU"
time=2025-10-11T20:41:20.828+02:00 level=INFO source=ggml.go:498 msg="offloaded 49/49 layers to GPU"
time=2025-10-11T20:41:20.829+02:00 level=INFO source=backend.go:310 msg="model weights" device=CUDA0 size="7.6 GiB"
time=2025-10-11T20:41:20.829+02:00 level=INFO source=backend.go:315 msg="model weights" device=CPU size="787.5 MiB"
time=2025-10-11T20:41:20.829+02:00 level=INFO source=backend.go:321 msg="kv cache" device=CUDA0 size="4.5 GiB"
time=2025-10-11T20:41:20.829+02:00 level=INFO source=backend.go:332 msg="compute graph" device=CUDA0 size="2.2 GiB"
time=2025-10-11T20:41:20.829+02:00 level=INFO source=backend.go:337 msg="compute graph" device=CPU size="7.5 MiB"
time=2025-10-11T20:41:20.829+02:00 level=INFO source=backend.go:342 msg="total memory" size="15.0 GiB"
time=2025-10-11T20:41:20.829+02:00 level=INFO source=sched.go:470 msg="loaded runners" count=1
time=2025-10-11T20:41:20.829+02:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
time=2025-10-11T20:41:20.830+02:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
time=2025-10-11T20:41:23.355+02:00 level=INFO source=server.go:1289 msg="llama runner started in 3.57 seconds"
[GIN] 2025/10/11 - 20:41:43 | 200 | 24.0308007s | | POST "/api/chat"

Log after update to v0.12.5:
time=2025-10-11T20:44:01.977+02:00 level=INFO source=runner.go:1189 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:49[ID:GPU-29d9387c-984f-b168-0ba2-2b7477a4fc3a Layers:49(0..48)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-11T20:44:02.050+02:00 level=INFO source=ggml.go:133 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1065 num_key_values=37
load_backend: loaded CPU backend from C:\ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 5060 Ti, compute capability 12.0, VMM: yes, ID: GPU-29d9387c-984f-b168-0ba2-2b7477a4fc3a
load_backend: loaded CUDA backend from C:\ollama\lib\ollama\cuda_v13\ggml-cuda.dll
time=2025-10-11T20:44:02.146+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-10-11T20:44:45.090+02:00 level=INFO source=runner.go:1189 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:48[ID:GPU-29d9387c-984f-b168-0ba2-2b7477a4fc3a Layers:48(0..47)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-11T20:44:45.377+02:00 level=INFO source=runner.go:1189 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:48[ID:GPU-29d9387c-984f-b168-0ba2-2b7477a4fc3a Layers:48(0..47)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-11T20:44:45.942+02:00 level=INFO source=runner.go:1189 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:65536 KvCacheType: NumThreads:6 GPULayers:48[ID:GPU-29d9387c-984f-b168-0ba2-2b7477a4fc3a Layers:48(0..47)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-11T20:44:45.942+02:00 level=INFO source=ggml.go:477 msg="offloading 48 repeating layers to GPU"
time=2025-10-11T20:44:45.942+02:00 level=INFO source=ggml.go:481 msg="offloading output layer to CPU"
time=2025-10-11T20:44:45.942+02:00 level=INFO source=ggml.go:488 msg="offloaded 48/49 layers to GPU"
time=2025-10-11T20:44:45.943+02:00 level=INFO source=device.go:206 msg="model weights" device=CUDA0 size="6.0 GiB"
time=2025-10-11T20:44:45.943+02:00 level=INFO source=device.go:211 msg="model weights" device=CPU size="2.3 GiB"
time=2025-10-11T20:44:45.943+02:00 level=INFO source=device.go:217 msg="kv cache" device=CUDA0 size="4.5 GiB"
time=2025-10-11T20:44:45.943+02:00 level=INFO source=device.go:228 msg="compute graph" device=CUDA0 size="3.2 GiB"
time=2025-10-11T20:44:45.943+02:00 level=INFO source=device.go:233 msg="compute graph" device=CPU size="1.1 GiB"
time=2025-10-11T20:44:45.943+02:00 level=INFO source=device.go:238 msg="total memory" size="17.1 GiB"
time=2025-10-11T20:44:45.943+02:00 level=INFO source=sched.go:481 msg="loaded runners" count=1
time=2025-10-11T20:44:45.943+02:00 level=INFO source=server.go:1271 msg="waiting for llama runner to start responding"
time=2025-10-11T20:44:45.948+02:00 level=INFO source=server.go:1305 msg="waiting for server to become available" status="llm server loading model"
time=2025-10-11T20:44:48.225+02:00 level=INFO source=server.go:1309 msg="llama runner started in 46.31 seconds"
[GIN] 2025/10/11 - 20:45:23 | 200 | 1m22s | | POST "/api/chat"

Relevant log output


OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.12.5

GiteaMirror added the memory, bug labels 2026-04-29 07:40:47 -05:00
Author
Owner

@dhiltgen commented on GitHub (Oct 12, 2025):

Could you share the log lines just before these? In particular, I'd like to see what the ...msg=offload library=CUDA... look like in the old and new versions. That will help us understand what is more likely to be causing it to load 1 less layer.
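If it helps with pulling those out, one quick way on Windows is PowerShell's Select-String against the server log (the path below assumes the default %LOCALAPPDATA%\Ollama\server.log location; adjust it if your install logs somewhere else):

Select-String -Path "$env:LOCALAPPDATA\Ollama\server.log" -Pattern "msg=offload" | Select-Object -Last 20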

Author
Owner

@DanoPTT commented on GitHub (Oct 13, 2025):

Hello,
Attached is server.log from v0.12.5 (mostly running the gemma model); gemma did not fit in the GPU.
server.log
server-1.log is from v0.12.3 with different models (gemma, deepseek), but at the end of the file there should be log entries for gemma, which fits in the GPU.
server-1.log

Author
Owner

@jessegross commented on GitHub (Oct 14, 2025):

With 0.12.3, there are cases where we do not reserve enough working space for intermediate calculations and we crash. This would likely occur in your case if you filled the full 64k context length and processed a max sized batch.

0.12.5 reserves this space to avoid the potential for a crash. In your case, you were right on the edge of what would fit in the GPU with this model before and now it is slightly over, causing one layer to be moved to the CPU, reducing performance. You could try turning on OLLAMA_FLASH_ATTENTION=1, which should reduce memory usage and likely allow full offloading again.
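For scale, the estimates in the logs above grew from 15.0 GiB total on v0.12.3 to 17.1 GiB on v0.12.5; assuming this is the 16 GB variant of the 5060 Ti, the new estimate no longer fits with full offload, which matches one layer spilling to the CPU. A minimal way to try the flash attention suggestion on Windows (a sketch; setx writes a user-level environment variable, so Ollama must be fully quit and restarted afterwards to pick it up):

setx OLLAMA_FLASH_ATTENTION 1

After restarting and reloading the model, the load request line in the log should show FlashAttention:true if the variable took effect.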

Reference: github-starred/ollama#54860