[GH-ISSUE #11819] Incorrect offloading after Ollama update #33604

Closed
opened 2026-04-22 16:27:46 -05:00 by GiteaMirror · 12 comments
Owner

Originally created by @Jakaboii on GitHub (Aug 8, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11819

What is the issue?

Previously I could load 100% of my gemma3:12b model onto the GPU and it would run fine. Recently I updated my NVIDIA drivers from 575 to 580.65.06 and Ollama from 0.10.1 to 0.11.4, and when I ran gemma3 it started loading 23%/77% CPU/GPU instead of 100% GPU.

I tried rolling back my drivers and Ollama to versions that previously worked, but the issue persisted. Setting num_gpu to 49 didn't fix it either. I tested with gemma3:4b and it loads 100% on my GPU.

Relevant log output

Aug 08 21:55:55 j-360 ollama[1617]: time=2025-08-08T21:55:55.000+01:00 level=INFO source=server.go:135 msg="system memory" total="30.9 GiB" free="26.6 GiB" free_swap="2.0 GiB"
Aug 08 21:55:55 j-360 ollama[1617]: time=2025-08-08T21:55:55.001+01:00 level=INFO source=server.go:175 msg=offload library=cuda layers.requested=49 layers.model=49 layers.offload=48 layers.split="" memory.available="[10.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.2 GiB" memory.required.partial="8.6 GiB" memory.required.kv="736.0 MiB" memory.required.allocations="[8.6 GiB]" memory.weights.total="6.8 GiB" memory.weights.repeating="6.0 GiB" memory.weights.nonrepeating="787.5 MiB" memory.graph.full="519.5 MiB" memory.graph.partial="1.3 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
Aug 08 21:55:55 j-360 ollama[1617]: time=2025-08-08T21:55:55.039+01:00 level=INFO source=server.go:438 msg="starting llama server" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de --ctx-size 4096 --batch-size 512 --n-gpu-layers 49 --threads 6 --parallel 1 --port 43023"
Aug 08 21:55:55 j-360 ollama[1617]: time=2025-08-08T21:55:55.040+01:00 level=INFO source=sched.go:481 msg="loaded runners" count=1
Aug 08 21:55:55 j-360 ollama[1617]: time=2025-08-08T21:55:55.040+01:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
Aug 08 21:55:55 j-360 ollama[1617]: time=2025-08-08T21:55:55.040+01:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
Aug 08 21:55:55 j-360 ollama[1617]: time=2025-08-08T21:55:55.047+01:00 level=INFO source=runner.go:925 msg="starting ollama engine"
Aug 08 21:55:55 j-360 ollama[1617]: time=2025-08-08T21:55:55.047+01:00 level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:43023"
Aug 08 21:55:55 j-360 ollama[1617]: time=2025-08-08T21:55:55.091+01:00 level=INFO source=ggml.go:92 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1065 num_key_values=37
Aug 08 21:55:55 j-360 ollama[1617]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Aug 08 21:55:55 j-360 ollama[1617]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Aug 08 21:55:55 j-360 ollama[1617]: ggml_cuda_init: found 1 CUDA devices:
Aug 08 21:55:55 j-360 ollama[1617]:   Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
Aug 08 21:55:55 j-360 ollama[1617]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/libggml-cuda.so
Aug 08 21:55:55 j-360 ollama[1617]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Aug 08 21:55:55 j-360 ollama[1617]: time=2025-08-08T21:55:55.144+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Aug 08 21:55:55 j-360 ollama[1617]: time=2025-08-08T21:55:55.244+01:00 level=INFO source=ggml.go:365 msg="offloading 48 repeating layers to GPU"
Aug 08 21:55:55 j-360 ollama[1617]: time=2025-08-08T21:55:55.244+01:00 level=INFO source=ggml.go:371 msg="offloading output layer to GPU"
Aug 08 21:55:55 j-360 ollama[1617]: time=2025-08-08T21:55:55.244+01:00 level=INFO source=ggml.go:376 msg="offloaded 49/49 layers to GPU"
Aug 08 21:55:55 j-360 ollama[1617]: time=2025-08-08T21:55:55.244+01:00 level=INFO source=ggml.go:379 msg="model weights" buffer=CPU size="787.5 MiB"
Aug 08 21:55:55 j-360 ollama[1617]: time=2025-08-08T21:55:55.244+01:00 level=INFO source=ggml.go:379 msg="model weights" buffer=CUDA0 size="7.6 GiB"
Aug 08 21:55:55 j-360 ollama[1617]: time=2025-08-08T21:55:55.291+01:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
Aug 08 21:55:55 j-360 ollama[1617]: time=2025-08-08T21:55:55.385+01:00 level=INFO source=ggml.go:668 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="1.1 GiB"
Aug 08 21:55:55 j-360 ollama[1617]: time=2025-08-08T21:55:55.385+01:00 level=INFO source=ggml.go:668 msg="compute graph" backend=CPU buffer_type=CPU size="0 B"
Aug 08 21:55:55 j-360 ollama[1617]: time=2025-08-08T21:55:55.395+01:00 level=INFO source=ggml.go:668 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="1.1 GiB"
Aug 08 21:55:55 j-360 ollama[1617]: time=2025-08-08T21:55:55.395+01:00 level=INFO source=ggml.go:668 msg="compute graph" backend=CPU buffer_type=CPU size="7.5 MiB"
Aug 08 21:55:56 j-360 ollama[1617]: time=2025-08-08T21:55:56.294+01:00 level=INFO source=server.go:637 msg="llama runner started in 1.25 seconds"
Aug 08 21:56:00 j-360 ollama[1617]: [GIN] 2025/08/08 - 21:56:00 | 200 |      19.663µs |       127.0.0.1 | HEAD     "/"
Aug 08 21:56:00 j-360 ollama[1617]: [GIN] 2025/08/08 - 21:56:00 | 200 |      21.173µs |       127.0.0.1 | GET      "/api/ps"
Aug 08 21:56:12 j-360 ollama[1617]: [GIN] 2025/08/08 - 21:56:12 | 200 | 18.062416735s |      172.17.0.3 | POST     "/api/chat"
Aug 08 21:56:19 j-360 ollama[1617]: [GIN] 2025/08/08 - 21:56:19 | 200 |  6.822471986s |      172.17.0.3 | POST     "/api/chat"
Aug 08 21:57:02 j-360 ollama[1617]: [GIN] 2025/08/08 - 21:57:02 | 200 |  17.30476942s |      172.17.0.3 | POST     "/api/chat"
Aug 08 21:57:08 j-360 ollama[1617]: [GIN] 2025/08/08 - 21:57:08 | 200 |  5.963027495s |      172.17.0.3 | POST     "/api/chat"

OS

Linux Mint 22.1

GPU

NVIDIA GeForce RTX 3060 12GB

CPU

AMD Ryzen 5 7600X

Ollama version

0.11.4

GiteaMirror added the bug label 2026-04-22 16:27:46 -05:00

@rick-github commented on GitHub (Aug 8, 2025):

Aug 08 21:55:55 j-360 ollama[1617]: time=2025-08-08T21:55:55.244+01:00 level=INFO source=ggml.go:376 msg="offloaded 49/49 layers to GPU"

The logs show all layers offloaded to the GPU.


@Jakaboii commented on GitHub (Aug 8, 2025):

I attempted to specify num_gpu within Ollama and it didn't help (more specifically, in the Open WebUI GUI, but the issue is reproduced in the terminal).

Here is the most recent log showing that it is not offloading correctly:

Aug 08 22:19:15 j-360 ollama[1617]: time=2025-08-08T22:19:15.014+01:00 level=INFO source=server.go:135 msg="system memory" total="30.9 GiB" free="26.4 GiB" free_swap="2.0 GiB"
Aug 08 22:19:15 j-360 ollama[1617]: time=2025-08-08T22:19:15.015+01:00 level=INFO source=server.go:175 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=48 layers.split="" memory.available="[10.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.2 GiB" memory.required.partial="8.6 GiB" memory.required.kv="736.0 MiB" memory.required.allocations="[8.6 GiB]" memory.weights.total="6.8 GiB" memory.weights.repeating="6.0 GiB" memory.weights.nonrepeating="787.5 MiB" memory.graph.full="519.5 MiB" memory.graph.partial="1.3 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
Aug 08 22:19:15 j-360 ollama[1617]: time=2025-08-08T22:19:15.054+01:00 level=INFO source=server.go:438 msg="starting llama server" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de --ctx-size 4096 --batch-size 512 --n-gpu-layers 48 --threads 6 --parallel 1 --port 37301"
Aug 08 22:19:15 j-360 ollama[1617]: time=2025-08-08T22:19:15.055+01:00 level=INFO source=sched.go:481 msg="loaded runners" count=1
Aug 08 22:19:15 j-360 ollama[1617]: time=2025-08-08T22:19:15.055+01:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
Aug 08 22:19:15 j-360 ollama[1617]: time=2025-08-08T22:19:15.055+01:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
Aug 08 22:19:15 j-360 ollama[1617]: time=2025-08-08T22:19:15.062+01:00 level=INFO source=runner.go:925 msg="starting ollama engine"
Aug 08 22:19:15 j-360 ollama[1617]: time=2025-08-08T22:19:15.062+01:00 level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:37301"
Aug 08 22:19:15 j-360 ollama[1617]: time=2025-08-08T22:19:15.103+01:00 level=INFO source=ggml.go:92 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1065 num_key_values=37
Aug 08 22:19:15 j-360 ollama[1617]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Aug 08 22:19:15 j-360 ollama[1617]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Aug 08 22:19:15 j-360 ollama[1617]: ggml_cuda_init: found 1 CUDA devices:
Aug 08 22:19:15 j-360 ollama[1617]:   Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
Aug 08 22:19:15 j-360 ollama[1617]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/libggml-cuda.so
Aug 08 22:19:15 j-360 ollama[1617]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Aug 08 22:19:15 j-360 ollama[1617]: time=2025-08-08T22:19:15.155+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Aug 08 22:19:15 j-360 ollama[1617]: time=2025-08-08T22:19:15.251+01:00 level=INFO source=ggml.go:365 msg="offloading 48 repeating layers to GPU"
Aug 08 22:19:15 j-360 ollama[1617]: time=2025-08-08T22:19:15.251+01:00 level=INFO source=ggml.go:369 msg="offloading output layer to CPU"
Aug 08 22:19:15 j-360 ollama[1617]: time=2025-08-08T22:19:15.251+01:00 level=INFO source=ggml.go:376 msg="offloaded 48/49 layers to GPU"
Aug 08 22:19:15 j-360 ollama[1617]: time=2025-08-08T22:19:15.251+01:00 level=INFO source=ggml.go:379 msg="model weights" buffer=CPU size="2.3 GiB"
Aug 08 22:19:15 j-360 ollama[1617]: time=2025-08-08T22:19:15.251+01:00 level=INFO source=ggml.go:379 msg="model weights" buffer=CUDA0 size="6.0 GiB"
Aug 08 22:19:15 j-360 ollama[1617]: time=2025-08-08T22:19:15.306+01:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
Aug 08 22:19:15 j-360 ollama[1617]: time=2025-08-08T22:19:15.387+01:00 level=INFO source=ggml.go:668 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="0 B"
Aug 08 22:19:15 j-360 ollama[1617]: time=2025-08-08T22:19:15.387+01:00 level=INFO source=ggml.go:668 msg="compute graph" backend=CPU buffer_type=CPU size="1.1 GiB"
Aug 08 22:19:15 j-360 ollama[1617]: time=2025-08-08T22:19:15.399+01:00 level=INFO source=ggml.go:668 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="181.3 MiB"
Aug 08 22:19:15 j-360 ollama[1617]: time=2025-08-08T22:19:15.399+01:00 level=INFO source=ggml.go:668 msg="compute graph" backend=CPU buffer_type=CPU size="1.1 GiB"
Aug 08 22:19:16 j-360 ollama[1617]: time=2025-08-08T22:19:16.308+01:00 level=INFO source=server.go:637 msg="llama runner started in 1.25 seconds"
Aug 08 22:19:20 j-360 ollama[1617]: [GIN] 2025/08/08 - 22:19:20 | 200 |      16.959µs |       127.0.0.1 | HEAD     "/"
Aug 08 22:19:20 j-360 ollama[1617]: [GIN] 2025/08/08 - 22:19:20 | 200 |      18.059µs |       127.0.0.1 | GET      "/api/ps"
Aug 08 22:19:22 j-360 ollama[1617]: [GIN] 2025/08/08 - 22:19:22 | 200 |      22.068µs |       127.0.0.1 | HEAD     "/"
Aug 08 22:19:22 j-360 ollama[1617]: [GIN] 2025/08/08 - 22:19:22 | 200 |      17.549µs |       127.0.0.1 | GET      "/api/ps"
Aug 08 22:19:50 j-360 ollama[1617]: [GIN] 2025/08/08 - 22:19:50 | 200 | 35.590203084s |      172.17.0.3 | POST     "/api/chat"

Output of ollama ps

Image: https://github.com/user-attachments/assets/a4f1bb78-fdd9-4574-b2ca-5c958da21539

@rick-github commented on GitHub (Aug 8, 2025):

Aug 08 22:19:15 j-360 ollama[1617]: time=2025-08-08T22:19:15.015+01:00 level=INFO source=server.go:175 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=48 layers.split="" memory.available="[10.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.2 GiB" memory.required.partial="8.6 GiB" memory.required.kv="736.0 MiB" memory.required.allocations="[8.6 GiB]" memory.weights.total="6.8 GiB" memory.weights.repeating="6.0 GiB" memory.weights.nonrepeating="787.5 MiB" memory.graph.full="519.5 MiB" memory.graph.partial="1.3 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"

ollama estimates that 11.2G is required to load the model fully in VRAM. Only 10.9G is available, so only 48 of 49 layers are loaded in VRAM. You could reduce OLLAMA_CONTEXT_LENGTH, then set OLLAMA_FLASH_ATTENTION and OLLAMA_KV_CACHE_TYPE to reduce the memory footprint (see the FAQ: https://github.com/ollama/ollama/blob/main/docs/faq.md).

Alternatively, since only 1 layer is offloaded to system RAM, you could set num_gpu to 49 to force ollama to load all layers in VRAM.
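
A minimal sketch of how these settings might be applied, assuming the standard Linux systemd install; the specific values are illustrative, not recommendations:

```shell
# Server-wide settings: add environment overrides to the ollama systemd unit.
sudo systemctl edit ollama.service
# In the override file, add (example values):
#   [Service]
#   Environment="OLLAMA_CONTEXT_LENGTH=2048"
#   Environment="OLLAMA_FLASH_ATTENTION=1"
#   Environment="OLLAMA_KV_CACHE_TYPE=q8_0"
sudo systemctl restart ollama

# Per-request alternative: pass num_gpu in the request options.
curl http://localhost:11434/api/generate -d '{
  "model": "gemma3:12b",
  "prompt": "hello",
  "options": { "num_gpu": 49 }
}'
```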


@Jakaboii commented on GitHub (Aug 8, 2025):

Would there be any way of increasing the amount available? I still have 1.5GB of VRAM free, and this model was fully offloading before I updated Ollama and the NVIDIA drivers (so I am unsure what caused the issue). The difference in speed is noticeable even after setting num_gpu.


@rick-github commented on GitHub (Aug 8, 2025):

Set num_gpu in one of the ways described in https://github.com/ollama/ollama/issues/6950#issuecomment-2373663650.
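
For example, a sketch of two of the approaches described in that comment (the derived model name gemma3-gpu is just illustrative):

```shell
# Persist num_gpu in a derived model via a Modelfile.
cat > Modelfile <<'EOF'
FROM gemma3:12b
PARAMETER num_gpu 49
EOF
ollama create gemma3-gpu -f Modelfile

# Or set it interactively in the REPL and save a copy of the model:
#   ollama run gemma3:12b
#   >>> /set parameter num_gpu 49
#   >>> /save gemma3-gpu
```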


@Jakaboii commented on GitHub (Aug 8, 2025):

Reducing num_ctx to 2048 and/or setting num_gpu to 49 still doesn't change the offloading.


@rick-github commented on GitHub (Aug 8, 2025):

Logs after changing num_ctx?


@Jakaboii commented on GitHub (Aug 8, 2025):

The log shows it offloading properly, but ollama ps still shows 23%/77% CPU/GPU and it is still slower.

Aug 08 23:02:37 j-360 ollama[1617]: time=2025-08-08T23:02:37.145+01:00 level=INFO source=server.go:135 msg="system memory" total="30.9 GiB" free="25.4 GiB" free_swap="2.0 GiB"
Aug 08 23:02:37 j-360 ollama[1617]: time=2025-08-08T23:02:37.146+01:00 level=INFO source=server.go:175 msg=offload library=cuda layers.requested=49 layers.model=49 layers.offload=48 layers.split="" memory.available="[10.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.0 GiB" memory.required.partial="8.5 GiB" memory.required.kv="608.0 MiB" memory.required.allocations="[8.5 GiB]" memory.weights.total="6.8 GiB" memory.weights.repeating="6.0 GiB" memory.weights.nonrepeating="787.5 MiB" memory.graph.full="519.5 MiB" memory.graph.partial="1.3 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
Aug 08 23:02:37 j-360 ollama[1617]: time=2025-08-08T23:02:37.187+01:00 level=INFO source=server.go:438 msg="starting llama server" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de --ctx-size 2048 --batch-size 512 --n-gpu-layers 49 --threads 6 --parallel 1 --port 39041"
Aug 08 23:02:37 j-360 ollama[1617]: time=2025-08-08T23:02:37.187+01:00 level=INFO source=sched.go:481 msg="loaded runners" count=1
Aug 08 23:02:37 j-360 ollama[1617]: time=2025-08-08T23:02:37.187+01:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
Aug 08 23:02:37 j-360 ollama[1617]: time=2025-08-08T23:02:37.187+01:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
Aug 08 23:02:37 j-360 ollama[1617]: time=2025-08-08T23:02:37.195+01:00 level=INFO source=runner.go:925 msg="starting ollama engine"
Aug 08 23:02:37 j-360 ollama[1617]: time=2025-08-08T23:02:37.195+01:00 level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:39041"
Aug 08 23:02:37 j-360 ollama[1617]: time=2025-08-08T23:02:37.236+01:00 level=INFO source=ggml.go:92 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1065 num_key_values=37
Aug 08 23:02:37 j-360 ollama[1617]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Aug 08 23:02:37 j-360 ollama[1617]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Aug 08 23:02:37 j-360 ollama[1617]: ggml_cuda_init: found 1 CUDA devices:
Aug 08 23:02:37 j-360 ollama[1617]:   Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
Aug 08 23:02:37 j-360 ollama[1617]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/libggml-cuda.so
Aug 08 23:02:37 j-360 ollama[1617]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Aug 08 23:02:37 j-360 ollama[1617]: time=2025-08-08T23:02:37.289+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Aug 08 23:02:37 j-360 ollama[1617]: time=2025-08-08T23:02:37.371+01:00 level=INFO source=ggml.go:365 msg="offloading 48 repeating layers to GPU"
Aug 08 23:02:37 j-360 ollama[1617]: time=2025-08-08T23:02:37.371+01:00 level=INFO source=ggml.go:371 msg="offloading output layer to GPU"
Aug 08 23:02:37 j-360 ollama[1617]: time=2025-08-08T23:02:37.371+01:00 level=INFO source=ggml.go:376 msg="offloaded 49/49 layers to GPU"
Aug 08 23:02:37 j-360 ollama[1617]: time=2025-08-08T23:02:37.371+01:00 level=INFO source=ggml.go:379 msg="model weights" buffer=CUDA0 size="7.6 GiB"
Aug 08 23:02:37 j-360 ollama[1617]: time=2025-08-08T23:02:37.371+01:00 level=INFO source=ggml.go:379 msg="model weights" buffer=CPU size="787.5 MiB"
Aug 08 23:02:37 j-360 ollama[1617]: time=2025-08-08T23:02:37.439+01:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
Aug 08 23:02:37 j-360 ollama[1617]: time=2025-08-08T23:02:37.510+01:00 level=INFO source=ggml.go:668 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="1.1 GiB"
Aug 08 23:02:37 j-360 ollama[1617]: time=2025-08-08T23:02:37.510+01:00 level=INFO source=ggml.go:668 msg="compute graph" backend=CPU buffer_type=CPU size="0 B"
Aug 08 23:02:37 j-360 ollama[1617]: time=2025-08-08T23:02:37.518+01:00 level=INFO source=ggml.go:668 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="1.1 GiB"
Aug 08 23:02:37 j-360 ollama[1617]: time=2025-08-08T23:02:37.518+01:00 level=INFO source=ggml.go:668 msg="compute graph" backend=CPU buffer_type=CPU size="7.5 MiB"
Aug 08 23:02:38 j-360 ollama[1617]: time=2025-08-08T23:02:38.441+01:00 level=INFO source=server.go:637 msg="llama runner started in 1.25 seconds"
Aug 08 23:02:38 j-360 ollama[1617]: [GIN] 2025/08/08 - 23:02:38 | 200 |  2.747406422s |       127.0.0.1 | POST     "/api/chat"
Aug 08 23:02:41 j-360 ollama[1617]: [GIN] 2025/08/08 - 23:02:41 | 200 |      15.449µs |       127.0.0.1 | HEAD     "/"
Aug 08 23:02:41 j-360 ollama[1617]: [GIN] 2025/08/08 - 23:02:41 | 200 |       16.76µs |       127.0.0.1 | GET      "/api/ps"
Aug 08 23:03:26 j-360 ollama[1617]: [GIN] 2025/08/08 - 23:03:26 | 200 |      17.269µs |       127.0.0.1 | HEAD     "/"
Aug 08 23:03:26 j-360 ollama[1617]: [GIN] 2025/08/08 - 23:03:26 | 200 |       15.75µs |       127.0.0.1 | GET      "/api/ps"
Aug 08 23:03:32 j-360 ollama[1617]: [GIN] 2025/08/08 - 23:03:32 | 200 |      21.789µs |       127.0.0.1 | HEAD     "/"
Aug 08 23:03:32 j-360 ollama[1617]: [GIN] 2025/08/08 - 23:03:32 | 200 |      338.07µs |       127.0.0.1 | POST     "/api/generate"
Aug 08 23:03:34 j-360 ollama[1617]: [GIN] 2025/08/08 - 23:03:34 | 200 |      15.749µs |       127.0.0.1 | HEAD     "/"
Aug 08 23:03:34 j-360 ollama[1617]: [GIN] 2025/08/08 - 23:03:34 | 200 |        8.38µs |       127.0.0.1 | GET      "/api/ps"

@rick-github commented on GitHub (Aug 8, 2025):

--n-gpu-layers 49

All 49 layers are offloaded. The output of ollama ps is incorrect because num_gpu has been overridden.
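
One way to confirm where the weights actually ended up, independent of the ollama ps estimate, is to check VRAM usage directly while the model is loaded (a rough sketch; the expected figure is approximate):

```shell
# Report current GPU memory use; run while the gemma3:12b runner is loaded.
nvidia-smi --query-gpu=memory.used,memory.total --format=csv
# Roughly 8-9 GiB used on the 12 GB RTX 3060 would be consistent with all
# 49 layers (weights + KV cache + compute graph) resident on the GPU.
```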


@Jakaboii commented on GitHub (Aug 8, 2025):

I do see improved performance, but it is still slower than before. ollama ps also used to show 100% GPU, and I didn't have to set num_gpu; it just worked. I presume that, for whatever reason, there is no way to go back to how it worked previously?


@rick-github commented on GitHub (Aug 8, 2025):

There is work underway to improve the memory estimation logic; in your case it seems to have made the estimate more conservative, so the only options are to stop other processes from using 1.1G of VRAM or to override num_gpu. However, there is an overhaul of the memory system in progress (#11090) which should result in much more accurate estimates.


@Jakaboii commented on GitHub (Aug 8, 2025):

I understand. It could just be that updating Open WebUI, among other things, tipped memory usage slightly beyond the threshold. Thanks!

Reference: github-starred/ollama#33604