[GH-ISSUE #11983] Ollama not using all GPU memory to offload model #33716

Closed
opened 2026-04-22 16:38:54 -05:00 by GiteaMirror · 4 comments

Originally created by @josetesan on GitHub (Aug 20, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11983

What is the issue?

Hi!
I've noticed that my RTX 3070 with 8 GB is not being fully used by Ollama to serve models; part of the model is still being offloaded to the CPU.

This is the command line:

OLLAMA_DEBUG=1 OLLAMA_MAX_LOADED_MODELS=1 ollama serve

time=2025-08-20T11:44:17.649+02:00 level=INFO source=routes.go:1318 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/josete/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NEW_ESTIMATES:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"

When running ollama run --verbose qwen2.5vl, I see the following:

NAME                ID              SIZE      PROCESSOR          CONTEXT    UNTIL
qwen2.5vl:latest    5ced39dfa4ba    8.5 GB    41%/59% CPU/GPU    4096       4 minutes from now
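
For reference, the same split can be read from the ps API; a quick check, assuming the server is on the default port 11434:

curl -s http://localhost:11434/api/ps
# each loaded model reports "size" (total footprint) and "size_vram"
# (the portion resident on the GPU); size_vram/size should come out
# near the 59% shown above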

The nvidia-smi output is:

Wed Aug 20 11:54:45 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 575.64.05              Driver Version: 575.64.05      CUDA Version: 12.9     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3070        Off |   00000000:01:00.0  On |                  N/A |
| 33%   41C    P5             22W /  220W |    5223MiB /   8192MiB |     39%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A             986      G   /usr/lib/Xorg                           179MiB |
|    0   N/A  N/A            1827      G   cinnamon                                 64MiB |
|    0   N/A  N/A            7873      G   /usr/lib/firefox/firefox                182MiB |
|    0   N/A  N/A            9804      C   /usr/local/bin/ollama                  4744MiB |
+-----------------------------------------------------------------------------------------+
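
As a rough cross-check (MiB-to-GiB rounding aside), the ollama process size in nvidia-smi lines up with the scheduler's partial allocation from the log below:

4744 MiB / 1024 ≈ 4.6 GiB  ≈  memory.required.allocations="[4.7 GiB]"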

I've read about ROCR_VISIBLE_DEVICES, but unsetting it makes no difference.
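
(As far as I understand, ROCR_VISIBLE_DEVICES is ROCm/AMD-only, so it should be a no-op with an NVIDIA card; the equivalent knob on NVIDIA would be CUDA_VISIBLE_DEVICES, e.g.:)

CUDA_VISIBLE_DEVICES=0 OLLAMA_DEBUG=1 ollama serve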

Relevant log output

> ollama serve
time=2025-08-20T11:51:52.166+02:00 level=INFO source=images.go:477 msg="total blobs: 6"
time=2025-08-20T11:51:52.166+02:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2025-08-20T11:51:52.166+02:00 level=INFO source=routes.go:1371 msg="Listening on [::]:11434 (version 0.11.5)"
time=2025-08-20T11:51:52.167+02:00 level=DEBUG source=sched.go:121 msg="starting llm scheduler"
time=2025-08-20T11:51:52.167+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-08-20T11:51:52.167+02:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-08-20T11:51:52.167+02:00 level=DEBUG source=gpu.go:503 msg="Searching for GPU library" name=libcuda.so*
time=2025-08-20T11:51:52.167+02:00 level=DEBUG source=gpu.go:527 msg="gpu library search" globs="[/usr/local/lib/ollama/libcuda.so* /home/josete/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-08-20T11:51:52.189+02:00 level=DEBUG source=gpu.go:560 msg="discovered GPU libraries" paths="[/usr/lib/libcuda.so.575.64.05 /usr/lib32/libcuda.so.575.64.05 /usr/lib64/libcuda.so.575.64.05]"
initializing /usr/lib/libcuda.so.575.64.05
dlsym: cuInit - 0x7f8a32974640
dlsym: cuDriverGetVersion - 0x7f8a32974700
dlsym: cuDeviceGetCount - 0x7f8a32974880
dlsym: cuDeviceGet - 0x7f8a329747c0
dlsym: cuDeviceGetAttribute - 0x7f8a32974dc0
dlsym: cuDeviceGetUuid - 0x7f8a32974a00
dlsym: cuDeviceGetName - 0x7f8a32974940
dlsym: cuCtxCreate_v3 - 0x7f8a32975900
dlsym: cuMemGetInfo_v2 - 0x7f8a329785a0
dlsym: cuCtxDestroy - 0x7f8a329da4a0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f3a
CUDA driver version: 12.9
calling cuDeviceGetCount
device count 1
time=2025-08-20T11:51:52.333+02:00 level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=/usr/lib/libcuda.so.575.64.05
[GPU-c2e245dd-43c2-fb0c-f77c-93e3e5cd10c5] CUDA totalMem 7838mb
[GPU-c2e245dd-43c2-fb0c-f77c-93e3e5cd10c5] CUDA freeMem 7185mb
[GPU-c2e245dd-43c2-fb0c-f77c-93e3e5cd10c5] Compute Capability 8.6
time=2025-08-20T11:51:52.493+02:00 level=DEBUG source=amd_linux.go:422 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2025-08-20T11:51:52.493+02:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-c2e245dd-43c2-fb0c-f77c-93e3e5cd10c5 library=cuda variant=v12 compute=8.6 driver=12.9 name="NVIDIA GeForce RTX 3070" total="7.7 GiB" available="7.0 GiB"
time=2025-08-20T11:51:52.493+02:00 level=INFO source=routes.go:1412 msg="entering low vram mode" "total vram"="7.7 GiB" threshold="20.0 GiB"

> ollama run --verbose qwen2.5vl
[GIN] 2025/08/20 - 11:52:11 | 200 |      32.985µs |       127.0.0.1 | HEAD     "/"
time=2025-08-20T11:52:11.609+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.alignment default=32
[GIN] 2025/08/20 - 11:52:11 | 200 |   51.831294ms |       127.0.0.1 | POST     "/api/show"
time=2025-08-20T11:52:11.681+02:00 level=DEBUG source=gpu.go:393 msg="updating system memory data" before.total="15.5 GiB" before.free="12.7 GiB" before.free_swap="0 B" now.total="15.5 GiB" now.free="12.6 GiB" now.free_swap="0 B"
initializing /usr/lib/libcuda.so.575.64.05
dlsym: cuInit - 0x7f8a32974640
dlsym: cuDriverGetVersion - 0x7f8a32974700
dlsym: cuDeviceGetCount - 0x7f8a32974880
dlsym: cuDeviceGet - 0x7f8a329747c0
dlsym: cuDeviceGetAttribute - 0x7f8a32974dc0
dlsym: cuDeviceGetUuid - 0x7f8a32974a00
dlsym: cuDeviceGetName - 0x7f8a32974940
dlsym: cuCtxCreate_v3 - 0x7f8a32975900
dlsym: cuMemGetInfo_v2 - 0x7f8a329785a0
dlsym: cuCtxDestroy - 0x7f8a329da4a0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f3a
CUDA driver version: 12.9
calling cuDeviceGetCount
device count 1
time=2025-08-20T11:52:11.842+02:00 level=DEBUG source=gpu.go:443 msg="updating cuda memory data" gpu=GPU-c2e245dd-43c2-fb0c-f77c-93e3e5cd10c5 name="NVIDIA GeForce RTX 3070" overhead="0 B" before.total="7.7 GiB" before.free="7.0 GiB" now.total="7.7 GiB" now.free="7.0 GiB" now.used="682.4 MiB"
releasing cuda driver library
time=2025-08-20T11:52:11.860+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.alignment default=32
time=2025-08-20T11:52:11.861+02:00 level=DEBUG source=sched.go:208 msg="loading first model" model=/home/josete/.ollama/models/blobs/sha256-a99b7f834d754b88f122d865f32758ba9f0994a83f8363df2c1e71c17605a025
time=2025-08-20T11:52:11.923+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.alignment default=32
time=2025-08-20T11:52:11.925+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-08-20T11:52:11.925+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2025-08-20T11:52:11.925+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2025-08-20T11:52:11.925+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-08-20T11:52:11.925+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=qwen25vl.rope.dimension_count default=128
time=2025-08-20T11:52:11.925+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=qwen25vl.rope.freq_scale default=1
time=2025-08-20T11:52:11.925+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=qwen25vl.qwen25vl.vision.fullatt_block_indexes default="&{size:0 values:[7 15 23 31]}"
time=2025-08-20T11:52:11.925+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=qwen25vl.vision.max_pixels default=1003520
time=2025-08-20T11:52:11.925+02:00 level=DEBUG source=gpu.go:393 msg="updating system memory data" before.total="15.5 GiB" before.free="12.6 GiB" before.free_swap="0 B" now.total="15.5 GiB" now.free="12.6 GiB" now.free_swap="0 B"
initializing /usr/lib/libcuda.so.575.64.05
dlsym: cuInit - 0x7f8a32974640
dlsym: cuDriverGetVersion - 0x7f8a32974700
dlsym: cuDeviceGetCount - 0x7f8a32974880
dlsym: cuDeviceGet - 0x7f8a329747c0
dlsym: cuDeviceGetAttribute - 0x7f8a32974dc0
dlsym: cuDeviceGetUuid - 0x7f8a32974a00
dlsym: cuDeviceGetName - 0x7f8a32974940
dlsym: cuCtxCreate_v3 - 0x7f8a32975900
dlsym: cuMemGetInfo_v2 - 0x7f8a329785a0
dlsym: cuCtxDestroy - 0x7f8a329da4a0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f3a
CUDA driver version: 12.9
calling cuDeviceGetCount
device count 1
time=2025-08-20T11:52:12.065+02:00 level=DEBUG source=gpu.go:443 msg="updating cuda memory data" gpu=GPU-c2e245dd-43c2-fb0c-f77c-93e3e5cd10c5 name="NVIDIA GeForce RTX 3070" overhead="0 B" before.total="7.7 GiB" before.free="7.0 GiB" now.total="7.7 GiB" now.free="7.0 GiB" now.used="682.4 MiB"
releasing cuda driver library
time=2025-08-20T11:52:12.066+02:00 level=INFO source=server.go:383 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /home/josete/.ollama/models/blobs/sha256-a99b7f834d754b88f122d865f32758ba9f0994a83f8363df2c1e71c17605a025 --port 35267"
time=2025-08-20T11:52:12.066+02:00 level=DEBUG source=server.go:384 msg=subprocess OLLAMA_MAX_LOADED_MODELS=1 OLLAMA_DEBUG=1 CUDA_DIR=/opt/cuda OLLAMA_HOST=0.0.0.0 CUDA_PATH=/opt/cuda PATH=/home/josete/.nvm/versions/node/v20.18.0/bin:/home/josete/.jbang/bin:/home/josete/.pyenv/shims:/home/josete/.pyenv/bin:/home/josete/google-cloud-sdk/bin:/home/josete/.sdkman/candidates/visualvm/current/bin:/home/josete/.sdkman/candidates/mvnd/current/bin:/home/josete/.sdkman/candidates/maven/current/bin:/home/josete/.sdkman/candidates/kotlin/current/bin:/home/josete/.sdkman/candidates/kcctl/current/bin:/home/josete/.sdkman/candidates/java/current/bin:/home/josete/.sdkman/candidates/gradle/current/bin:/home/josete/.sdkman/candidates/btrace/current/bin:/home/josete/.local/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/opt/cuda/bin:/opt/cuda/nsight_compute:/opt/cuda/nsight_systems/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/usr/lib/rustup/bin:/home/josete/tools/idea-IU-222.4459.24/bin:/home/josete/go/go1.24.2/bin OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama
time=2025-08-20T11:52:12.067+02:00 level=DEBUG source=gpu.go:393 msg="updating system memory data" before.total="15.5 GiB" before.free="12.6 GiB" before.free_swap="0 B" now.total="15.5 GiB" now.free="12.5 GiB" now.free_swap="0 B"
initializing /usr/lib/libcuda.so.575.64.05
dlsym: cuInit - 0x7f8a32974640
dlsym: cuDriverGetVersion - 0x7f8a32974700
dlsym: cuDeviceGetCount - 0x7f8a32974880
dlsym: cuDeviceGet - 0x7f8a329747c0
dlsym: cuDeviceGetAttribute - 0x7f8a32974dc0
dlsym: cuDeviceGetUuid - 0x7f8a32974a00
dlsym: cuDeviceGetName - 0x7f8a32974940
dlsym: cuCtxCreate_v3 - 0x7f8a32975900
dlsym: cuMemGetInfo_v2 - 0x7f8a329785a0
dlsym: cuCtxDestroy - 0x7f8a329da4a0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f3a
CUDA driver version: 12.9
calling cuDeviceGetCount
device count 1
time=2025-08-20T11:52:12.077+02:00 level=INFO source=runner.go:1006 msg="starting ollama engine"
time=2025-08-20T11:52:12.077+02:00 level=INFO source=runner.go:1043 msg="Server listening on 127.0.0.1:35267"
time=2025-08-20T11:52:12.209+02:00 level=DEBUG source=gpu.go:443 msg="updating cuda memory data" gpu=GPU-c2e245dd-43c2-fb0c-f77c-93e3e5cd10c5 name="NVIDIA GeForce RTX 3070" overhead="0 B" before.total="7.7 GiB" before.free="7.0 GiB" now.total="7.7 GiB" now.free="7.0 GiB" now.used="682.4 MiB"
releasing cuda driver library
time=2025-08-20T11:52:12.209+02:00 level=INFO source=server.go:488 msg="system memory" total="15.5 GiB" free="12.5 GiB" free_swap="0 B"
time=2025-08-20T11:52:12.209+02:00 level=DEBUG source=memory.go:177 msg=evaluating library=cuda gpu_count=1 available="[7.0 GiB]"
time=2025-08-20T11:52:12.209+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=qwen25vl.vision.image_size default=0
time=2025-08-20T11:52:12.210+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=qwen25vl.vision.max_pixels default=1003520
time=2025-08-20T11:52:12.210+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=qwen25vl.attention.key_length default=128
time=2025-08-20T11:52:12.210+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=qwen25vl.attention.value_length default=128
time=2025-08-20T11:52:12.211+02:00 level=DEBUG source=memory.go:177 msg=evaluating library=cuda gpu_count=1 available="[7.0 GiB]"
time=2025-08-20T11:52:12.211+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=qwen25vl.vision.image_size default=0
time=2025-08-20T11:52:12.212+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=qwen25vl.vision.max_pixels default=1003520
time=2025-08-20T11:52:12.212+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=qwen25vl.attention.key_length default=128
time=2025-08-20T11:52:12.212+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=qwen25vl.attention.value_length default=128
time=2025-08-20T11:52:12.212+02:00 level=INFO source=server.go:531 msg=offload library=cuda layers.requested=-1 layers.model=29 layers.offload=28 layers.split=[28] memory.available="[7.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="8.0 GiB" memory.required.partial="4.7 GiB" memory.required.kv="224.0 MiB" memory.required.allocations="[4.7 GiB]" memory.weights.total="4.1 GiB" memory.weights.repeating="3.7 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="261.3 MiB" memory.graph.partial="261.3 MiB" projector.weights="1.2 GiB" projector.graph="1.6 GiB"
time=2025-08-20T11:52:12.213+02:00 level=INFO source=runner.go:925 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:6 GPULayers:28[ID:GPU-c2e245dd-43c2-fb0c-f77c-93e3e5cd10c5 Layers:28(0..27)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-08-20T11:52:12.239+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.alignment default=32
time=2025-08-20T11:52:12.241+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.name default=""
time=2025-08-20T11:52:12.241+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.description default=""
time=2025-08-20T11:52:12.241+02:00 level=INFO source=ggml.go:130 msg="" architecture=qwen25vl file_type=Q4_K_M name="" description="" num_tensors=858 num_key_values=36
time=2025-08-20T11:52:12.241+02:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3070, compute capability 8.6, VMM: yes, ID: GPU-c2e245dd-43c2-fb0c-f77c-93e3e5cd10c5
load_backend: loaded CUDA backend from /usr/local/lib/ollama/libggml-cuda.so
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
time=2025-08-20T11:52:12.286+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-08-20T11:52:12.371+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-08-20T11:52:12.371+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2025-08-20T11:52:12.371+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2025-08-20T11:52:12.371+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-08-20T11:52:12.371+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=qwen25vl.rope.dimension_count default=128
time=2025-08-20T11:52:12.371+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=qwen25vl.rope.freq_scale default=1
time=2025-08-20T11:52:12.371+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=qwen25vl.qwen25vl.vision.fullatt_block_indexes default="&{size:0 values:[7 15 23 31]}"
time=2025-08-20T11:52:12.371+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=qwen25vl.vision.max_pixels default=1003520
time=2025-08-20T11:52:12.591+02:00 level=DEBUG source=vocabulary.go:52 msg="adding bos token to prompt" id=[0]
time=2025-08-20T11:52:12.592+02:00 level=DEBUG source=ggml.go:795 msg="compute graph" nodes=1748 splits=1
time=2025-08-20T11:52:12.601+02:00 level=DEBUG source=ggml.go:795 msg="compute graph" nodes=1073 splits=3
time=2025-08-20T11:52:12.601+02:00 level=INFO source=ggml.go:486 msg="offloading 28 repeating layers to GPU"
time=2025-08-20T11:52:12.601+02:00 level=INFO source=ggml.go:490 msg="offloading output layer to CPU"
time=2025-08-20T11:52:12.601+02:00 level=INFO source=ggml.go:497 msg="offloaded 28/29 layers to GPU"
time=2025-08-20T11:52:12.602+02:00 level=INFO source=backend.go:310 msg="model weights" device=CUDA0 size="3.7 GiB"
time=2025-08-20T11:52:12.602+02:00 level=INFO source=backend.go:315 msg="model weights" device=CPU size="1.9 GiB"
time=2025-08-20T11:52:12.602+02:00 level=INFO source=backend.go:321 msg="kv cache" device=CUDA0 size="224.0 MiB"
time=2025-08-20T11:52:12.602+02:00 level=INFO source=backend.go:332 msg="compute graph" device=CUDA0 size="622.6 MiB"
time=2025-08-20T11:52:12.602+02:00 level=INFO source=backend.go:337 msg="compute graph" device=CPU size="1.6 GiB"
time=2025-08-20T11:52:12.602+02:00 level=INFO source=backend.go:342 msg="total memory" size="7.9 GiB"
time=2025-08-20T11:52:12.602+02:00 level=INFO source=sched.go:473 msg="loaded runners" count=1
time=2025-08-20T11:52:12.602+02:00 level=INFO source=server.go:1234 msg="waiting for llama runner to start responding"
time=2025-08-20T11:52:12.603+02:00 level=INFO source=server.go:1268 msg="waiting for server to become available" status="llm server loading model"
time=2025-08-20T11:52:12.603+02:00 level=DEBUG source=server.go:1278 msg="model load progress 0.00"
time=2025-08-20T11:52:12.855+02:00 level=DEBUG source=server.go:1278 msg="model load progress 0.23"
time=2025-08-20T11:52:13.106+02:00 level=DEBUG source=server.go:1278 msg="model load progress 0.45"
time=2025-08-20T11:52:13.357+02:00 level=DEBUG source=server.go:1278 msg="model load progress 0.67"
time=2025-08-20T11:52:13.612+02:00 level=DEBUG source=server.go:1278 msg="model load progress 0.83"
time=2025-08-20T11:52:13.863+02:00 level=DEBUG source=server.go:1278 msg="model load progress 0.98"
time=2025-08-20T11:52:14.114+02:00 level=INFO source=server.go:1272 msg="llama runner started in 2.05 seconds"
time=2025-08-20T11:52:14.114+02:00 level=DEBUG source=sched.go:485 msg="finished setting up" runner.name=registry.ollama.ai/library/qwen2.5vl:latest runner.inference=cuda runner.devices=1 runner.size="8.0 GiB" runner.vram="4.7 GiB" runner.parallel=1 runner.pid=9804 runner.model=/home/josete/.ollama/models/blobs/sha256-a99b7f834d754b88f122d865f32758ba9f0994a83f8363df2c1e71c17605a025 runner.num_ctx=4096
[GIN] 2025/08/20 - 11:52:14 | 200 |  2.501134435s |       127.0.0.1 | POST     "/api/generate"
time=2025-08-20T11:52:14.115+02:00 level=DEBUG source=sched.go:493 msg="context for request finished"
time=2025-08-20T11:52:14.115+02:00 level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/qwen2.5vl:latest runner.inference=cuda runner.devices=1 runner.size="8.0 GiB" runner.vram="4.7 GiB" runner.parallel=1 runner.pid=9804 runner.model=/home/josete/.ollama/models/blobs/sha256-a99b7f834d754b88f122d865f32758ba9f0994a83f8363df2c1e71c17605a025 runner.num_ctx=4096 duration=5m0s
time=2025-08-20T11:52:14.115+02:00 level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=registry.ollama.ai/library/qwen2.5vl:latest runner.inference=cuda runner.devices=1 runner.size="8.0 GiB" runner.vram="4.7 GiB" runner.parallel=1 runner.pid=9804 runner.model=/home/josete/.ollama/models/blobs/sha256-a99b7f834d754b88f122d865f32758ba9f0994a83f8363df2c1e71c17605a025 runner.num_ctx=4096 refCount=0

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.11.5

GiteaMirror added the bug label 2026-04-22 16:38:54 -05:00

@jessegross commented on GitHub (Aug 20, 2025):

time=2025-08-20T11:52:12.212+02:00 level=INFO source=server.go:531 msg=offload library=cuda layers.requested=-1 layers.model=29 layers.offload=28 layers.split=[28] memory.available="[7.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="8.0 GiB" memory.required.partial="4.7 GiB" memory.required.kv="224.0 MiB" memory.required.allocations="[4.7 GiB]" memory.weights.total="4.1 GiB" memory.weights.repeating="3.7 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="261.3 MiB" memory.graph.partial="261.3 MiB" projector.weights="1.2 GiB" projector.graph="1.6 GiB"

All but the last layer have been offloaded to the GPU. The last layer includes the vision projector, which is 2.8 GiB. Since the projector must be offloaded as a single unit or not at all, it does not fit in the remaining space on your GPU and has been placed on the CPU.
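
Working through the numbers in the offload log above (treating the scheduler's estimates as directly comparable):

projector = projector.weights + projector.graph = 1.2 GiB + 1.6 GiB = 2.8 GiB
partial (28 layers, fits on GPU)    = memory.required.partial = 4.7 GiB
partial + projector                 = 7.5 GiB  >  7.0 GiB available
full (all 29 layers + projector)    = memory.required.full    = 8.0 GiB  >  7.0 GiB available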


@josetesan commented on GitHub (Aug 20, 2025):

I tried using the same model in LM Studio; it did fit entirely in VRAM. What could be the issue?


@jessegross commented on GitHub (Aug 20, 2025):

You could try setting OLLAMA_NEW_ESTIMATES=1, which sometimes produces tighter memory allocations and might help given how close it is.

I believe that LM Studio, which uses llama.cpp internally, does not preallocate memory for vision calculations, so it may crash at runtime depending on the image resolution, etc. Ollama tries to avoid this situation.
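
A minimal way to try it (restart the server with the flag set, then re-check the split):

OLLAMA_NEW_ESTIMATES=1 OLLAMA_DEBUG=1 ollama serve
# in another shell:
ollama run --verbose qwen2.5vl
ollama ps    # PROCESSOR should now ideally report 100% GPU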


@poly2it commented on GitHub (Sep 18, 2025):

Running on Windows, it seems that when we load gemma3:4b-it-qat, Ollama offloads only 30/35 layers to GPU memory (CUDA), using about half of the 4 GiB of dedicated VRAM available.
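
If you want to experiment with forcing a specific layer count despite the estimate (at the risk of the runtime out-of-memory failures the estimates exist to prevent), the num_gpu parameter can be set interactively, e.g.:

ollama run gemma3:4b-it-qat
>>> /set parameter num_gpu 35
>>> /save gemma3-fullgpu

(/save keeps the setting in a new local model tag.)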

Reference: github-starred/ollama#33716