[GH-ISSUE #13343] Model seems to run completely on CPU instead of GPU #34572

Closed
opened 2026-04-22 18:16:00 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @wuyukai0403 on GitHub (Dec 5, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13343

What is the issue?

The qwen3 model runs completely on the CPU instead of the GPU, as shown by both `ollama ps` and `nvidia-smi`.
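For reference, the CPU/GPU split mentioned above can be read straight from the PROCESSOR column of `ollama ps`. A minimal sketch of checking that column programmatically — the sample output below is illustrative only, not captured from this machine:

```python
def cpu_only_models(ps_output: str) -> list[str]:
    """Return names of models whose PROCESSOR column reports 100% CPU."""
    rows = ps_output.strip().splitlines()[1:]  # skip the header row
    return [row.split()[0] for row in rows if "100% CPU" in row]

# Hypothetical text in the shape `ollama ps` prints; a healthy GPU load
# would show e.g. "100% GPU" here instead.
sample = (
    "NAME            ID              SIZE      PROCESSOR    UNTIL\n"
    "qwen3:latest    a3de86cd1c13    5.7 GB    100% CPU     4 minutes from now\n"
)
print(cpu_only_models(sample))  # → ['qwen3:latest']
```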

Relevant log output

time=2025-12-05T20:51:05.169+08:00 level=INFO source=routes.go:1466 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/run/media/wyk/e9995e69-f6f8-4b6f-8fe4-c2b4ff56d221/data/AI/LLMs/ OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-12-05T20:51:05.169+08:00 level=INFO source=images.go:518 msg="total blobs: 10"
time=2025-12-05T20:51:05.170+08:00 level=INFO source=images.go:525 msg="total unused blobs removed: 0"
time=2025-12-05T20:51:05.170+08:00 level=INFO source=routes.go:1519 msg="Listening on 127.0.0.1:11434 (version 0.12.0)"
time=2025-12-05T20:51:05.170+08:00 level=DEBUG source=sched.go:121 msg="starting llm scheduler"
time=2025-12-05T20:51:05.170+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-12-05T20:51:05.171+08:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-12-05T20:51:05.171+08:00 level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=libcuda.so*
time=2025-12-05T20:51:05.171+08:00 level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[/usr/local/lib/ollama/libcuda.so* /home/wyk/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-12-05T20:51:05.185+08:00 level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths="[/usr/lib/libcuda.so.580.105.08 /usr/lib64/libcuda.so.580.105.08]"
initializing /usr/lib/libcuda.so.580.105.08
dlsym: cuInit - 0x7f2e36505cb0
dlsym: cuDriverGetVersion - 0x7f2e36505d70
dlsym: cuDeviceGetCount - 0x7f2e36505ef0
dlsym: cuDeviceGet - 0x7f2e36505e30
dlsym: cuDeviceGetAttribute - 0x7f2e36528f00
dlsym: cuDeviceGetUuid - 0x7f2e36581b70
dlsym: cuDeviceGetName - 0x7f2e36505fb0
dlsym: cuCtxCreate_v3 - 0x7f2e3657f5b0
dlsym: cuMemGetInfo_v2 - 0x7f2e3652d440
dlsym: cuCtxDestroy - 0x7f2e365814b0
calling cuInit
calling cuDriverGetVersion
raw version 0x32c8
CUDA driver version: 13.0
calling cuDeviceGetCount
device count 1
time=2025-12-05T20:51:05.226+08:00 level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=/usr/lib/libcuda.so.580.105.08
[GPU-c5624af8-73bc-5daa-957b-e85fef506412] CUDA totalMem 11909mb
[GPU-c5624af8-73bc-5daa-957b-e85fef506412] CUDA freeMem 11037mb
[GPU-c5624af8-73bc-5daa-957b-e85fef506412] Compute Capability 8.6
time=2025-12-05T20:51:05.371+08:00 level=DEBUG source=amd_linux.go:423 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2025-12-05T20:51:05.371+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-c5624af8-73bc-5daa-957b-e85fef506412 library=cuda variant=v13 compute=8.6 driver=13.0 name="NVIDIA GeForce RTX 3060" total="11.6 GiB" available="10.8 GiB"
time=2025-12-05T20:51:05.371+08:00 level=INFO source=routes.go:1560 msg="entering low vram mode" "total vram"="11.6 GiB" threshold="20.0 GiB"
[GIN] 2025/12/05 - 20:51:15 | 200 |       41.27µs |       127.0.0.1 | HEAD     "/"
time=2025-12-05T20:51:15.603+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=general.alignment default=32
[GIN] 2025/12/05 - 20:51:15 | 200 |   38.487857ms |       127.0.0.1 | POST     "/api/show"
time=2025-12-05T20:51:15.653+08:00 level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="31.1 GiB" before.free="26.7 GiB" before.free_swap="0 B" now.total="31.1 GiB" now.free="26.7 GiB" now.free_swap="0 B"
initializing /usr/lib/libcuda.so.580.105.08
dlsym: cuInit - 0x7f2e36505cb0
dlsym: cuDriverGetVersion - 0x7f2e36505d70
dlsym: cuDeviceGetCount - 0x7f2e36505ef0
dlsym: cuDeviceGet - 0x7f2e36505e30
dlsym: cuDeviceGetAttribute - 0x7f2e36528f00
dlsym: cuDeviceGetUuid - 0x7f2e36581b70
dlsym: cuDeviceGetName - 0x7f2e36505fb0
dlsym: cuCtxCreate_v3 - 0x7f2e3657f5b0
dlsym: cuMemGetInfo_v2 - 0x7f2e3652d440
dlsym: cuCtxDestroy - 0x7f2e365814b0
calling cuInit
calling cuDriverGetVersion
raw version 0x32c8
CUDA driver version: 13.0
calling cuDeviceGetCount
device count 1
time=2025-12-05T20:51:15.792+08:00 level=DEBUG source=gpu.go:460 msg="updating cuda memory data" gpu=GPU-c5624af8-73bc-5daa-957b-e85fef506412 name="NVIDIA GeForce RTX 3060" overhead="0 B" before.total="11.6 GiB" before.free="10.8 GiB" now.total="11.6 GiB" now.free="10.8 GiB" now.used="872.6 MiB"
releasing cuda driver library
time=2025-12-05T20:51:15.792+08:00 level=DEBUG source=sched.go:188 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2025-12-05T20:51:15.800+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=general.alignment default=32
time=2025-12-05T20:51:15.800+08:00 level=DEBUG source=sched.go:208 msg="loading first model" model=/run/media/wyk/e9995e69-f6f8-4b6f-8fe4-c2b4ff56d221/data/AI/LLMs/blobs/sha256-a3de86cd1c132c822487ededd47a324c50491393e6565cd14bafa40d0b8e686f
time=2025-12-05T20:51:15.832+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=general.alignment default=32
time=2025-12-05T20:51:15.832+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.pooling_type default=0
time=2025-12-05T20:51:15.832+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2025-12-05T20:51:15.832+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-12-05T20:51:15.832+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.rope.scaling.factor default=1
time=2025-12-05T20:51:15.832+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.expert_count default=0
time=2025-12-05T20:51:15.832+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.expert_used_count default=0
time=2025-12-05T20:51:15.832+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.norm_top_k_prob default=true
time=2025-12-05T20:51:15.832+08:00 level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="31.1 GiB" before.free="26.7 GiB" before.free_swap="0 B" now.total="31.1 GiB" now.free="26.6 GiB" now.free_swap="0 B"
initializing /usr/lib/libcuda.so.580.105.08
dlsym: cuInit - 0x7f2e36505cb0
dlsym: cuDriverGetVersion - 0x7f2e36505d70
dlsym: cuDeviceGetCount - 0x7f2e36505ef0
dlsym: cuDeviceGet - 0x7f2e36505e30
dlsym: cuDeviceGetAttribute - 0x7f2e36528f00
dlsym: cuDeviceGetUuid - 0x7f2e36581b70
dlsym: cuDeviceGetName - 0x7f2e36505fb0
dlsym: cuCtxCreate_v3 - 0x7f2e3657f5b0
dlsym: cuMemGetInfo_v2 - 0x7f2e3652d440
dlsym: cuCtxDestroy - 0x7f2e365814b0
calling cuInit
calling cuDriverGetVersion
raw version 0x32c8
CUDA driver version: 13.0
calling cuDeviceGetCount
device count 1
time=2025-12-05T20:51:15.959+08:00 level=DEBUG source=gpu.go:460 msg="updating cuda memory data" gpu=GPU-c5624af8-73bc-5daa-957b-e85fef506412 name="NVIDIA GeForce RTX 3060" overhead="0 B" before.total="11.6 GiB" before.free="10.8 GiB" now.total="11.6 GiB" now.free="10.8 GiB" now.used="872.6 MiB"
releasing cuda driver library
time=2025-12-05T20:51:15.960+08:00 level=INFO source=server.go:399 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /run/media/wyk/e9995e69-f6f8-4b6f-8fe4-c2b4ff56d221/data/AI/LLMs/blobs/sha256-a3de86cd1c132c822487ededd47a324c50491393e6565cd14bafa40d0b8e686f --port 32817"
time=2025-12-05T20:51:15.960+08:00 level=DEBUG source=server.go:400 msg=subprocess OLLAMA_NUM_PARALLEL=1 OLLAMA_MODELS=/run/media/wyk/e9995e69-f6f8-4b6f-8fe4-c2b4ff56d221/data/AI/LLMs/ OLLAMA_DEBUG=1 CUDA_PATH=/opt/cuda PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/opt/cuda/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/usr/lib/rustup/bin OLLAMA_MAX_LOADED_MODELS=3 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama
time=2025-12-05T20:51:15.960+08:00 level=INFO source=server.go:672 msg="loading model" "model layers"=37 requested=-1
time=2025-12-05T20:51:15.960+08:00 level=DEBUG source=gpu.go:410 msg="updating system memory data" before.total="31.1 GiB" before.free="26.6 GiB" before.free_swap="0 B" now.total="31.1 GiB" now.free="26.6 GiB" now.free_swap="0 B"
initializing /usr/lib/libcuda.so.580.105.08
dlsym: cuInit - 0x7f2e36505cb0
dlsym: cuDriverGetVersion - 0x7f2e36505d70
dlsym: cuDeviceGetCount - 0x7f2e36505ef0
dlsym: cuDeviceGet - 0x7f2e36505e30
dlsym: cuDeviceGetAttribute - 0x7f2e36528f00
dlsym: cuDeviceGetUuid - 0x7f2e36581b70
dlsym: cuDeviceGetName - 0x7f2e36505fb0
dlsym: cuCtxCreate_v3 - 0x7f2e3657f5b0
dlsym: cuMemGetInfo_v2 - 0x7f2e3652d440
dlsym: cuCtxDestroy - 0x7f2e365814b0
calling cuInit
calling cuDriverGetVersion
raw version 0x32c8
CUDA driver version: 13.0
calling cuDeviceGetCount
device count 1
time=2025-12-05T20:51:15.968+08:00 level=INFO source=runner.go:1252 msg="starting ollama engine"
time=2025-12-05T20:51:15.968+08:00 level=INFO source=runner.go:1287 msg="Server listening on 127.0.0.1:32817"
time=2025-12-05T20:51:16.083+08:00 level=DEBUG source=gpu.go:460 msg="updating cuda memory data" gpu=GPU-c5624af8-73bc-5daa-957b-e85fef506412 name="NVIDIA GeForce RTX 3060" overhead="0 B" before.total="11.6 GiB" before.free="10.8 GiB" now.total="11.6 GiB" now.free="10.8 GiB" now.used="872.6 MiB"
releasing cuda driver library
time=2025-12-05T20:51:16.083+08:00 level=INFO source=server.go:678 msg="system memory" total="31.1 GiB" free="26.6 GiB" free_swap="0 B"
time=2025-12-05T20:51:16.083+08:00 level=INFO source=server.go:686 msg="gpu memory" id=GPU-c5624af8-73bc-5daa-957b-e85fef506412 available="10.3 GiB" free="10.8 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-12-05T20:51:16.083+08:00 level=INFO source=runner.go:1171 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:37[ID:GPU-c5624af8-73bc-5daa-957b-e85fef506412 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-05T20:51:16.098+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=general.alignment default=32
time=2025-12-05T20:51:16.098+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=general.description default=""
time=2025-12-05T20:51:16.098+08:00 level=INFO source=ggml.go:131 msg="" architecture=qwen3 file_type=Q4_K_M name="Qwen3 8B" description="" num_tensors=399 num_key_values=29
time=2025-12-05T20:51:16.098+08:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
time=2025-12-05T20:51:16.099+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-12-05T20:51:16.100+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.pooling_type default=0
time=2025-12-05T20:51:16.100+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2025-12-05T20:51:16.100+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-12-05T20:51:16.100+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.rope.scaling.factor default=1
time=2025-12-05T20:51:16.100+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.expert_count default=0
time=2025-12-05T20:51:16.100+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.expert_used_count default=0
time=2025-12-05T20:51:16.100+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.norm_top_k_prob default=true
time=2025-12-05T20:51:16.102+08:00 level=DEBUG source=ggml.go:794 msg="compute graph" nodes=1374 splits=1
time=2025-12-05T20:51:16.102+08:00 level=DEBUG source=backend.go:315 msg="model weights" device=CPU size="4.9 GiB"
time=2025-12-05T20:51:16.102+08:00 level=DEBUG source=backend.go:326 msg="kv cache" device=CPU size="576.0 MiB"
time=2025-12-05T20:51:16.102+08:00 level=DEBUG source=backend.go:337 msg="compute graph" device=CPU size="288.0 MiB"
time=2025-12-05T20:51:16.102+08:00 level=DEBUG source=backend.go:342 msg="total memory" size="5.7 GiB"
time=2025-12-05T20:51:16.102+08:00 level=DEBUG source=server.go:717 msg=memory success=true required.InputWeights=350060544U required.CPU.Weights="[127566848U 127566848U 127566848U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 127566848U 127566848U 127566848U 127566848U 127566848U 510521344U]" required.CPU.Cache="[16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 0U]" required.CPU.Graph=301989888U
time=2025-12-05T20:51:16.103+08:00 level=DEBUG source=server.go:969 msg="insufficient VRAM to load any model layers"
time=2025-12-05T20:51:16.103+08:00 level=DEBUG source=server.go:728 msg="new layout created" layers=[]
time=2025-12-05T20:51:16.103+08:00 level=INFO source=runner.go:1171 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-05T20:51:16.123+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=general.alignment default=32
time=2025-12-05T20:51:16.124+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.pooling_type default=0
time=2025-12-05T20:51:16.124+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2025-12-05T20:51:16.124+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-12-05T20:51:16.124+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.rope.scaling.factor default=1
time=2025-12-05T20:51:16.124+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.expert_count default=0
time=2025-12-05T20:51:16.124+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.expert_used_count default=0
time=2025-12-05T20:51:16.124+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.norm_top_k_prob default=true
time=2025-12-05T20:51:16.126+08:00 level=DEBUG source=ggml.go:794 msg="compute graph" nodes=1374 splits=1
time=2025-12-05T20:51:16.126+08:00 level=DEBUG source=backend.go:315 msg="model weights" device=CPU size="4.9 GiB"
time=2025-12-05T20:51:16.126+08:00 level=DEBUG source=backend.go:326 msg="kv cache" device=CPU size="576.0 MiB"
time=2025-12-05T20:51:16.126+08:00 level=DEBUG source=backend.go:337 msg="compute graph" device=CPU size="288.0 MiB"
time=2025-12-05T20:51:16.126+08:00 level=DEBUG source=backend.go:342 msg="total memory" size="5.7 GiB"
time=2025-12-05T20:51:16.126+08:00 level=DEBUG source=server.go:717 msg=memory success=true required.InputWeights=350060544U required.CPU.Weights="[127566848U 127566848U 127566848U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 127566848U 127566848U 127566848U 127566848U 127566848U 510521344U]" required.CPU.Cache="[16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 0U]" required.CPU.Graph=301989888U
time=2025-12-05T20:51:16.126+08:00 level=DEBUG source=server.go:969 msg="insufficient VRAM to load any model layers"
time=2025-12-05T20:51:16.126+08:00 level=DEBUG source=server.go:728 msg="new layout created" layers=[]
time=2025-12-05T20:51:16.126+08:00 level=INFO source=runner.go:1171 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-05T20:51:16.145+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=general.alignment default=32
time=2025-12-05T20:51:16.146+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.pooling_type default=0
time=2025-12-05T20:51:16.146+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2025-12-05T20:51:16.146+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-12-05T20:51:16.146+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.rope.scaling.factor default=1
time=2025-12-05T20:51:16.146+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.expert_count default=0
time=2025-12-05T20:51:16.146+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.expert_used_count default=0
time=2025-12-05T20:51:16.146+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.norm_top_k_prob default=true
time=2025-12-05T20:51:16.236+08:00 level=DEBUG source=ggml.go:794 msg="compute graph" nodes=1374 splits=1
time=2025-12-05T20:51:16.236+08:00 level=DEBUG source=backend.go:315 msg="model weights" device=CPU size="4.9 GiB"
time=2025-12-05T20:51:16.236+08:00 level=DEBUG source=backend.go:326 msg="kv cache" device=CPU size="576.0 MiB"
time=2025-12-05T20:51:16.236+08:00 level=DEBUG source=backend.go:337 msg="compute graph" device=CPU size="288.0 MiB"
time=2025-12-05T20:51:16.236+08:00 level=DEBUG source=backend.go:342 msg="total memory" size="5.7 GiB"
time=2025-12-05T20:51:16.236+08:00 level=DEBUG source=server.go:717 msg=memory success=true required.InputWeights=350060544U required.CPU.Weights="[127566848U 127566848U 127566848U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 127566848U 127566848U 127566848U 127566848U 127566848U 510521344U]" required.CPU.Cache="[16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 0U]" required.CPU.Graph=301989888U
time=2025-12-05T20:51:16.236+08:00 level=DEBUG source=server.go:969 msg="insufficient VRAM to load any model layers"
time=2025-12-05T20:51:16.236+08:00 level=DEBUG source=server.go:728 msg="new layout created" layers=[]
time=2025-12-05T20:51:16.236+08:00 level=INFO source=runner.go:1171 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-05T20:51:16.237+08:00 level=INFO source=ggml.go:487 msg="offloading 0 repeating layers to GPU"
time=2025-12-05T20:51:16.237+08:00 level=INFO source=ggml.go:491 msg="offloading output layer to CPU"
time=2025-12-05T20:51:16.237+08:00 level=INFO source=ggml.go:498 msg="offloaded 0/37 layers to GPU"
time=2025-12-05T20:51:16.237+08:00 level=INFO source=backend.go:315 msg="model weights" device=CPU size="4.9 GiB"
time=2025-12-05T20:51:16.237+08:00 level=INFO source=backend.go:326 msg="kv cache" device=CPU size="576.0 MiB"
time=2025-12-05T20:51:16.237+08:00 level=INFO source=backend.go:337 msg="compute graph" device=CPU size="288.0 MiB"
time=2025-12-05T20:51:16.237+08:00 level=INFO source=backend.go:342 msg="total memory" size="5.7 GiB"
time=2025-12-05T20:51:16.237+08:00 level=INFO source=sched.go:470 msg="loaded runners" count=1
time=2025-12-05T20:51:16.237+08:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
time=2025-12-05T20:51:16.239+08:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
time=2025-12-05T20:51:16.240+08:00 level=DEBUG source=server.go:1295 msg="model load progress 0.00"
time=2025-12-05T20:51:16.492+08:00 level=DEBUG source=server.go:1295 msg="model load progress 0.89"
time=2025-12-05T20:51:16.546+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.pooling_type default=0
time=2025-12-05T20:51:16.744+08:00 level=INFO source=server.go:1289 msg="llama runner started in 0.78 seconds"
time=2025-12-05T20:51:16.744+08:00 level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=registry.ollama.ai/library/qwen3:latest runner.inference=cuda runner.devices=1 runner.size="5.7 GiB" runner.vram="0 B" runner.parallel=1 runner.pid=18794 runner.model=/run/media/wyk/e9995e69-f6f8-4b6f-8fe4-c2b4ff56d221/data/AI/LLMs/blobs/sha256-a3de86cd1c132c822487ededd47a324c50491393e6565cd14bafa40d0b8e686f runner.num_ctx=4096
[GIN] 2025/12/05 - 20:51:16 | 200 |  1.140018071s |       127.0.0.1 | POST     "/api/generate"
time=2025-12-05T20:51:16.745+08:00 level=DEBUG source=sched.go:490 msg="context for request finished"
time=2025-12-05T20:51:16.745+08:00 level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/qwen3:latest runner.inference=cuda runner.devices=1 runner.size="5.7 GiB" runner.vram="0 B" runner.parallel=1 runner.pid=18794 runner.model=/run/media/wyk/e9995e69-f6f8-4b6f-8fe4-c2b4ff56d221/data/AI/LLMs/blobs/sha256-a3de86cd1c132c822487ededd47a324c50491393e6565cd14bafa40d0b8e686f runner.num_ctx=4096 duration=5m0s
time=2025-12-05T20:51:16.745+08:00 level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=registry.ollama.ai/library/qwen3:latest runner.inference=cuda runner.devices=1 runner.size="5.7 GiB" runner.vram="0 B" runner.parallel=1 runner.pid=18794 runner.model=/run/media/wyk/e9995e69-f6f8-4b6f-8fe4-c2b4ff56d221/data/AI/LLMs/blobs/sha256-a3de86cd1c132c822487ededd47a324c50491393e6565cd14bafa40d0b8e686f runner.num_ctx=4096 refCount=0

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.12.0

time=2025-12-05T20:51:16.100+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.pooling_type default=0 time=2025-12-05T20:51:16.100+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false time=2025-12-05T20:51:16.100+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}" time=2025-12-05T20:51:16.100+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.rope.scaling.factor default=1 time=2025-12-05T20:51:16.100+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.expert_count default=0 time=2025-12-05T20:51:16.100+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.expert_used_count default=0 time=2025-12-05T20:51:16.100+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.norm_top_k_prob default=true time=2025-12-05T20:51:16.102+08:00 level=DEBUG source=ggml.go:794 msg="compute graph" nodes=1374 splits=1 time=2025-12-05T20:51:16.102+08:00 level=DEBUG source=backend.go:315 msg="model weights" device=CPU size="4.9 GiB" time=2025-12-05T20:51:16.102+08:00 level=DEBUG source=backend.go:326 msg="kv cache" device=CPU size="576.0 MiB" time=2025-12-05T20:51:16.102+08:00 level=DEBUG source=backend.go:337 msg="compute graph" device=CPU size="288.0 MiB" time=2025-12-05T20:51:16.102+08:00 level=DEBUG source=backend.go:342 msg="total memory" size="5.7 GiB" time=2025-12-05T20:51:16.102+08:00 level=DEBUG source=server.go:717 msg=memory success=true required.InputWeights=350060544U required.CPU.Weights="[127566848U 127566848U 127566848U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 
127566848U 127566848U 127566848U 127566848U 127566848U 127566848U 510521344U]" required.CPU.Cache="[16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 0U]" required.CPU.Graph=301989888U time=2025-12-05T20:51:16.103+08:00 level=DEBUG source=server.go:969 msg="insufficient VRAM to load any model layers" time=2025-12-05T20:51:16.103+08:00 level=DEBUG source=server.go:728 msg="new layout created" layers=[] time=2025-12-05T20:51:16.103+08:00 level=INFO source=runner.go:1171 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-05T20:51:16.123+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=general.alignment default=32 time=2025-12-05T20:51:16.124+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.pooling_type default=0 time=2025-12-05T20:51:16.124+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false time=2025-12-05T20:51:16.124+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}" time=2025-12-05T20:51:16.124+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.rope.scaling.factor default=1 time=2025-12-05T20:51:16.124+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.expert_count default=0 time=2025-12-05T20:51:16.124+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.expert_used_count default=0 time=2025-12-05T20:51:16.124+08:00 level=DEBUG 
source=ggml.go:275 msg="key with type not found" key=qwen3.norm_top_k_prob default=true time=2025-12-05T20:51:16.126+08:00 level=DEBUG source=ggml.go:794 msg="compute graph" nodes=1374 splits=1 time=2025-12-05T20:51:16.126+08:00 level=DEBUG source=backend.go:315 msg="model weights" device=CPU size="4.9 GiB" time=2025-12-05T20:51:16.126+08:00 level=DEBUG source=backend.go:326 msg="kv cache" device=CPU size="576.0 MiB" time=2025-12-05T20:51:16.126+08:00 level=DEBUG source=backend.go:337 msg="compute graph" device=CPU size="288.0 MiB" time=2025-12-05T20:51:16.126+08:00 level=DEBUG source=backend.go:342 msg="total memory" size="5.7 GiB" time=2025-12-05T20:51:16.126+08:00 level=DEBUG source=server.go:717 msg=memory success=true required.InputWeights=350060544U required.CPU.Weights="[127566848U 127566848U 127566848U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 127566848U 127566848U 127566848U 127566848U 127566848U 510521344U]" required.CPU.Cache="[16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 0U]" required.CPU.Graph=301989888U time=2025-12-05T20:51:16.126+08:00 level=DEBUG source=server.go:969 msg="insufficient VRAM to load any model layers" time=2025-12-05T20:51:16.126+08:00 level=DEBUG source=server.go:728 msg="new layout created" layers=[] time=2025-12-05T20:51:16.126+08:00 level=INFO source=runner.go:1171 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false 
KvSize:4096 KvCacheType: NumThreads:8 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-05T20:51:16.145+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=general.alignment default=32 time=2025-12-05T20:51:16.146+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.pooling_type default=0 time=2025-12-05T20:51:16.146+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false time=2025-12-05T20:51:16.146+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}" time=2025-12-05T20:51:16.146+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.rope.scaling.factor default=1 time=2025-12-05T20:51:16.146+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.expert_count default=0 time=2025-12-05T20:51:16.146+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.expert_used_count default=0 time=2025-12-05T20:51:16.146+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.norm_top_k_prob default=true time=2025-12-05T20:51:16.236+08:00 level=DEBUG source=ggml.go:794 msg="compute graph" nodes=1374 splits=1 time=2025-12-05T20:51:16.236+08:00 level=DEBUG source=backend.go:315 msg="model weights" device=CPU size="4.9 GiB" time=2025-12-05T20:51:16.236+08:00 level=DEBUG source=backend.go:326 msg="kv cache" device=CPU size="576.0 MiB" time=2025-12-05T20:51:16.236+08:00 level=DEBUG source=backend.go:337 msg="compute graph" device=CPU size="288.0 MiB" time=2025-12-05T20:51:16.236+08:00 level=DEBUG source=backend.go:342 msg="total memory" size="5.7 GiB" time=2025-12-05T20:51:16.236+08:00 level=DEBUG source=server.go:717 msg=memory success=true required.InputWeights=350060544U required.CPU.Weights="[127566848U 127566848U 127566848U 127566848U 114590720U 114590720U 127566848U 114590720U
114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 114590720U 114590720U 127566848U 127566848U 127566848U 127566848U 127566848U 127566848U 510521344U]" required.CPU.Cache="[16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 16777216U 0U]" required.CPU.Graph=301989888U time=2025-12-05T20:51:16.236+08:00 level=DEBUG source=server.go:969 msg="insufficient VRAM to load any model layers" time=2025-12-05T20:51:16.236+08:00 level=DEBUG source=server.go:728 msg="new layout created" layers=[] time=2025-12-05T20:51:16.236+08:00 level=INFO source=runner.go:1171 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-05T20:51:16.237+08:00 level=INFO source=ggml.go:487 msg="offloading 0 repeating layers to GPU" time=2025-12-05T20:51:16.237+08:00 level=INFO source=ggml.go:491 msg="offloading output layer to CPU" time=2025-12-05T20:51:16.237+08:00 level=INFO source=ggml.go:498 msg="offloaded 0/37 layers to GPU" time=2025-12-05T20:51:16.237+08:00 level=INFO source=backend.go:315 msg="model weights" device=CPU size="4.9 GiB" time=2025-12-05T20:51:16.237+08:00 level=INFO source=backend.go:326 msg="kv cache" device=CPU size="576.0 MiB" time=2025-12-05T20:51:16.237+08:00 level=INFO source=backend.go:337 msg="compute graph" device=CPU size="288.0 MiB" time=2025-12-05T20:51:16.237+08:00 level=INFO source=backend.go:342 msg="total memory" size="5.7 GiB"
time=2025-12-05T20:51:16.237+08:00 level=INFO source=sched.go:470 msg="loaded runners" count=1 time=2025-12-05T20:51:16.237+08:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding" time=2025-12-05T20:51:16.239+08:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model" time=2025-12-05T20:51:16.240+08:00 level=DEBUG source=server.go:1295 msg="model load progress 0.00" time=2025-12-05T20:51:16.492+08:00 level=DEBUG source=server.go:1295 msg="model load progress 0.89" time=2025-12-05T20:51:16.546+08:00 level=DEBUG source=ggml.go:275 msg="key with type not found" key=qwen3.pooling_type default=0 time=2025-12-05T20:51:16.744+08:00 level=INFO source=server.go:1289 msg="llama runner started in 0.78 seconds" time=2025-12-05T20:51:16.744+08:00 level=DEBUG source=sched.go:482 msg="finished setting up" runner.name=registry.ollama.ai/library/qwen3:latest runner.inference=cuda runner.devices=1 runner.size="5.7 GiB" runner.vram="0 B" runner.parallel=1 runner.pid=18794 runner.model=/run/media/wyk/e9995e69-f6f8-4b6f-8fe4-c2b4ff56d221/data/AI/LLMs/blobs/sha256-a3de86cd1c132c822487ededd47a324c50491393e6565cd14bafa40d0b8e686f runner.num_ctx=4096 [GIN] 2025/12/05 - 20:51:16 | 200 | 1.140018071s | 127.0.0.1 | POST "/api/generate" time=2025-12-05T20:51:16.745+08:00 level=DEBUG source=sched.go:490 msg="context for request finished" time=2025-12-05T20:51:16.745+08:00 level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/qwen3:latest runner.inference=cuda runner.devices=1 runner.size="5.7 GiB" runner.vram="0 B" runner.parallel=1 runner.pid=18794 runner.model=/run/media/wyk/e9995e69-f6f8-4b6f-8fe4-c2b4ff56d221/data/AI/LLMs/blobs/sha256-a3de86cd1c132c822487ededd47a324c50491393e6565cd14bafa40d0b8e686f runner.num_ctx=4096 duration=5m0s time=2025-12-05T20:51:16.745+08:00 level=DEBUG source=sched.go:304 msg="after processing 
request finished event" runner.name=registry.ollama.ai/library/qwen3:latest runner.inference=cuda runner.devices=1 runner.size="5.7 GiB" runner.vram="0 B" runner.parallel=1 runner.pid=18794 runner.model=/run/media/wyk/e9995e69-f6f8-4b6f-8fe4-c2b4ff56d221/data/AI/LLMs/blobs/sha256-a3de86cd1c132c822487ededd47a324c50491393e6565cd14bafa40d0b8e686f runner.num_ctx=4096 refCount=0
```

### OS

Linux

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.12.0
GiteaMirror added the bug label 2026-04-22 18:16:00 -05:00
@rick-github commented on GitHub (Dec 5, 2025):

```
time=2025-12-05T20:51:16.098+08:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
time=2025-12-05T20:51:16.099+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
```

No accelerated backends were found. What's the output of `ls -lR /usr/local/lib/ollama`? How did you install ollama?
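For anyone triaging a similar log: the `msg=system` line is where ggml lists the backends it actually registered. A minimal sketch of the check, run here against the two log lines quoted above saved to a scratch file (the `CUDA` substring is an assumption about how a GPU-enabled build reports itself, based on the CPU entries visible in this log):

```shell
# Save the two log lines quoted above to a scratch file for the demo.
cat > /tmp/ollama-excerpt.log <<'EOF'
time=2025-12-05T20:51:16.098+08:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
time=2025-12-05T20:51:16.099+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
EOF

# Assumption: a GPU-enabled build adds CUDA entries to the msg=system
# line. Seeing only CPU.* entries means inference will run on the CPU,
# so the backend libraries in the reported path are worth inspecting.
if grep -q 'msg=system.*CUDA' /tmp/ollama-excerpt.log; then
  echo "GPU backend loaded"
else
  echo "CPU only - check the backend library path"
fi
```

Against this excerpt the check prints `CPU only - check the backend library path`, which matches the symptom reported in the issue.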

@wuyukai0403 commented on GitHub (Dec 5, 2025):

```
/usr/local/lib/ollama:
total 26476
-rwx------ 1 root root 27110912 Sep 20 09:49 libggml-hip.so
```

But `/usr/lib/ollama` has:

```
/usr/lib/ollama:
total 1601452
-rwxr-xr-x 1 root root     723200 Nov 23 14:42 libggml-base.so
-rwxr-xr-x 1 root root     842176 Nov 23 14:42 libggml-cpu-alderlake.so
-rwxr-xr-x 1 root root     842176 Nov 23 14:42 libggml-cpu-haswell.so
-rwxr-xr-x 1 root root    1038784 Nov 23 14:42 libggml-cpu-icelake.so
-rwxr-xr-x 1 root root     776648 Nov 23 14:42 libggml-cpu-sandybridge.so
-rwxr-xr-x 1 root root    1042880 Nov 23 14:42 libggml-cpu-skylakex.so
-rwxr-xr-x 1 root root     629184 Nov 23 14:42 libggml-cpu-sse42.so
-rwxr-xr-x 1 root root     612800 Nov 23 14:42 libggml-cpu-x64.so
-rwxr-xr-x 1 root root 1633359808 Nov 23 14:42 libggml-cuda.so
```

I installed ollama by installing the pacman packages `ollama` and `ollama-cuda`.

@wuyukai0403 commented on GitHub (Dec 5, 2025):

Wait. It seems to be caused by a manual installation of ollama in `/usr/local/bin/ollama`. The one installed by pacman is `/usr/bin/ollama`.
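That diagnosis is consistent with PATH ordering: `/usr/local/bin` usually precedes `/usr/bin`, so a stale manual install shadows the packaged one. A minimal sketch of the shadowing, using stub scripts in hypothetical `/tmp/demo` paths rather than a real install:

```shell
# Simulate the two installs with stub scripts (demo paths, not real ones).
mkdir -p /tmp/demo/usr_local_bin /tmp/demo/usr_bin
printf '#!/bin/sh\necho manual-install\n' > /tmp/demo/usr_local_bin/ollama
printf '#!/bin/sh\necho pacman-install\n'  > /tmp/demo/usr_bin/ollama
chmod +x /tmp/demo/usr_local_bin/ollama /tmp/demo/usr_bin/ollama

# The "usr_local_bin" dir comes first on PATH, mimicking a typical
# system PATH, so the stale manual install wins the lookup.
export PATH=/tmp/demo/usr_local_bin:/tmp/demo/usr_bin:$PATH
command -v ollama   # -> /tmp/demo/usr_local_bin/ollama
ollama              # prints: manual-install

# Removing the stale copy lets the packaged binary be found instead.
rm /tmp/demo/usr_local_bin/ollama
hash -r             # drop the shell's cached command lookup
ollama              # prints: pacman-install
```

On a real system the equivalent check is `which -a ollama`; if it lists both `/usr/local/bin/ollama` and `/usr/bin/ollama`, removing the manual copy (and restarting the server) should let the pacman build, which ships `libggml-cuda.so` in `/usr/lib/ollama`, take over.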

Reference: github-starred/ollama#34572