[GH-ISSUE #9797] Unable to utilize the GPU for computation. #52919

Closed
opened 2026-04-29 01:24:07 -05:00 by GiteaMirror · 6 comments

Originally created by @freers623 on GitHub (Mar 16, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9797

What is the issue?

After upgrading to the latest version, I could only rely on the CPU to run the AI language model. I tried installing versions 0.6.0 and 0.5.11, but the situation remained the same: only the CPU was utilized, and the GPU was never used. It wasn't until I reverted to version 0.3.12 that the GPU was used for computation again. My GPU is an RTX 4090.
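
A quick way to confirm which processor a loaded model is actually using, independent of Task Manager, is `ollama ps`: the PROCESSOR column reports the CPU/GPU split. A minimal check (the output shown is illustrative, not taken from this report):

```shell
# Ask the running server where the loaded model lives. "100% CPU" matches
# the behavior described above; "100% GPU" is the healthy case.
ollama ps
# NAME          ID              SIZE     PROCESSOR    UNTIL
# qwq:latest    c62ccde5630c    21 GB    100% CPU     4 minutes from now
```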

Relevant log output

2025/03/17 00:41:34 routes.go:1230: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\\Ollama model OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-03-17T00:41:34.216+08:00 level=INFO source=images.go:432 msg="total blobs: 41"
time=2025-03-17T00:41:34.217+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-17T00:41:34.218+08:00 level=INFO source=routes.go:1297 msg="Listening on 127.0.0.1:11434 (version 0.6.1)"
time=2025-03-17T00:41:34.218+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-17T00:41:34.218+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-03-17T00:41:34.218+08:00 level=INFO source=gpu_windows.go:183 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-03-17T00:41:34.218+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=24 efficiency=16 threads=32
time=2025-03-17T00:41:34.346+08:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-996bef33-c125-09bc-a867-8f319f7f4f9f library=cuda compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090" overhead="633.2 MiB"
time=2025-03-17T00:41:34.347+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-996bef33-c125-09bc-a867-8f319f7f4f9f library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090" total="24.0 GiB" available="22.5 GiB"
time=2025-03-17T00:41:47.610+08:00 level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model="D:\\Ollama model\\blobs\\sha256-afa0ea2ef463c87a1eebb9af070e76a353107493b5d9a62e5e66f65a65409541" gpu=GPU-996bef33-c125-09bc-a867-8f319f7f4f9f parallel=1 available=24111644672 required="19.4 GiB"
time=2025-03-17T00:41:47.627+08:00 level=INFO source=server.go:105 msg="system memory" total="31.8 GiB" free="26.7 GiB" free_swap="54.4 GiB"
time=2025-03-17T00:41:47.628+08:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=63 layers.offload=63 layers.split="" memory.available="[22.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="19.4 GiB" memory.required.partial="19.4 GiB" memory.required.kv="992.0 MiB" memory.required.allocations="[19.4 GiB]" memory.weights.total="14.3 GiB" memory.weights.repeating="14.3 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="522.5 MiB" memory.graph.partial="1.6 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-03-17T00:41:47.683+08:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-17T00:41:47.685+08:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-03-17T00:41:47.686+08:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-17T00:41:47.689+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-03-17T00:41:47.689+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-03-17T00:41:47.689+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-03-17T00:41:47.689+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-03-17T00:41:47.689+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-03-17T00:41:47.695+08:00 level=INFO source=server.go:405 msg="starting llama server" cmd="C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model D:\\Ollama model\\blobs\\sha256-afa0ea2ef463c87a1eebb9af070e76a353107493b5d9a62e5e66f65a65409541 --ctx-size 2048 --batch-size 512 --n-gpu-layers 63 --threads 8 --no-mmap --mlock --parallel 1 --port 62620"
time=2025-03-17T00:41:47.698+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-17T00:41:47.698+08:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
time=2025-03-17T00:41:47.698+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-03-17T00:41:47.712+08:00 level=INFO source=runner.go:823 msg="starting ollama engine"
time=2025-03-17T00:41:47.716+08:00 level=INFO source=runner.go:883 msg="Server listening on 127.0.0.1:62620"
time=2025-03-17T00:41:47.769+08:00 level=WARN source=ggml.go:149 msg="key not found" key=general.name default=""
time=2025-03-17T00:41:47.769+08:00 level=WARN source=ggml.go:149 msg="key not found" key=general.description default=""
time=2025-03-17T00:41:47.769+08:00 level=INFO source=ggml.go:67 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1247 num_key_values=36
ggml_backend_load_best: failed to load C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_backend_load_best: failed to load C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_backend_load_best: failed to load C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
ggml_backend_load_best: failed to load C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-sandybridge.dll
ggml_backend_load_best: failed to load C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-skylakex.dll
time=2025-03-17T00:41:47.795+08:00 level=INFO source=ggml.go:109 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(clang)
time=2025-03-17T00:41:47.799+08:00 level=INFO source=ggml.go:289 msg="model weights" buffer=CPU size="17.3 GiB"
time=2025-03-17T00:41:48.005+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model"
time=2025-03-17T00:42:05.859+08:00 level=INFO source=ggml.go:356 msg="compute graph" backend=CPU buffer_type=CPU
time=2025-03-17T00:42:05.873+08:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-17T00:42:05.957+08:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-03-17T00:42:05.963+08:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-17T00:42:05.975+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-03-17T00:42:05.976+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-03-17T00:42:05.976+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-03-17T00:42:05.977+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-03-17T00:42:05.977+08:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-03-17T00:42:06.077+08:00 level=INFO source=server.go:624 msg="llama runner started in 18.38 seconds"
[GIN] 2025/03/17 - 00:44:14 | 200 |         2m27s |       127.0.0.1 | POST     "/api/chat"
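
The telltale lines above are the `ggml_backend_load_best` failures for every optimized `ggml-cpu-*.dll` variant and `"model weights" buffer=CPU size="17.3 GiB"`: no CUDA backend appears at all, so inference fell back to the built-in CPU path and the whole model landed in system RAM, even though the scheduler had already decided it would fit on the 4090 (contrast the 0.3.12 log in the first comment below, which reports `offloaded 65/65 layers to GPU`). One way to check whether the backend libraries on disk are intact, from a PowerShell prompt using the install path in the log (a diagnostic sketch, not an official procedure):

```shell
# List the ggml backend DLLs shipped with this install; missing or
# zero-byte files point at a broken or mixed installation.
dir "C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\ggml-*.dll"
```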

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.6.1

GiteaMirror added the bug label 2026-04-29 01:24:07 -05:00

@freers623 commented on GitHub (Mar 16, 2025):

This is the log file after reverting to version 0.3.12.
2025/03/17 01:32:23 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\Ollama model OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2025-03-17T01:32:23.098+08:00 level=INFO source=images.go:753 msg="total blobs: 41"
time=2025-03-17T01:32:23.111+08:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2025-03-17T01:32:23.112+08:00 level=INFO source=routes.go:1200 msg="Listening on 127.0.0.1:11434 (version 0.3.12)"
time=2025-03-17T01:32:23.112+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v11 cuda_v12 rocm_v6.1 cpu cpu_avx cpu_avx2]"
time=2025-03-17T01:32:23.112+08:00 level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
time=2025-03-17T01:32:23.259+08:00 level=INFO source=gpu.go:292 msg="detected OS VRAM overhead" id=GPU-996bef33-c125-09bc-a867-8f319f7f4f9f library=cuda compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090" overhead="476.0 MiB"
time=2025-03-17T01:32:23.260+08:00 level=INFO source=types.go:107 msg="inference compute" id=GPU-996bef33-c125-09bc-a867-8f319f7f4f9f library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090" total="24.0 GiB" available="22.5 GiB"
time=2025-03-17T01:32:28.495+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model="D:\Ollama model\blobs\sha256-c62ccde5630c20c8a9cf601861d31977d07450cad6dfdf1c661aab307107bddb" gpu=GPU-996bef33-c125-09bc-a867-8f319f7f4f9f parallel=4 available=24114331648 required="21.5 GiB"
time=2025-03-17T01:32:28.495+08:00 level=INFO source=server.go:103 msg="system memory" total="31.8 GiB" free="26.0 GiB" free_swap="52.5 GiB"
time=2025-03-17T01:32:28.496+08:00 level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=-1 layers.model=65 layers.offload=65 layers.split="" memory.available="[22.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="21.5 GiB" memory.required.partial="21.5 GiB" memory.required.kv="2.0 GiB" memory.required.allocations="[21.5 GiB]" memory.weights.total="19.5 GiB" memory.weights.repeating="18.9 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="676.0 MiB" memory.graph.partial="916.1 MiB"
time=2025-03-17T01:32:28.503+08:00 level=INFO source=server.go:388 msg="starting llama server" cmd="C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\runners\cuda_v12\ollama_llama_server.exe --model D:\Ollama model\blobs\sha256-c62ccde5630c20c8a9cf601861d31977d07450cad6dfdf1c661aab307107bddb --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 65 --no-mmap --mlock --parallel 4 --port 50105"
time=2025-03-17T01:32:28.504+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-03-17T01:32:28.504+08:00 level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
time=2025-03-17T01:32:28.504+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3670 commit="7142ce2b" tid="20296" timestamp=1742146348
INFO [wmain] system info | n_threads=24 n_threads_batch=24 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="20296" timestamp=1742146348 total_threads=32
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="31" port="50105" tid="20296" timestamp=1742146348
llama_model_loader: loaded meta data with 33 key-value pairs and 771 tensors from D:\Ollama model\blobs\sha256-c62ccde5630c20c8a9cf601861d31977d07450cad6dfdf1c661aab307107bddb (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = QwQ 32B
llama_model_loader: - kv 3: general.basename str = QwQ
llama_model_loader: - kv 4: general.size_label str = 32B
llama_model_loader: - kv 5: general.license str = apache-2.0
llama_model_loader: - kv 6: general.license.link str = https://huggingface.co/Qwen/QWQ-32B/b...
llama_model_loader: - kv 7: general.base_model.count u32 = 1
llama_model_loader: - kv 8: general.base_model.0.name str = Qwen2.5 32B
llama_model_loader: - kv 9: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 10: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-32B
llama_model_loader: - kv 11: general.tags arr[str,2] = ["chat", "text-generation"]
llama_model_loader: - kv 12: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 13: qwen2.block_count u32 = 64
llama_model_loader: - kv 14: qwen2.context_length u32 = 131072
llama_model_loader: - kv 15: qwen2.embedding_length u32 = 5120
llama_model_loader: - kv 16: qwen2.feed_forward_length u32 = 27648
llama_model_loader: - kv 17: qwen2.attention.head_count u32 = 40
llama_model_loader: - kv 18: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 19: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 20: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 22: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,152064] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 27: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 28: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 29: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 30: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 31: general.quantization_version u32 = 2
llama_model_loader: - kv 32: general.file_type u32 = 15
llama_model_loader: - type f32: 321 tensors
llama_model_loader: - type q4_K: 385 tensors
llama_model_loader: - type q6_K: 65 tensors
llm_load_vocab: special tokens cache size = 26
llm_load_vocab: token to piece cache size = 0.9311 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 5120
llm_load_print_meta: n_layer = 64
llm_load_print_meta: n_head = 40
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 5
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 27648
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 32.76 B
llm_load_print_meta: model size = 18.48 GiB (4.85 BPW)
llm_load_print_meta: general.name = QwQ 32B
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
time=2025-03-17T01:32:28.758+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
llm_load_tensors: ggml ctx size = 0.68 MiB
llm_load_tensors: offloading 64 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 65/65 layers to GPU
llm_load_tensors: CUDA_Host buffer size = 417.66 MiB
llm_load_tensors: CUDA0 buffer size = 18508.35 MiB
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 2048.00 MiB
llama_new_context_with_model: KV self size = 2048.00 MiB, K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.40 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 696.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 26.01 MiB
llama_new_context_with_model: graph nodes = 2246
llama_new_context_with_model: graph splits = 2
INFO [wmain] model loaded | tid="20296" timestamp=1742146352
time=2025-03-17T01:32:32.315+08:00 level=INFO source=server.go:626 msg="llama runner started in 3.81 seconds"


@hkthomas commented on GitHub (Mar 17, 2025):

I have the same problem. After running version 0.6.1, the GPU was no longer used, even after I downgraded to version 0.6.0.


@hkthomas commented on GitHub (Mar 17, 2025):

I tried downgrading to 0.3.12 and got this error.

time=2025-03-17T11:18:40.225+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-03-17T11:18:40.225+08:00 level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
/tmp/ollama2686851147/runners/cuda_v12/ollama_llama_server: error while loading shared libraries: /usr/lib/ollama/libcudart.so.12: file too short
time=2025-03-17T11:18:40.225+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
time=2025-03-17T11:18:40.476+08:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: exit status 127"
[GIN] 2025/03/17 - 11:18:40 | 500 | 679.248964ms | * | POST "/v1/chat/completions"
time=2025-03-17T11:18:45.739+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.262963952 model=/home/Model_Cache/models/blobs/sha256-6150cb382311b69f09cc0f9a1b69fc029cbd742b66bb8ec531aa5ecf5c613e93
time=2025-03-17T11:18:46.051+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.574943457 model=/home/Model_Cache/models/blobs/sha256-6150cb382311b69f09cc0f9a1b69fc029cbd742b66bb8ec531aa5ecf5c613e93
time=2025-03-17T11:18:46.363+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.886874434 model=/home/Model_Cache/models/blobs/sha256-6150cb382311b69f09cc0f9a1b69fc029cbd742b66bb8ec531aa5ecf5c613e93
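
"file too short" from the dynamic loader means the library file on disk is truncated, which points at an interrupted or mixed installation rather than a driver problem. A quick diagnostic sketch (paths taken from the log above):

```shell
# A truncated CUDA runtime library shows up as unusually small or empty:
ls -l /usr/lib/ollama/libcudart.so.12*

# More than one hit here usually means install.sh and a manual install
# left duplicate copies behind (see the next comment for the likely cause):
which -a ollama
```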


@Henning742 commented on GitHub (Mar 17, 2025):

Hi, all. I think I have a fix.

I had a similar issue. When I tried to follow @freers623's lead and downgraded to 0.3.12 (which did work for me), I noticed that the installed binary, `/usr/bin/ollama`, is not the same file as the one on my PATH, `/usr/local/bin/ollama`. Running `/usr/bin/ollama` instead let the model run on the GPU again.

I suspect people hitting this installed ollama with the `install.sh` from the official site but upgraded it with the manual method, which can leave the two installs in different locations, `/usr/local/` vs `/usr/`.

The installation script first checks whether `/usr/local/bin` is in `$PATH` and, if so, selects it as the installation directory, whereas the manual installation guide has the user install ollama to `/usr/bin`.

For Windows this is probably the same root cause; see https://github.com/ollama/ollama/issues/9266.
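
A sketch of how to verify and untangle the dual install described above. The paths come from this thread and the install URL is the official one, but treat the cleanup step as an example under the stated assumption, not a universal fix:

```shell
# Show every ollama on PATH; two entries confirm a mixed install:
which -a ollama

# Compare versions at each location to identify the stale copy:
/usr/bin/ollama --version
/usr/local/bin/ollama --version

# Example cleanup, assuming /usr/local holds the stale install.sh copy
# and its bundled libraries:
sudo rm -rf /usr/local/bin/ollama /usr/local/lib/ollama

# Then reinstall once, via a single method, e.g. the official script:
curl -fsSL https://ollama.com/install.sh | sh
```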


@hkthomas commented on GitHub (Mar 17, 2025):

> Hi, all. I think I have a fix.
>
> I had a similar issue. When I tried to follow @freers623's lead and downgraded to 0.3.12 (which did work for me), I noticed that the installed binary, `/usr/bin/ollama`, is not the same file as the one on my PATH, `/usr/local/bin/ollama`. Running `/usr/bin/ollama` instead let the model run on the GPU again.
>
> I guess people having this problem installed ollama with the `install.sh` from the official site but upgraded ollama with only the manual method.

Thanks.


@freers623 commented on GitHub (Mar 19, 2025):

> Hi, all. I think I have a fix.
>
> I had a similar issue. When I tried to follow @freers623's lead and downgraded to 0.3.12 (which did work for me), I noticed that the installed binary, `/usr/bin/ollama`, is not the same file as the one on my PATH, `/usr/local/bin/ollama`. Running `/usr/bin/ollama` instead let the model run on the GPU again.
>
> I suspect people hitting this installed ollama with the `install.sh` from the official site but upgraded it with the manual method, which can leave the two installs in different locations, `/usr/local/` vs `/usr/`.
>
> The installation script first checks whether `/usr/local/bin` is in `$PATH` and, if so, selects it as the installation directory, whereas the manual installation guide has the user install ollama to `/usr/bin`.
>
> For Windows this is probably the same root cause; see #9266.

Unfortunately, this solution did not work on my Windows 10 system.
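
For completeness, a Windows (PowerShell) equivalent of the duplicate-install check is sketched below; any extra hits would mirror the /usr vs /usr/local situation, per #9266:

```shell
# List every ollama.exe on PATH; more than one suggests leftover installs:
where.exe ollama

# The per-user install directory seen in this thread's logs:
dir "$env:LOCALAPPDATA\Programs\Ollama"
```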

Reference: github-starred/ollama#52919