[GH-ISSUE #11882] The model started by ollama does not use GPU for inference, how to solve it #7886

Closed
opened 2026-04-12 20:02:25 -05:00 by GiteaMirror · 9 comments

Originally created by @huanjiSCPing on GitHub (Aug 13, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11882

What is the issue?

I deployed ollama on an Ubuntu system (CUDA version 12.4, CUDA toolkit version 12.4).
Although ollama ps showed the model as 100% running on GPU, no shared memory was used and CPU usage was nearly 100%.
I have tried many ways to solve this, such as setting environment variables,
and I checked that the GPU itself works: I can deploy a reranking model on the GPU through xinference.
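
As a general sanity check (not part of the original report, just standard tooling), watching the GPU while a prompt is generating shows where inference actually runs:

ollama ps                 # how much of the model ollama believes is on GPU
watch -n 1 nvidia-smi     # live GPU utilization; the ollama process should appear here

If nvidia-smi shows ~0% utilization and no ollama process while tokens are streaming, inference has fallen back to CPU regardless of what ollama ps reports.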

Relevant log output


OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.11.4

GiteaMirror added the bug label 2026-04-12 20:02:25 -05:00

@rick-github commented on GitHub (Aug 13, 2025):

Server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will help in debugging.
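
For reference, the linked troubleshooting guide shows where to find these logs. On a systemd-based Linux install:

journalctl -e -u ollama

On Windows, the logs live under %LOCALAPPDATA%\Ollama (server.log), e.g.:

explorer %LOCALAPPDATA%\Ollama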


@anthturner commented on GitHub (Aug 13, 2025):

I'm having the same issue (same version of ollama), but with an AMD processor and an NVIDIA RTX 4090. Other tools, ComfyUI for example, properly use the GPU for inference, but ollama does not.

server.log:

time=2025-08-13T13:26:14.642-04:00 level=INFO source=routes.go:1304 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:8192 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\<<REDACTED>>\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-08-13T13:26:14.667-04:00 level=INFO source=images.go:477 msg="total blobs: 64"
time=2025-08-13T13:26:14.670-04:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2025-08-13T13:26:14.675-04:00 level=INFO source=routes.go:1357 msg="Listening on [::]:11434 (version 0.11.4)"
time=2025-08-13T13:26:14.676-04:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-08-13T13:26:14.676-04:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-08-13T13:26:14.677-04:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=8 efficiency=0 threads=16
time=2025-08-13T13:26:15.959-04:00 level=INFO source=amd_windows.go:127 msg="unsupported Radeon iGPU detected skipping" id=0 total="24.0 GiB"
time=2025-08-13T13:26:15.964-04:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-6a32d7c3-bd59-d71f-cbbf-71859d54cfa6 library=cuda variant=v12 compute=8.9 driver=13.0 name="NVIDIA GeForce RTX 4090" total="24.0 GiB" available="22.5 GiB"

I have both CUDA 12.9.1 and CUDA 13 installed.

System Info:

  • Windows 11 Pro Build 26100
  • AMD Ryzen 7 7800X3D @ 4.2GHz
  • 64GB RAM
  • ASUS ROG Strix RTX 4090, 24GB
  • Running NVidia Game-Ready Driver 580.97 (8/12/25)

This only started in the last few days; prior to that I had no issues.


@rick-github commented on GitHub (Aug 13, 2025):

Insufficient log, post the whole thing.


@anthturner commented on GitHub (Aug 13, 2025):

Just as an FYI, it took FOURTEEN minutes for anything beyond what I had posted to appear (as shown in the timestamps here). At the time of posting, that was the whole log file, beginning to end, nothing omitted beyond the redaction of my username. Since then, here is the "current" version of server.log:

time=2025-08-13T13:26:14.642-04:00 level=INFO source=routes.go:1304 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:8192 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\<<REDACTED>>\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-08-13T13:26:14.667-04:00 level=INFO source=images.go:477 msg="total blobs: 64"
time=2025-08-13T13:26:14.670-04:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2025-08-13T13:26:14.675-04:00 level=INFO source=routes.go:1357 msg="Listening on [::]:11434 (version 0.11.4)"
time=2025-08-13T13:26:14.676-04:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-08-13T13:26:14.676-04:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-08-13T13:26:14.677-04:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=8 efficiency=0 threads=16
time=2025-08-13T13:26:15.959-04:00 level=INFO source=amd_windows.go:127 msg="unsupported Radeon iGPU detected skipping" id=0 total="24.0 GiB"
time=2025-08-13T13:26:15.964-04:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-6a32d7c3-bd59-d71f-cbbf-71859d54cfa6 library=cuda variant=v12 compute=8.9 driver=13.0 name="NVIDIA GeForce RTX 4090" total="24.0 GiB" available="22.5 GiB"
[GIN] 2025/08/13 - 13:40:20 | 200 |       1.113ms |       127.0.0.1 | GET      "/"
[GIN] 2025/08/13 - 13:40:21 | 200 |     20.8813ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/08/13 - 13:40:21 | 200 |     119.153ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/08/13 - 13:40:23 | 200 |      6.8301ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/08/13 - 13:40:23 | 200 |     97.3423ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/08/13 - 13:40:23 | 200 |     93.2697ms |       127.0.0.1 | POST     "/api/show"
time=2025-08-13T13:40:23.865-04:00 level=INFO source=server.go:135 msg="system memory" total="63.1 GiB" free="16.7 GiB" free_swap="40.7 GiB"
time=2025-08-13T13:40:23.866-04:00 level=INFO source=server.go:175 msg=offload library=cuda layers.requested=-1 layers.model=25 layers.offload=23 layers.split="" memory.available="[13.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="14.9 GiB" memory.required.partial="13.4 GiB" memory.required.kv="300.0 MiB" memory.required.allocations="[13.4 GiB]" memory.weights.total="11.7 GiB" memory.weights.repeating="10.7 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="2.0 GiB" memory.graph.partial="2.0 GiB"
time=2025-08-13T13:40:23.936-04:00 level=INFO source=server.go:438 msg="starting llama server" cmd="C:\\Users\\<<REDACTED>>\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\<<REDACTED>>\\.ollama\\models\\blobs\\sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 --ctx-size 8192 --batch-size 512 --n-gpu-layers 23 --threads 8 --no-mmap --parallel 1 --port 61037"
time=2025-08-13T13:40:23.941-04:00 level=INFO source=sched.go:481 msg="loaded runners" count=1
time=2025-08-13T13:40:23.941-04:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
time=2025-08-13T13:40:23.946-04:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server error"
time=2025-08-13T13:40:23.994-04:00 level=INFO source=runner.go:925 msg="starting ollama engine"
time=2025-08-13T13:40:23.996-04:00 level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:61037"
time=2025-08-13T13:40:24.048-04:00 level=INFO source=ggml.go:92 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=315 num_key_values=30
time=2025-08-13T13:40:24.198-04:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from C:\Users\<<REDACTED>>\AppData\Local\Programs\Ollama\lib\ollama\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\<<REDACTED>>\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
time=2025-08-13T13:40:34.003-04:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-08-13T13:40:34.167-04:00 level=INFO source=ggml.go:365 msg="offloading 23 repeating layers to GPU"
time=2025-08-13T13:40:34.167-04:00 level=INFO source=ggml.go:369 msg="offloading output layer to CPU"
time=2025-08-13T13:40:34.167-04:00 level=INFO source=ggml.go:376 msg="offloaded 23/25 layers to GPU"
time=2025-08-13T13:40:34.167-04:00 level=INFO source=ggml.go:379 msg="model weights" buffer=CPU size="2.6 GiB"
time=2025-08-13T13:40:34.167-04:00 level=INFO source=ggml.go:379 msg="model weights" buffer=CUDA0 size="10.2 GiB"
time=2025-08-13T13:40:34.409-04:00 level=INFO source=ggml.go:668 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="2.1 GiB"
time=2025-08-13T13:40:34.409-04:00 level=INFO source=ggml.go:668 msg="compute graph" backend=CPU buffer_type=CPU size="1.2 GiB"
time=2025-08-13T13:40:38.469-04:00 level=INFO source=server.go:637 msg="llama runner started in 14.53 seconds"
[GIN] 2025/08/13 - 13:40:43 | 200 |   19.8989305s |       127.0.0.1 | POST     "/api/chat"

@rick-github commented on GitHub (Aug 13, 2025):

The fourteen minutes appear to be the gap between when the server was ready and when the client first asked the server to do something:

time=2025-08-13T13:26:15.964-04:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-6a32d7c3-bd59-d71f-cbbf-71859d54cfa6 library=cuda variant=v12 compute=8.9 driver=13.0 name="NVIDIA GeForce RTX 4090" total="24.0 GiB" available="22.5 GiB"
[GIN] 2025/08/13 - 13:40:20 | 200 |       1.113ms |       127.0.0.1 | GET      "/"

When the client asked to load gpt-oss:20b at 13:40:23.865, the runner was spawned in 130ms, spent 14.53 seconds loading the model with 23 of 25 layers going into the GPU, and then used about 5.3 seconds doing the actual inference. In all, the time from idle to answer was less than 20 seconds. There's no evidence in this log that the GPU is not being used for inference. What does nvidia-smi show while the inference is running?
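
One way to sample this while a prompt is running (standard nvidia-smi flags, shown here as a suggestion rather than part of the original reply):

nvidia-smi --query-gpu=utilization.gpu,utilization.memory,memory.used --format=csv -l 1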


@anthturner commented on GitHub (Aug 13, 2025):

I figured it out actually. For whatever reason, WSL's ollama instance started eating tons of resources upon restarting my computer (before sending even a single request off to the inference engine, FWIW). Keep in mind, I hadn't been sending anything to WSL's internal IP address, but it seems that WSL's ollama took "ownership" of the GPU and refused to relinquish it to the Windows environment's ollama instance. Once I killed it inside the WSL container, removed ollama from the Ubuntu instance via snap, and rebooted, everything went back to normal. I have no explanation for this, as I've been running ollama inside WSL on a periodic basis for quite a while, but that does seem to have been the root cause for my specific issue.

edit: Not to disregard your question ... I had established it wasn't using the GPU by pulling up task manager and observing the GPU usage percentage while I was asking it an arbitrary question. Despite it being in the "thinking" state, it still showed zero GPU usage and instead indicated a CPU consumption of near-100%.
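
For anyone hitting the same conflict, a sketch of how one might check for and remove a competing instance inside WSL (assuming an Ubuntu distro and a snap-installed ollama, as described above):

wsl -l -v                          # list installed distros and their state
wsl -d Ubuntu -- pgrep -a ollama   # is an ollama process running inside WSL?
wsl -d Ubuntu -- sudo snap remove ollama
wsl --shutdown                     # stop the WSL VM so it releases the GPU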


@rick-github commented on GitHub (Aug 13, 2025):

Task Manager is not reliable for determining GPU usage. nvidia-smi will give better data.
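
For a per-process view (which Task Manager's default graphs don't surface), nvidia-smi's process monitor can help; a brief sample, assuming a recent driver:

nvidia-smi pmon -s um -c 5     # per-process GPU utilization and memory, 5 one-second samples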


@ArKam commented on GitHub (Aug 17, 2025):

If I may add my own experience: with anything handmade (Voxtral or Whisper models loaded via Python), the Windows Task Manager is quite accurate at showing GPU usage (compute/memory), and with either of the AMD cards I'm using (RX 9070 XT or RX 6600) it works like a charm, without any hurdle, as long as you follow the driver installation instructions correctly.

Now, I know my Python programs don't use llama.cpp at their core; they use PyTorch, which is compatible with any ROCm-compatible device, and that is probably what makes the difference.

However, with ollama in the same environment, the binary always seems to start on CPU instead of GPU, but ONLY with the recent AMD card.

With the RX 9070 XT, it doesn't work even with HSA_OVERRIDE_GFX_VERSION=12.0.1 set explicitly.
With the RX 6600 and HSA_OVERRIDE_GFX_VERSION=10.3.2, however, it works fine and the card is detected correctly.
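
For reference, a minimal sketch of how such an override is typically applied to the ollama server (the value here is the one from this comment, not a recommendation):

HSA_OVERRIDE_GFX_VERSION=10.3.2 ollama serve

Or, for a systemd install, add it to the service environment:

sudo systemctl edit ollama     # add under [Service]: Environment="HSA_OVERRIDE_GFX_VERSION=10.3.2"
sudo systemctl restart ollama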

One odd thing, though (I don't know if it is expected): rocminfo retrieves the expected GPU information perfectly, while amd-smi doesn't work. Here are the logs: https://paste.opendev.org/show/bCNBOdJPcOG5ZXgeXOaw/

For the RX 9070 XT run, here is the start log with the command and environment variables: https://paste.opendev.org/show/bI6FgDrKTqOs92ygjYci/
For the RX 9070 XT run, here is the run log: https://paste.opendev.org/show/bza3t1MGwQOfYkkjZhE0/

Running the same command as root ends with the exact same issue.
The missing-module issue is related to the open-source driver; with WSL2 you need to install AMD's official drivers, which install under /opt/amdgpu/.
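
Two sanity checks that can confirm which driver stack is active inside WSL2 (assuming the ROCm tools are installed):

ls /opt/amdgpu/          # present when AMD's official driver package is installed
rocminfo | grep -i gfx   # should list the GPU's gfx target (e.g. gfx1032 for the RX 6600)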

And here is my ollama release version: ollama version is 0.11.4

Again, this could be completely normal if your core layer (which seems to be llama.cpp) doesn't support the latest AMD hardware, but I thought it would be useful for you to have more details and a comparison within AMD's hardware line.


@huanjiSCPing commented on GitHub (Aug 20, 2025):

Sorry, I have to close this issue even though I didn't follow up with more details. The bug appeared when we decided to update a manually installed ollama with
curl -fsSL https://ollama.com/install.sh | sh
My colleague and I chose to REINSTALL ollama using the manual installation method described on the official ollama website. That works, but we never figured out the root cause.
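
For context, the manual installation referred to here is the tarball method from the ollama Linux docs, roughly (URL as documented at the time of this issue; it may change between releases):

curl -LO https://ollama.com/download/ollama-linux-amd64.tgz
sudo tar -C /usr -xzf ollama-linux-amd64.tgz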

Anyway, thanks for your help

Reference: github-starred/ollama#7886