[GH-ISSUE #13930] [BUG] Ollama uses CPU instead of GPU #55628

Closed
opened 2026-04-29 09:30:18 -05:00 by GiteaMirror · 3 comments

Originally created by @discostur on GitHub (Jan 27, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13930

What is the issue?

Ollama detects my Tesla T4 but uses CPU instead of GPU.

Relevant log output

time=2026-01-27T09:26:19.319Z level=INFO source=routes.go:1631 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:8192 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-01-27T09:26:19.343Z level=INFO source=images.go:473 msg="total blobs: 56"
time=2026-01-27T09:26:19.345Z level=INFO source=images.go:480 msg="total unused blobs removed: 0"
time=2026-01-27T09:26:19.346Z level=INFO source=routes.go:1684 msg="Listening on [::]:11434 (version 0.15.2)"
time=2026-01-27T09:26:19.349Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-01-27T09:26:19.353Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 40685"
time=2026-01-27T09:26:20.119Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 46089"
time=2026-01-27T09:26:20.727Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-01-27T09:26:20.727Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 41937"
time=2026-01-27T09:26:20.727Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 38685"
time=2026-01-27T09:26:20.994Z level=INFO source=types.go:42 msg="inference compute" id=GPU-60d0c2c0-f953-d178-82f0-754b2f08821b filter_id="" library=CUDA compute=7.5 name=CUDA0 description="Tesla T4" libdirs=ollama,cuda_v13 driver=13.1 pci_id=0000:01:00.0 type=discrete total="16.0 GiB" available="15.6 GiB"
time=2026-01-27T09:26:20.994Z level=INFO source=routes.go:1725 msg="entering low vram mode" "total vram"="16.0 GiB" threshold="20.0 GiB"
[GIN] 2026/01/27 - 09:32:39 | 200 |    9.428926ms |     172.18.0.17 | GET      "/api/tags"
[GIN] 2026/01/27 - 09:32:39 | 200 |     122.215µs |     172.18.0.17 | GET      "/api/ps"
[GIN] 2026/01/27 - 09:32:53 | 200 |    4.137045ms |     172.18.0.17 | GET      "/api/tags"
[GIN] 2026/01/27 - 09:32:53 | 200 |      49.385µs |     172.18.0.17 | GET      "/api/ps"
[GIN] 2026/01/27 - 09:32:53 | 200 |      64.695µs |     172.18.0.17 | GET      "/api/version"
time=2026-01-27T09:33:04.803Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 44837"
time=2026-01-27T09:33:05.003Z level=WARN source=runner.go:356 msg="unable to refresh free memory, using old values"
time=2026-01-27T09:33:05.006Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
time=2026-01-27T09:33:05.539Z level=INFO source=server.go:245 msg="enabling flash attention"
time=2026-01-27T09:33:05.539Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-10fe673de12c20b74b8d670a9fdf0fd36b43b0a86ffc04daeb175c0a2b98c4f9 --port 34369"
time=2026-01-27T09:33:05.540Z level=INFO source=sched.go:452 msg="system memory" total="125.8 GiB" free="125.5 GiB" free_swap="0 B"
time=2026-01-27T09:33:05.540Z level=INFO source=sched.go:459 msg="gpu memory" id=GPU-60d0c2c0-f953-d178-82f0-754b2f08821b library=CUDA available="15.1 GiB" free="15.6 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-01-27T09:33:05.540Z level=INFO source=server.go:755 msg="loading model" "model layers"=25 requested=-1
time=2026-01-27T09:33:05.565Z level=INFO source=runner.go:1405 msg="starting ollama engine"
time=2026-01-27T09:33:05.566Z level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:34369"
time=2026-01-27T09:33:05.573Z level=INFO source=runner.go:1278 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:16 GPULayers:25[ID:GPU-60d0c2c0-f953-d178-82f0-754b2f08821b Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-01-27T09:33:05.750Z level=INFO source=ggml.go:136 msg="" architecture=gpt-oss file_type=Q4_K_M name=Gpt-Oss-20B description="" num_tensors=459 num_key_values=38
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-skylakex.so
ggml_cuda_init: failed to initialize CUDA: no CUDA-capable device is detected
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v13/libggml-cuda.so
time=2026-01-27T09:33:05.883Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2026-01-27T09:33:05.938Z level=INFO source=runner.go:1278 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:16 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-01-27T09:33:06.161Z level=INFO source=runner.go:1278 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:16 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-01-27T09:33:06.578Z level=INFO source=runner.go:1278 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:16 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-01-27T09:33:06.578Z level=INFO source=ggml.go:482 msg="offloading 0 repeating layers to GPU"
time=2026-01-27T09:33:06.578Z level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2026-01-27T09:33:06.578Z level=INFO source=ggml.go:494 msg="offloaded 0/25 layers to GPU"
time=2026-01-27T09:33:06.578Z level=INFO source=device.go:245 msg="model weights" device=CPU size="11.0 GiB"
time=2026-01-27T09:33:06.578Z level=INFO source=device.go:256 msg="kv cache" device=CPU size="192.0 MiB"
time=2026-01-27T09:33:06.578Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="94.8 MiB"
time=2026-01-27T09:33:06.578Z level=INFO source=device.go:272 msg="total memory" size="11.3 GiB"
time=2026-01-27T09:33:06.578Z level=INFO source=sched.go:526 msg="loaded runners" count=1
time=2026-01-27T09:33:06.578Z level=INFO source=server.go:1347 msg="waiting for llama runner to start responding"
time=2026-01-27T09:33:06.579Z level=INFO source=server.go:1381 msg="waiting for server to become available" status="llm server loading model"
time=2026-01-27T09:33:14.613Z level=INFO source=server.go:1385 msg="llama runner started in 9.07 seconds"

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.15.2

GiteaMirror added the bug label 2026-04-29 09:30:18 -05:00

@rick-github commented on GitHub (Jan 27, 2026):

It looks like you are running in a container, does restarting the container help? Might be this: https://github.com/ollama/ollama/blob/main/docs/troubleshooting.mdx#linux-docker
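
For reference, a quick way to check whether the container has lost access to the GPU is to run `nvidia-smi` inside it and compare with the host (the container name `ollama` is an assumption; substitute your own):

```shell
# Hypothetical container name "ollama" -- adjust to your setup.
# Check GPU visibility on the host first:
nvidia-smi

# Then check inside the container:
docker exec -it ollama nvidia-smi

# If nvidia-smi works on the host but fails inside the container,
# restarting the container usually restores GPU access:
docker restart ollama
```

This matches the symptom in the logs above: the T4 is discovered at server startup, but `ggml_cuda_init` later reports "no CUDA-capable device is detected", so the model falls back to CPU.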


@discostur commented on GitHub (Jan 27, 2026):

@rick-github ok seems like a simple reboot fixed it ... not sure why ;) thanks anyway


@rick-github commented on GitHub (Jan 27, 2026):

A reboot restarted the container. If ollama starts using the CPU instead of the GPU again, see the link.

Reference: github-starred/ollama#55628