[GH-ISSUE #13836] Using GPU #71120

Closed
opened 2026-05-05 00:25:20 -05:00 by GiteaMirror · 24 comments

Originally created by @Eb7CAPJi on GitHub (Jan 22, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13836

What is the issue?

After updating to version 0.14.3, it refuses to use the GPU.
Do you even perform basic testing before releasing? Otherwise, you keep pushing half-baked versions into production.

Relevant log output


OS

Windows

GPU

NVIDIA GeForce RTX 3090

CPU

Intel

Ollama version

0.14.3

GiteaMirror added the intel, bug, nvidia, needs more info labels 2026-05-05 00:25:23 -05:00

@rick-github commented on GitHub (Jan 22, 2026):

Server log (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.mdx) may help in debugging.
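For reference, on a default Windows install the logs are typically under %LOCALAPPDATA%\Ollama (per the troubleshooting guide linked above; a custom install location will differ), with server.log holding the most recent server output:

explorer %LOCALAPPDATA%\Ollama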


@Eb7CAPJi commented on GitHub (Jan 22, 2026):

time=2026-01-22T16:15:05.622+03:00 level=INFO source=routes.go:1629 msg="server config" env="map[CUDA_VISIBLE_DEVICES:0 GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:65536 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:12h0m0s OLLAMA_KV_CACHE_TYPE:q8_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:e:\.ollama\models\ OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES:]"
time=2026-01-22T16:15:05.655+03:00 level=INFO source=images.go:501 msg="total blobs: 170"
time=2026-01-22T16:15:05.661+03:00 level=INFO source=images.go:508 msg="total unused blobs removed: 0"
time=2026-01-22T16:15:05.666+03:00 level=INFO source=routes.go:1682 msg="Listening on 127.0.0.1:11434 (version 0.14.3)"
time=2026-01-22T16:15:05.667+03:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-01-22T16:15:05.691+03:00 level=WARN source=runner.go:485 msg="user overrode visible devices" CUDA_VISIBLE_DEVICES=0
time=2026-01-22T16:15:05.691+03:00 level=WARN source=runner.go:489 msg="if GPUs are not correctly discovered, unset and try again"
time=2026-01-22T16:15:05.701+03:00 level=INFO source=server.go:429 msg="starting runner" cmd="C:\Ollama\ollama.exe runner --ollama-engine --port 51769"
time=2026-01-22T16:15:05.895+03:00 level=INFO source=server.go:429 msg="starting runner" cmd="C:\Ollama\ollama.exe runner --ollama-engine --port 51776"
time=2026-01-22T16:15:06.033+03:00 level=INFO source=server.go:429 msg="starting runner" cmd="C:\Ollama\ollama.exe runner --ollama-engine --port 51783"
time=2026-01-22T16:15:06.147+03:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled. To enable, set OLLAMA_VULKAN=1"
time=2026-01-22T16:15:06.147+03:00 level=INFO source=server.go:429 msg="starting runner" cmd="C:\Ollama\ollama.exe runner --ollama-engine --port 51793"
time=2026-01-22T16:15:06.355+03:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-43d944cd-e7c6-c3e9-4414-6ab0ef6870b2 filter_id="" library=CUDA compute=8.6 name=CUDA0 description="NVIDIA GeForce RTX 3090" libdirs=ollama,cuda_v12 driver=12.7 pci_id=0000:01:00.0 type=discrete total="24.0 GiB" available="1.6 GiB"
[GIN] 2026/01/22 - 16:15:06 | 200 | 13.3592ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2026/01/22 - 16:15:06 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2026/01/22 - 16:15:06 | 200 | 1.0428ms | 127.0.0.1 | GET "/api/version"
time=2026-01-22T16:15:14.284+03:00 level=INFO source=server.go:429 msg="starting runner" cmd="C:\Ollama\ollama.exe runner --ollama-engine --port 51846"
time=2026-01-22T16:15:14.463+03:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2026-01-22T16:15:14.463+03:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=8 efficiency=0 threads=16
time=2026-01-22T16:15:14.574+03:00 level=INFO source=server.go:245 msg="enabling flash attention"
time=2026-01-22T16:15:14.575+03:00 level=INFO source=server.go:429 msg="starting runner" cmd="C:\Ollama\ollama.exe runner --ollama-engine --model e:\.ollama\models\blobs\sha256-4e4f9cd88d6456e4f389e7262eca4a8d565211e2b22ece9ca7a8556168ff3c66 --port 51853"
time=2026-01-22T16:15:14.578+03:00 level=INFO source=sched.go:452 msg="system memory" total="63.9 GiB" free="40.7 GiB" free_swap="30.6 GiB"
time=2026-01-22T16:15:14.578+03:00 level=INFO source=sched.go:459 msg="gpu memory" id=GPU-43d944cd-e7c6-c3e9-4414-6ab0ef6870b2 library=CUDA available="1.1 GiB" free="1.5 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-01-22T16:15:14.578+03:00 level=INFO source=server.go:755 msg="loading model" "model layers"=25 requested=-1
time=2026-01-22T16:15:14.614+03:00 level=INFO source=runner.go:1405 msg="starting ollama engine"
time=2026-01-22T16:15:14.635+03:00 level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:51853"
time=2026-01-22T16:15:14.650+03:00 level=INFO source=runner.go:1278 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType:q8_0 NumThreads:8 GPULayers:25[ID:GPU-43d944cd-e7c6-c3e9-4414-6ab0ef6870b2 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-01-22T16:15:14.692+03:00 level=INFO source=ggml.go:136 msg="" architecture=gpt-oss file_type=F16 name=Gpt-Oss-20B description="" num_tensors=459 num_key_values=38
load_backend: loaded CPU backend from C:\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes, ID: GPU-43d944cd-e7c6-c3e9-4414-6ab0ef6870b2
load_backend: loaded CUDA backend from C:\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2026-01-22T16:15:14.785+03:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2026-01-22T16:15:15.294+03:00 level=INFO source=runner.go:1278 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType:q8_0 NumThreads:8 GPULayers:1[ID:GPU-43d944cd-e7c6-c3e9-4414-6ab0ef6870b2 Layers:1(23..23)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-01-22T16:15:15.449+03:00 level=INFO source=runner.go:1278 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType:q8_0 NumThreads:8 GPULayers:1[ID:GPU-43d944cd-e7c6-c3e9-4414-6ab0ef6870b2 Layers:1(23..23)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-01-22T16:15:15.740+03:00 level=INFO source=runner.go:1278 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType:q8_0 NumThreads:8 GPULayers:1[ID:GPU-43d944cd-e7c6-c3e9-4414-6ab0ef6870b2 Layers:1(23..23)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-01-22T16:15:15.740+03:00 level=INFO source=ggml.go:482 msg="offloading 1 repeating layers to GPU"
time=2026-01-22T16:15:15.741+03:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2026-01-22T16:15:15.741+03:00 level=INFO source=ggml.go:494 msg="offloaded 1/25 layers to GPU"
time=2026-01-22T16:15:15.750+03:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="455.5 MiB"
time=2026-01-22T16:15:15.750+03:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="12.4 GiB"
time=2026-01-22T16:15:15.750+03:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="68.0 MiB"
time=2026-01-22T16:15:15.750+03:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="805.4 MiB"
time=2026-01-22T16:15:15.750+03:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="547.4 MiB"
time=2026-01-22T16:15:15.750+03:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="5.6 MiB"
time=2026-01-22T16:15:15.750+03:00 level=INFO source=device.go:272 msg="total memory" size="14.2 GiB"
time=2026-01-22T16:15:15.750+03:00 level=INFO source=sched.go:526 msg="loaded runners" count=1
time=2026-01-22T16:15:15.750+03:00 level=INFO source=server.go:1347 msg="waiting for llama runner to start responding"
time=2026-01-22T16:15:15.756+03:00 level=INFO source=server.go:1381 msg="waiting for server to become available" status="llm server loading model"
time=2026-01-22T16:15:20.366+03:00 level=INFO source=server.go:1385 msg="llama runner started in 5.79 seconds"
[GIN] 2026/01/22 - 16:15:36 | 200 | 22.8055312s | 127.0.0.1 | POST "/api/chat"
[GIN] 2026/01/22 - 16:15:55 | 200 | 16.1012ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2026/01/22 - 16:15:55 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2026/01/22 - 16:16:02 | 200 | 14.4309ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2026/01/22 - 16:16:02 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2026/01/22 - 16:16:06 | 200 | 15.9733ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2026/01/22 - 16:16:06 | 200 | 531.4µs | 127.0.0.1 | GET "/api/ps"
[GIN] 2026/01/22 - 16:16:09 | 200 | 32.6872368s | 127.0.0.1 | POST "/api/chat"
[GIN] 2026/01/22 - 16:16:32 | 200 | 23.1922197s | 127.0.0.1 | POST "/api/chat"
[GIN] 2026/01/22 - 16:16:43 | 200 | 15.9981ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2026/01/22 - 16:16:43 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2026/01/22 - 16:16:45 | 200 | 13.9975ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2026/01/22 - 16:16:45 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2026/01/22 - 16:16:48 | 200 | 15.5631ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2026/01/22 - 16:16:48 | 200 | 513.3µs | 127.0.0.1 | GET "/api/ps"
[GIN] 2026/01/22 - 16:17:17 | 200 | 44.3396272s | 127.0.0.1 | POST "/api/chat"
ggml_backend_cuda_device_get_memory device GPU-43d944cd-e7c6-c3e9-4414-6ab0ef6870b2 utilizing NVML memory reporting free: 333504512 total: 25769803776
time=2026-01-22T16:17:18.155+03:00 level=INFO source=server.go:429 msg="starting runner" cmd="C:\Ollama\ollama.exe runner --ollama-engine --port 52593"
time=2026-01-22T16:17:18.361+03:00 level=INFO source=server.go:429 msg="starting runner" cmd="C:\Ollama\ollama.exe runner --ollama-engine --port 52601"
time=2026-01-22T16:17:18.545+03:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2026-01-22T16:17:18.545+03:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=8 efficiency=0 threads=16
time=2026-01-22T16:17:18.604+03:00 level=INFO source=server.go:245 msg="enabling flash attention"
time=2026-01-22T16:17:18.604+03:00 level=INFO source=server.go:429 msg="starting runner" cmd="C:\Ollama\ollama.exe runner --ollama-engine --model e:\.ollama\models\blobs\sha256-aaaa4f4b6e97126792fd433a2985b2a0bee5c4d69d9bf2cbcc7d0795c53b1d81 --port 52609"
time=2026-01-22T16:17:18.607+03:00 level=INFO source=sched.go:452 msg="system memory" total="63.9 GiB" free="41.0 GiB" free_swap="30.2 GiB"
time=2026-01-22T16:17:18.607+03:00 level=INFO source=sched.go:459 msg="gpu memory" id=GPU-43d944cd-e7c6-c3e9-4414-6ab0ef6870b2 library=CUDA available="1.1 GiB" free="1.6 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-01-22T16:17:18.607+03:00 level=INFO source=server.go:755 msg="loading model" "model layers"=37 requested=-1
time=2026-01-22T16:17:18.646+03:00 level=INFO source=runner.go:1405 msg="starting ollama engine"
time=2026-01-22T16:17:18.667+03:00 level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:52609"
time=2026-01-22T16:17:18.678+03:00 level=INFO source=runner.go:1278 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType:q8_0 NumThreads:8 GPULayers:37[ID:GPU-43d944cd-e7c6-c3e9-4414-6ab0ef6870b2 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-01-22T16:17:18.701+03:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3 file_type=Q8_0 name="DeepSeek R1 0528 Qwen3 8B" description="" num_tensors=399 num_key_values=33
load_backend: loaded CPU backend from C:\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes, ID: GPU-43d944cd-e7c6-c3e9-4414-6ab0ef6870b2
load_backend: loaded CUDA backend from C:\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2026-01-22T16:17:18.794+03:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2026-01-22T16:17:19.103+03:00 level=INFO source=runner.go:1278 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType:q8_0 NumThreads:8 GPULayers:1[ID:GPU-43d944cd-e7c6-c3e9-4414-6ab0ef6870b2 Layers:1(35..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-01-22T16:17:19.268+03:00 level=INFO source=runner.go:1278 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType:q8_0 NumThreads:8 GPULayers:1[ID:GPU-43d944cd-e7c6-c3e9-4414-6ab0ef6870b2 Layers:1(35..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-01-22T16:17:20.016+03:00 level=INFO source=runner.go:1278 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:65536 KvCacheType:q8_0 NumThreads:8 GPULayers:1[ID:GPU-43d944cd-e7c6-c3e9-4414-6ab0ef6870b2 Layers:1(35..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-01-22T16:17:20.016+03:00 level=INFO source=ggml.go:482 msg="offloading 1 repeating layers to GPU"
time=2026-01-22T16:17:20.017+03:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2026-01-22T16:17:20.017+03:00 level=INFO source=ggml.go:494 msg="offloaded 1/37 layers to GPU"
time=2026-01-22T16:17:20.021+03:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="199.3 MiB"
time=2026-01-22T16:17:20.021+03:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="8.0 GiB"
time=2026-01-22T16:17:20.021+03:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="136.0 MiB"
time=2026-01-22T16:17:20.021+03:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="4.6 GiB"
time=2026-01-22T16:17:20.021+03:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="674.0 MiB"
time=2026-01-22T16:17:20.021+03:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="8.0 MiB"
time=2026-01-22T16:17:20.021+03:00 level=INFO source=device.go:272 msg="total memory" size="13.7 GiB"
time=2026-01-22T16:17:20.021+03:00 level=INFO source=sched.go:526 msg="loaded runners" count=1
time=2026-01-22T16:17:20.021+03:00 level=INFO source=server.go:1347 msg="waiting for llama runner to start responding"
time=2026-01-22T16:17:20.025+03:00 level=INFO source=server.go:1381 msg="waiting for server to become available" status="llm server loading model"
time=2026-01-22T16:17:22.843+03:00 level=INFO source=server.go:1385 msg="llama runner started in 4.24 seconds"
[GIN] 2026/01/22 - 16:20:27 | 200 | 3m26s | 127.0.0.1 | POST "/api/chat"
[GIN] 2026/01/22 - 16:24:15 | 200 | 15.5055ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2026/01/22 - 16:24:15 | 200 | 1.0003ms | 127.0.0.1 | GET "/api/ps"


@rick-github commented on GitHub (Jan 22, 2026):

What's the output of nvidia-smi?


@mchiang0610 commented on GitHub (Jan 22, 2026):

hey @lnix so sorry about your experience. We do test across various hardware, including the 3090 you are using.

May I ask what model you are using? Would love to help troubleshoot this.

What happens is that if you use a model that needs more GPU memory than you have (or you hit memory limits due to context length settings), Ollama will start offloading to the CPU.
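A quick way to see how a loaded model ended up split is the PROCESSOR column of ollama ps; anything other than 100% GPU means some layers stayed on the CPU:

ollama ps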


@rick-github commented on GitHub (Jan 22, 2026):

The model is deepseek-r1:8b-0528-qwen3-q8_0. Based on the available VRAM, I suspect that there are other processes using the GPU.
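One way to check what else is holding VRAM (a generic nvidia-smi sketch, not specific to this setup):

nvidia-smi
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv

The first command shows the per-process memory table; the second lists only CUDA compute processes in machine-readable form.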


@KeepitSimpleAnalytics commented on GitHub (Jan 22, 2026):

I'm having the same issue with 0.14.3 -- I have an A6000 Pro Blackwell


@rick-github commented on GitHub (Jan 22, 2026):

Server log may help in debugging.


@NagelTuev commented on GitHub (Jan 23, 2026):

Had the same issue; an update of the NVIDIA driver fixed the problem for me, on a 1080 Ti and a 5090.
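For reference, the installed driver version can be checked with a standard nvidia-smi query:

nvidia-smi --query-gpu=driver_version --format=csv,noheader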


@charliboy commented on GitHub (Jan 26, 2026):

I have the same issue. Ollama version: 0.15.1, CUDA: 13.0, GPU: RTX A6000 Ada, OS: Ubuntu 22.04.


@rick-github commented on GitHub (Jan 26, 2026):

Server log may help in debugging.


@charliboy commented on GitHub (Jan 26, 2026):

> I have the same issue. Ollama version: 0.15.1, CUDA: 13.0, GPU: RTX A6000 Ada, OS: Ubuntu 22.04.

I solved this problem by setting the environment variable OLLAMA_GPU=1, but ollama will no longer use memory.


@rick-github commented on GitHub (Jan 26, 2026):

OLLAMA_GPU is not an ollama configuration variable.
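For completeness: the supported knob for GPU offload is the standard num_gpu model option (the number of layers to send to the GPU). A sketch of passing it per request, reusing the model name from this thread; whether the layers actually fit still depends on free VRAM:

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:8b-0528-qwen3-q8_0",
  "prompt": "hello",
  "options": { "num_gpu": 37 }
}'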


@wangwuqi commented on GitHub (Jan 28, 2026):

I have the same issue. Ollama does not use the GPU, it only uses the CPU.
ollama version: 0.15.1
cuda: 11.7
model: glm-4.7-flash, model size 19 GB
GPU: 4x NVIDIA 3090


@rick-github commented on GitHub (Jan 28, 2026):

Server log may help in debugging.


@rick-github commented on GitHub (Jan 29, 2026):

> part of the logs

The full log.


@wangwuqi commented on GitHub (Jan 30, 2026):

I upgraded CUDA to 12.8 and restarted the server, and the problem was solved.


@charliboy commented on GitHub (Feb 4, 2026):

> > part of the logs
>
> The full log.

> I have the same issue. Ollama version: 0.15.1, CUDA: 13.0, GPU: RTX A6000 Ada, OS: Ubuntu 22.04.

Today I ran the BGE-M3 model, and Ollama loaded the model onto the CPU while the GPU was idle. Attached is the complete log.

ollama.log (https://github.com/user-attachments/files/25059800/ollama.log)


@rick-github commented on GitHub (Feb 4, 2026):

Not the full log, but enough to show that the runner is not detecting any accelerator backends:

Feb 04 08:18:55 suma ollama[3739816]: time=2026-02-04T08:18:55.948+08:00 level=INFO source=runner.go:965 msg="starting go runner"
Feb 04 08:18:55 suma ollama[3739816]: time=2026-02-04T08:18:55.949+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)

Increase the debugging level by setting OLLAMA_DEBUG=2 in the server environment, restart the server, and then run

journalctl -u ollama --no-pager --since "$(systemctl show ollama --property=ActiveEnterTimestamp --value)" | sed -ne '/server config/,/inference compute/p'
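If the server runs under systemd (which the journalctl command above assumes), one common way to set the variable is a drop-in override, roughly:

sudo systemctl edit ollama
# in the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_DEBUG=2"
sudo systemctl restart ollama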

@charliboy commented on GitHub (Feb 4, 2026):

> Not the full log, but enough to show that the runner is not detecting any accelerator backends:
>
> Feb 04 08:18:55 suma ollama[3739816]: time=2026-02-04T08:18:55.948+08:00 level=INFO source=runner.go:965 msg="starting go runner"
> Feb 04 08:18:55 suma ollama[3739816]: time=2026-02-04T08:18:55.949+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
>
> Increase the debugging level by setting OLLAMA_DEBUG=2 in the server environment, restart the server, and then run
>
> journalctl -u ollama --no-pager --since "$(systemctl show ollama --property=ActiveEnterTimestamp --value)" | sed -ne '/server config/,/inference compute/p'

I saw this in the log submitted earlier. Could it be a flash attention issue?

Feb 04 08:18:55 suma ollama[3739816]: time=2026-02-04T08:18:55.921+08:00 level=WARN source=server.go:207 msg="flash attention enabled but not supported by model"

The attachment is the debug-level log:

ollama_debug.log (https://github.com/user-attachments/files/25060744/ollama_debug.log)


@rick-github commented on GitHub (Feb 4, 2026):

> I saw this in the log submitted earlier. Could it be a flash attention issue:

No.

Feb 04 10:24:37 suma ollama[975008]: time=2026-02-04T10:24:37.017+08:00 level=WARN source=runner.go:485 msg="user overrode visible devices" CUDA_VISIBLE_DEVICES=0,1
Feb 04 10:24:37 suma ollama[975008]: time=2026-02-04T10:24:37.017+08:00 level=WARN source=runner.go:489 msg="if GPUs are not correctly discovered, unset and try again"

Try unsetting CUDA_VISIBLE_DEVICES.
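Assuming the variable is injected through the systemd unit rather than the shell, the environment the service actually starts with can be confirmed via:

systemctl show ollama --property=Environment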

Feb 04 10:24:37 suma ollama[975008]: time=2026-02-04T10:24:37.052+08:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
Feb 04 10:24:37 suma ollama[975008]: time=2026-02-04T10:24:37.053+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)

What's the output of

find /usr/local/lib/ollama

@charliboy commented on GitHub (Feb 4, 2026):

> Try unsetting CUDA_VISIBLE_DEVICES.

I commented out this environment variable, but the problem still persists. Please see the attachment:

ollama_debug (rev02).log (https://github.com/user-attachments/files/25061078/ollama_debug.rev02.log)

> What's the output of
> find /usr/local/lib/ollama

(base) dbcszh@suma:~$ find /usr/local/lib/ollama
/usr/local/lib/ollama


@charliboy commented on GitHub (Feb 4, 2026):

I upgraded from 0.15.1 to 0.15.4 with the configuration otherwise unchanged. After restarting, it seems to be able to use the GPU normally again. On 0.15.1 I restarted it several times, but it only used the CPU. I don't know whether the GPU-unavailability issue will come back in 0.15.4.


@rick-github commented on GitHub (Feb 4, 2026):

(base) dbcszh@suma:~$ find /usr/local/lib/ollama
/usr/local/lib/ollama

If this was the full output of the find command, then your installation was incomplete. Upgrading will have added the missing files.
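A quick sanity check after reinstalling (filenames inferred from the Windows paths in the log earlier in this thread, e.g. a cuda_v12 subdirectory with ggml backend libraries; exact names vary by release):

find /usr/local/lib/ollama -name '*ggml*' -o -name '*cuda*'

This should list the backend libraries, not just the bare directory.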


@charliboy commented on GitHub (Feb 5, 2026):

> (base) dbcszh@suma:~$ find /usr/local/lib/ollama
> /usr/local/lib/ollama
>
> If this was the full output of the find command, then your installation was incomplete. Upgrading will have added the missing files.

It was the entire output of the find command at that time. I have now tried it again, and this time it outputs more content, which is very strange. When I ran ollama there were no errors or even warnings. I always download the file "ollama-linux-amd64.tar.zst" and install it with "tar -xf ollama-linux-amd64.tar.zst -C /usr/local".
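For what it's worth, the documented one-line Linux install script downloads and unpacks those libraries in a single step, which avoids a partially extracted manual install:

curl -fsSL https://ollama.com/install.sh | sh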
