[GH-ISSUE #13726] Since version 0.13.2, ollama on Arch cannot detect discrete GPU NVIDIA GeForce GTX 970M on hybrid laptop #55512

Closed
opened 2026-04-29 09:19:22 -05:00 by GiteaMirror · 3 comments

Originally created by @emiltoacs on GitHub (Jan 15, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13726

Ollama stopped detecting my NVIDIA card in ollama 0.13.2, although it works with 0.13.1

I have a hybrid laptop (integrated GPU + discrete GPU) running Arch Linux, using `ollama-cuda12-bin` because NVIDIA Maxwell cards do not support CUDA 13. Since ollama version 0.13.2, ollama cannot run on my NVIDIA GPU, a GTX 970M. With `ollama-cuda12-bin` version 0.13.1, my NVIDIA card is detected correctly and ollama runs on it.

The bug is still present in version 0.14.1.
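A quick way to triage which backend discovery picked is to pull the `library=` field out of the server's "inference compute" log line. A minimal sketch, run here against a captured log line (with the systemd service, the line would come from `journalctl -u ollama.service` instead; the unit name is an assumption):

```shell
# Extract the discovered compute library from an ollama "inference compute"
# log line. The working 0.13.1 run reports library=CUDA; the broken 0.14.1
# run reports library=cpu.
log_line='source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu'
printf '%s\n' "$log_line" | grep -o 'library=[A-Za-z0-9]*'
```

The same filter applied to the 0.13.1 log below prints `library=CUDA`.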

Output of `nvidia-smi -L`:

GPU 0: NVIDIA GeForce GTX 970M (UUID: GPU-ac90367f-1113-7f49-c527-d785b1684ce9)

Log when it works in 0.13.1

Log when ollama starts correctly in 0.13.1:

source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES:0,1,2,3 GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/var/lib/ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
source=images.go:522 msg="total blobs: 26"
source=images.go:529 msg="total unused blobs removed: 0"
source=routes.go:1597 msg="Listening on 127.0.0.1:11434 (version 0.13.1)"
source=runner.go:67 msg="discovering available GPUs..."
source=runner.go:484 msg="user overrode visible devices" CUDA_VISIBLE_DEVICES=0,1,2,3
source=runner.go:488 msg="if GPUs are not correctly discovered, unset and try again"
source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 35545"
source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 37359"
source=types.go:42 msg="inference compute" id=GPU-ac90367f-1113-7f49-c527-d785b1684ce9 filter_id="" library=CUDA compute=5.2 name=CUDA0 description="NVIDIA GeForce GTX 970M" libdirs=ollama driver=13.0 pci_id=0000:01:00.0 type=discrete total="6.0 GiB" available="5.9 GiB"
source=routes.go:1638 msg="entering low vram mode" "total vram"="6.0 GiB" threshold="20.0 GiB"

Log when running the model gemma3:4b-it-qat correctly in 0.13.1, for instance:

[GIN] 2026/01/15 - 09:54:29 | 200 |     129.625µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/15 - 09:54:29 | 200 |    4.826244ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/01/15 - 09:54:43 | 200 |      24.179µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/15 - 09:54:43 | 200 |  107.799622ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/01/15 - 09:54:43 | 200 |   84.906352ms |       127.0.0.1 | POST     "/api/show"
source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 42589"
source=server.go:209 msg="enabling flash attention"
source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /var/lib/ollama/blobs/sha256-529850705c0884a283b87d3b261d36ee30821e16f0310962ba977b456ad3b8cd --port 39057"
source=sched.go:443 msg="system memory" total="31.2 GiB" free="23.1 GiB" free_swap="47.5 GiB"
source=sched.go:450 msg="gpu memory" id=GPU-ac90367f-1113-7f49-c527-d785b1684ce9 library=CUDA available="5.4 GiB" free="5.9 GiB" minimum="457.0 MiB" overhead="0 B"
source=server.go:702 msg="loading model" "model layers"=35 requested=-1
source=runner.go:1398 msg="starting ollama engine"
source=runner.go:1433 msg="Server listening on 127.0.0.1:39057"
source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:4 GPULayers:35[ID:GPU-ac90367f-1113-7f49-c527-d785b1684ce9 Layers:35(0..34)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
source=ggml.go:136 msg="" architecture=gemma3 file_type=Q4_0 name="Gemma3 4b It Qa_0 Qat Hf" description="" num_tensors=883 num_key_values=42
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce GTX 970M, compute capability 5.2, VMM: yes, ID: GPU-ac90367f-1113-7f49-c527-d785b1684ce9
load_backend: loaded CUDA backend from /usr/lib/ollama/libggml-cuda.so
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:4 GPULayers:35[ID:GPU-ac90367f-1113-7f49-c527-d785b1684ce9 Layers:35(0..34)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:4 GPULayers:35[ID:GPU-ac90367f-1113-7f49-c527-d785b1684ce9 Layers:35(0..34)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
source=ggml.go:482 msg="offloading 34 repeating layers to GPU"
source=ggml.go:489 msg="offloading output layer to GPU"
source=ggml.go:494 msg="offloaded 35/35 layers to GPU"
source=device.go:240 msg="model weights" device=CUDA0 size="3.7 GiB"
source=device.go:245 msg="model weights" device=CPU size="1.3 GiB"
source=device.go:251 msg="kv cache" device=CUDA0 size="254.0 MiB"
source=device.go:262 msg="compute graph" device=CUDA0 size="1.2 GiB"
source=device.go:267 msg="compute graph" device=CPU size="5.0 MiB"
source=device.go:272 msg="total memory" size="6.4 GiB"
source=sched.go:517 msg="loaded runners" count=1
source=server.go:1294 msg="waiting for llama runner to start responding"
source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
source=server.go:1332 msg="llama runner started in 9.58 seconds"
[GIN] 2026/01/15 - 09:54:53 | 200 |  9.961565254s |       127.0.0.1 | POST     "/api/generate"

Log when it fails in version 0.14.1

Log when ollama starts in 0.14.1:

source=routes.go:1614 msg="server config" env="map[CUDA_VISIBLE_DEVICES:0,1,2,3 GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/var/lib/ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
source=images.go:499 msg="total blobs: 26"
source=images.go:506 msg="total unused blobs removed: 0"
source=routes.go:1667 msg="Listening on 127.0.0.1:11434 (version 0.14.1)"
source=runner.go:67 msg="discovering available GPUs..."
source=runner.go:485 msg="user overrode visible devices" CUDA_VISIBLE_DEVICES=0,1,2,3
source=runner.go:489 msg="if GPUs are not correctly discovered, unset and try again"
source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 38909"
source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="31.2 GiB" available="17.3 GiB"
source=routes.go:1708 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"
[GIN] 2026/01/15 - 10:17:05 | 200 |      70.962µs |       127.0.0.1 | GET      "/api/version"

Log when running the model gemma3:4b-it-qat in 0.14.1, for instance:

[GIN] 2026/01/15 - 10:21:30 | 200 |      29.904µs |       127.0.0.1 | HEAD     "/"
[GIN] 2026/01/15 - 10:21:31 | 200 |  209.123423ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/01/15 - 10:21:31 | 200 |  189.735896ms |       127.0.0.1 | POST     "/api/show"
source=server.go:245 msg="enabling flash attention"
source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /var/lib/ollama/blobs/sha256-529850705c0884a283b87d3b261d36ee30821e16f0310962ba977b456ad3b8cd --port 41595"
source=sched.go:452 msg="system memory" total="31.2 GiB" free="17.3 GiB" free_swap="47.3 GiB"
source=server.go:755 msg="loading model" "model layers"=35 requested=-1
source=runner.go:1405 msg="starting ollama engine"
source=runner.go:1440 msg="Server listening on 127.0.0.1:41595"
source=runner.go:1278 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:4 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
source=ggml.go:136 msg="" architecture=gemma3 file_type=Q4_0 name="Gemma3 4b It Qa_0 Qat Hf" description="" num_tensors=883 num_key_values=42
source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
source=runner.go:1278 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:4 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
source=runner.go:1278 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:4 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
source=ggml.go:482 msg="offloading 0 repeating layers to GPU"
source=ggml.go:486 msg="offloading output layer to CPU"
source=ggml.go:494 msg="offloaded 0/35 layers to GPU"
source=device.go:245 msg="model weights" device=CPU size="5.0 GiB"
source=device.go:256 msg="kv cache" device=CPU size="254.0 MiB"
source=device.go:267 msg="compute graph" device=CPU size="126.9 MiB"
source=device.go:272 msg="total memory" size="5.3 GiB"
source=sched.go:526 msg="loaded runners" count=1
source=server.go:1347 msg="waiting for llama runner to start responding"
source=server.go:1381 msg="waiting for server to become available" status="llm server loading model"
source=server.go:1385 msg="llama runner started in 8.88 seconds"
[GIN] 2026/01/15 - 10:21:40 | 200 |  9.166107971s |       127.0.0.1 | POST     "/api/generate"

OS

Linux 6.18.5-arch1-1

GPU

Intel Corporation Skylake-H GT2 [HD Graphics 530]
NVIDIA Corporation GM204M [GeForce GTX 960 OEM / 970M]

CPU

Intel Core i7-6700HQ

Ollama version

0.14.1

GiteaMirror added the bug label 2026-04-29 09:19:22 -05:00
@rick-github commented on GitHub (Jan 15, 2026):

Set OLLAMA_DEBUG=2 in the server environment and post the log.

Why have you set CUDA_VISIBLE_DEVICES?
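For reference, one way to set this on Arch, where ollama typically runs as a systemd service (assuming the packaged unit is named `ollama.service`), is a drop-in created with `sudo systemctl edit ollama.service`:

```ini
[Service]
Environment="OLLAMA_DEBUG=2"
```

followed by `sudo systemctl restart ollama`, then `journalctl -u ollama.service -f` to capture the log.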

@emiltoacs commented on GitHub (Jan 15, 2026):

Oops, I tried different values for CUDA_VISIBLE_DEVICES and forgot to remove it. I have unset the variable.

Here is the log with OLLAMA_DEBUG=2:

source=routes.go:1614 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/var/lib/ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
source=images.go:499 msg="total blobs: 26"
source=images.go:506 msg="total unused blobs removed: 0"
source=routes.go:1667 msg="Listening on 127.0.0.1:11434 (version 0.14.1)"
 source=sched.go:121 msg="starting llm scheduler"
source=runner.go:67 msg="discovering available GPUs..."
 source=runner.go:440 msg="starting runner for device discovery" libDirs=[/usr/lib/ollama] extraEnvs=map[]
source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 34747"
 source=server.go:430 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/bin OLLAMA_MODELS=/var/lib/ollama CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_LAUNCH_BLOCKING=0 CUDA_MODULE_LOADING=LAZY CUDA_MODULE_DATA_LOADING=LAZY CUDA_CACHE_MAXSIZE=2147483648 CUDA_CACHE_PATH=/var/cache/cuda CUDA_LOG_FILE=/var/log/cuda.log OLLAMA_HOST=http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE=5m OLLAMA_LOAD_TIMEOUT=5m OLLAMA_CONTEXT_LENGTH=4096 OLLAMA_MAX_LOADED_MODELS=0 OLLAMA_GPU_OVERHEAD=0 OLLAMA_MAX_QUEUE=512 OLLAMA_NUM_PARALLEL=1 OLLAMA_NOHISTORY=false OLLAMA_NOPRUNE=false OLLAMA_FLASH_ATTENTION=true OLLAMA_SCHED_SPREAD=false OLLAMA_DEBUG=2 LD_LIBRARY_PATH=/usr/lib/ollama OLLAMA_LIBRARY_PATH=/usr/lib/ollama
source=runner.go:1405 msg="starting ollama engine"
source=runner.go:1440 msg="Server listening on 127.0.0.1:34747"
 source=gguf.go:589 msg=general.architecture type=string
 source=gguf.go:589 msg=tokenizer.ggml.model type=string
 source=ggml.go:296 msg="key with type not found" key=general.alignment default=32
 source=ggml.go:296 msg="key with type not found" key=general.alignment default=32
 source=ggml.go:296 msg="key with type not found" key=general.file_type default=0
 source=ggml.go:296 msg="key with type not found" key=general.name default=""
 source=ggml.go:296 msg="key with type not found" key=general.description default=""
source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
 source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
 source=ggml.go:296 msg="key with type not found" key=llama.block_count default=0
 source=ggml.go:296 msg="key with type not found" key=llama.pooling_type default=0
 source=ggml.go:296 msg="key with type not found" key=llama.expert_count default=0
 source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
 source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
 source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
 source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
 source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
 source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
 source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
 source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
 source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
 source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.pre default=""
 source=ggml.go:296 msg="key with type not found" key=llama.block_count default=0
 source=ggml.go:296 msg="key with type not found" key=llama.embedding_length default=0
 source=ggml.go:296 msg="key with type not found" key=llama.attention.head_count default=0
 source=ggml.go:296 msg="key with type not found" key=llama.attention.head_count_kv default=0
 source=ggml.go:296 msg="key with type not found" key=llama.attention.key_length default=0
 source=ggml.go:296 msg="key with type not found" key=llama.rope.dimension_count default=0
 source=ggml.go:296 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
 source=ggml.go:296 msg="key with type not found" key=llama.rope.freq_base default=100000
 source=ggml.go:296 msg="key with type not found" key=llama.rope.scaling.factor default=1
 source=runner.go:1380 msg="dummy model load took" duration=7.001259ms
 source=runner.go:1385 msg="gathering device infos took" duration=711ns
 source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH=[/usr/lib/ollama] devices=[]
 source=runner.go:437 msg="bootstrap discovery took" duration=30.816301ms OLLAMA_LIBRARY_PATH=[/usr/lib/ollama] extra_envs=map[]
 source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=0
 source=runner.go:174 msg="supported GPU library combinations before filtering" supported=map[]
 source=runner.go:40 msg="GPU bootstrap discovery took" duration=31.149958ms
source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="31.2 GiB" available="14.1 GiB"
source=routes.go:1708 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"
<!-- gh-comment-id:3753989858 --> @emiltoacs commented on GitHub (Jan 15, 2026):

Oops, I tried different values for CUDA_VISIBLE_DEVICES and forgot to remove it. I have unset the variable. Here is the log with OLLAMA_DEBUG=2:

source=routes.go:1614 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/var/lib/ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
source=images.go:499 msg="total blobs: 26"
source=images.go:506 msg="total unused blobs removed: 0"
source=routes.go:1667 msg="Listening on 127.0.0.1:11434 (version 0.14.1)"
source=sched.go:121 msg="starting llm scheduler"
source=runner.go:67 msg="discovering available GPUs..."
source=runner.go:440 msg="starting runner for device discovery" libDirs=[/usr/lib/ollama] extraEnvs=map[]
source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 34747"
source=server.go:430 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/bin OLLAMA_MODELS=/var/lib/ollama CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_LAUNCH_BLOCKING=0 CUDA_MODULE_LOADING=LAZY CUDA_MODULE_DATA_LOADING=LAZY CUDA_CACHE_MAXSIZE=2147483648 CUDA_CACHE_PATH=/var/cache/cuda CUDA_LOG_FILE=/var/log/cuda.log OLLAMA_HOST=http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE=5m OLLAMA_LOAD_TIMEOUT=5m OLLAMA_CONTEXT_LENGTH=4096 OLLAMA_MAX_LOADED_MODELS=0 OLLAMA_GPU_OVERHEAD=0 OLLAMA_MAX_QUEUE=512 OLLAMA_NUM_PARALLEL=1 OLLAMA_NOHISTORY=false OLLAMA_NOPRUNE=false OLLAMA_FLASH_ATTENTION=true OLLAMA_SCHED_SPREAD=false OLLAMA_DEBUG=2 LD_LIBRARY_PATH=/usr/lib/ollama OLLAMA_LIBRARY_PATH=/usr/lib/ollama
source=runner.go:1405 msg="starting ollama engine"
source=runner.go:1440 msg="Server listening on 127.0.0.1:34747"
source=gguf.go:589 msg=general.architecture type=string
source=gguf.go:589 msg=tokenizer.ggml.model type=string
source=ggml.go:296 msg="key with type not found" key=general.alignment default=32
source=ggml.go:296 msg="key with type not found" key=general.alignment default=32
source=ggml.go:296 msg="key with type not found" key=general.file_type default=0
source=ggml.go:296 msg="key with type not found" key=general.name default=""
source=ggml.go:296 msg="key with type not found" key=general.description default=""
source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
source=ggml.go:296 msg="key with type not found" key=llama.block_count default=0
source=ggml.go:296 msg="key with type not found" key=llama.pooling_type default=0
source=ggml.go:296 msg="key with type not found" key=llama.expert_count default=0
source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
source=ggml.go:296 msg="key with type not found" key=tokenizer.ggml.pre default=""
source=ggml.go:296 msg="key with type not found" key=llama.block_count default=0
source=ggml.go:296 msg="key with type not found" key=llama.embedding_length default=0
source=ggml.go:296 msg="key with type not found" key=llama.attention.head_count default=0
source=ggml.go:296 msg="key with type not found" key=llama.attention.head_count_kv default=0
source=ggml.go:296 msg="key with type not found" key=llama.attention.key_length default=0
source=ggml.go:296 msg="key with type not found" key=llama.rope.dimension_count default=0
source=ggml.go:296 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
source=ggml.go:296 msg="key with type not found" key=llama.rope.freq_base default=100000
source=ggml.go:296 msg="key with type not found" key=llama.rope.scaling.factor default=1
source=runner.go:1380 msg="dummy model load took" duration=7.001259ms
source=runner.go:1385 msg="gathering device infos took" duration=711ns
source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH=[/usr/lib/ollama] devices=[]
source=runner.go:437 msg="bootstrap discovery took" duration=30.816301ms OLLAMA_LIBRARY_PATH=[/usr/lib/ollama] extra_envs=map[]
source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=0
source=runner.go:174 msg="supported GPU library combinations before filtering" supported=map[]
source=runner.go:40 msg="GPU bootstrap discovery took" duration=31.149958ms
source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="31.2 GiB" available="14.1 GiB"
source=routes.go:1708 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"
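The telling lines in a log like this are the discovery ones: `runner enumerated devices ... devices=[]`, the CPU-only `inference compute` entry, and the fallback into low VRAM mode. A small self-contained sketch of filtering them out (the sample lines below are abridged from the log above; on a live systemd-managed Arch install you would pipe `journalctl -u ollama --no-pager` instead of the sample file):

```shell
# Filter an ollama server log down to the GPU-discovery lines.
# The heredoc holds abridged sample lines from this report; replace it
# with real log output (e.g. journalctl -u ollama) on an affected machine.
log=$(mktemp)
cat > "$log" <<'EOF'
source=runner.go:467 msg="runner enumerated devices" devices=[]
source=types.go:60 msg="inference compute" id=cpu library=cpu
source=routes.go:1708 msg="entering low vram mode" "total vram"="0 B"
EOF
grep -E 'enumerated devices|inference compute|low vram mode' "$log"
# prints all three sample lines: an empty device list plus the CPU fallback
```

If `devices=[]` shows up here even though `nvidia-smi -L` lists the GPU, the problem is in Ollama's backend discovery, not in the driver.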
<!-- gh-comment-id:3801362330 --> @emiltoacs commented on GitHub (Jan 26, 2026): The bug seems to affect only the AUR package `ollama-cuda12-bin`. When you install Ollama manually from the archive on the Ollama website (<https://ollama.com/download/ollama-linux-amd64.tar.zst>), it works, at least as of version 0.14.1.
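This fits the log above: the runner loads ggml backends from `libDirs=[/usr/lib/ollama]`, so if the packaged library directory is missing the CUDA backend library, discovery can only return `devices=[]`. A hedged, self-contained sketch of that check (the directory and file names here are simulated so it runs anywhere; on the affected machine you would instead point `libdir` at `/usr/lib/ollama` and look for a CUDA-named ggml library):

```shell
# Check whether a given ollama libDir appears to ship a CUDA ggml backend.
# NOTE: this directory is simulated for illustration; on a real Arch system,
# set libdir=/usr/lib/ollama (the libDir from the runner log) instead.
libdir=$(mktemp -d)
touch "$libdir/libggml-base.so" "$libdir/libggml-cpu-haswell.so"  # CPU-only layout
if ls "$libdir" | grep -qi cuda; then
  echo "CUDA ggml backend present in $libdir"
else
  echo "no CUDA ggml backend in $libdir"  # consistent with the devices=[] symptom
fi
```

Comparing the contents of the AUR package's `/usr/lib/ollama` against the `lib/ollama` directory unpacked from the upstream tarball would show whether the packaging dropped the CUDA backend.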
Reference: github-starred/ollama#55512