[GH-ISSUE #15400] Ollama 20.3 fails to initialize GPU #71910

Open
opened 2026-05-05 02:55:47 -05:00 by GiteaMirror · 13 comments
Owner

Originally created by @0xCA on GitHub (Apr 7, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15400

What is the issue?

Only CPU inference is available when starting Ollama; no GPU is detected.
nvidia-smi works flawlessly from within the container, and other containers can use the NVIDIA GPU.

Kernel: 6.18.15+deb13-amd64
Driver Version: 550.163.01
NVIDIA Container Runtime Hook version 1.19.0

nvidia-smi from inside the ollama container:

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.163.01             Driver Version: 550.163.01     CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4070        Off |   00000000:01:00.0  On |                  N/A |
|  0%   32C    P8              7W /  200W |     744MiB /  12282MiB |     14%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
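A diagnostic sketch not from the original report: nvidia-smi only needs the NVML library (libnvidia-ml), while ggml's CUDA backend calls cuInit() from libcuda.so, so a working nvidia-smi does not by itself prove the driver library is visible to the dynamic loader inside the container. One way to check:

```shell
# Inside the ollama container: confirm the CUDA driver library is in the
# loader cache. nvidia-smi can succeed via libnvidia-ml even when libcuda
# is not resolvable, which would make cuInit() fail.
ldconfig -p | grep -i libcuda || echo "libcuda.so not in loader cache"
```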

compose.yml:

services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    volumes:
      - ./ollama:/root/.ollama
    environment:
      OLLAMA_DEBUG: 2
    restart: always
    devices:
      - "nvidia.com/gpu=all"
    mem_limit: 50g
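Since the compose file requests the GPU via a CDI device reference (`nvidia.com/gpu=all`), a hedged host-side check (assuming the NVIDIA Container Toolkit's `nvidia-ctk` is installed) is to confirm a CDI spec exists and actually lists that device; a missing or stale spec can leave the driver libraries mounted (so nvidia-smi works) while device setup still fails:

```shell
# On the host: list the CDI devices known from /etc/cdi and /var/run/cdi.
# The compose `devices:` entry only works if nvidia.com/gpu=all appears here.
command -v nvidia-ctk >/dev/null \
  && nvidia-ctk cdi list \
  || echo "nvidia-ctk not installed"
```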

Relevant log output

time=2026-04-07T20:03:58.261Z level=INFO source=routes.go:1744 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:DEBUG-4 OLLAMA_DEBUG_LOG_REQUESTS:false OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-04-07T20:03:58.261Z level=INFO source=routes.go:1746 msg="Ollama cloud disabled: false"
time=2026-04-07T20:03:58.261Z level=INFO source=images.go:499 msg="total blobs: 4"
time=2026-04-07T20:03:58.261Z level=INFO source=images.go:506 msg="total unused blobs removed: 0"
time=2026-04-07T20:03:58.261Z level=INFO source=routes.go:1802 msg="Listening on [::]:11434 (version 0.20.3)"
time=2026-04-07T20:03:58.261Z level=DEBUG source=sched.go:145 msg="starting llm scheduler"
time=2026-04-07T20:03:58.262Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-04-07T20:03:58.262Z level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extraEnvs=map[]
time=2026-04-07T20:03:58.262Z level=INFO source=server.go:432 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 42989"
time=2026-04-07T20:03:58.262Z level=DEBUG source=server.go:433 msg=subprocess OLLAMA_HOST=0.0.0.0:11434 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v13:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_DEBUG=2 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v13
time=2026-04-07T20:03:58.270Z level=INFO source=runner.go:1417 msg="starting ollama engine"
time=2026-04-07T20:03:58.270Z level=INFO source=runner.go:1452 msg="Server listening on 127.0.0.1:42989"
time=2026-04-07T20:03:58.278Z level=DEBUG source=gguf.go:604 msg=general.architecture type=string
time=2026-04-07T20:03:58.278Z level=DEBUG source=gguf.go:604 msg=tokenizer.ggml.model type=string
time=2026-04-07T20:03:58.278Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
time=2026-04-07T20:03:58.278Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
time=2026-04-07T20:03:58.278Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.file_type default=0
time=2026-04-07T20:03:58.278Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.name default=""
time=2026-04-07T20:03:58.278Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.description default=""
time=2026-04-07T20:03:58.278Z level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2026-04-07T20:03:58.278Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
time=2026-04-07T20:03:58.282Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama/cuda_v13
ggml_cuda_init: failed to initialize CUDA: CUDA driver version is insufficient for CUDA runtime version
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v13/libggml-cuda.so
time=2026-04-07T20:03:58.303Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.block_count default=0
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.pooling_type default=0
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.expert_count default=0
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.block_count default=0
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.embedding_length default=0
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.attention.head_count default=0
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.attention.key_length default=0
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2026-04-07T20:03:58.304Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2026-04-07T20:03:58.304Z level=DEBUG source=runner.go:1392 msg="dummy model load took" duration=29.80352ms
time=2026-04-07T20:03:58.304Z level=DEBUG source=runner.go:1397 msg="gathering device infos took" duration=520ns
time=2026-04-07T20:03:58.304Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" devices=[]
time=2026-04-07T20:03:58.304Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=42.168872ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[]
time=2026-04-07T20:03:58.304Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-04-07T20:03:58.304Z level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extraEnvs=map[]
time=2026-04-07T20:03:58.304Z level=INFO source=server.go:432 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 43653"
time=2026-04-07T20:03:58.304Z level=DEBUG source=server.go:433 msg=subprocess OLLAMA_HOST=0.0.0.0:11434 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_DEBUG=2 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12
time=2026-04-07T20:03:58.314Z level=INFO source=runner.go:1417 msg="starting ollama engine"
time=2026-04-07T20:03:58.314Z level=INFO source=runner.go:1452 msg="Server listening on 127.0.0.1:43653"
time=2026-04-07T20:03:58.316Z level=DEBUG source=gguf.go:604 msg=general.architecture type=string
time=2026-04-07T20:03:58.316Z level=DEBUG source=gguf.go:604 msg=tokenizer.ggml.model type=string
time=2026-04-07T20:03:58.316Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
time=2026-04-07T20:03:58.316Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
time=2026-04-07T20:03:58.316Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.file_type default=0
time=2026-04-07T20:03:58.316Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.name default=""
time=2026-04-07T20:03:58.316Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.description default=""
time=2026-04-07T20:03:58.316Z level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2026-04-07T20:03:58.316Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
time=2026-04-07T20:03:58.320Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama/cuda_v12
ggml_cuda_init: failed to initialize CUDA: unknown error
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
time=2026-04-07T20:03:58.435Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.block_count default=0
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.pooling_type default=0
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.expert_count default=0
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.block_count default=0
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.embedding_length default=0
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.attention.head_count default=0
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.attention.key_length default=0
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2026-04-07T20:03:58.435Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2026-04-07T20:03:58.435Z level=DEBUG source=runner.go:1392 msg="dummy model load took" duration=119.947481ms
time=2026-04-07T20:03:58.435Z level=DEBUG source=runner.go:1397 msg="gathering device infos took" duration=430ns
time=2026-04-07T20:03:58.436Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" devices=[]
time=2026-04-07T20:03:58.436Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=131.478694ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs=map[]
time=2026-04-07T20:03:58.436Z level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=0
time=2026-04-07T20:03:58.436Z level=TRACE source=runner.go:174 msg="supported GPU library combinations before filtering" supported=map[]
time=2026-04-07T20:03:58.436Z level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=174.244896ms
time=2026-04-07T20:03:58.436Z level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="50.0 GiB" available="49.9 GiB"
time=2026-04-07T20:03:58.436Z level=INFO source=routes.go:1852 msg="vram-based default context" total_vram="0 B" default_num_ctx=4096
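The log shows two distinct failures: the cuda_v13 backend fails with "CUDA driver version is insufficient for CUDA runtime version", which is expected given the nvidia-smi header above reports driver 550.163.01 supporting at most CUDA 12.4, while the cuda_v12 backend fails with "unknown error", which is the failure worth debugging. A quick way to confirm the driver ceiling (a diagnostic sketch, not from the original report):

```shell
# Report the installed driver version; any CUDA runtime newer than the
# CUDA version this driver supports will fail with "insufficient driver".
command -v nvidia-smi >/dev/null \
  && nvidia-smi --query-gpu=driver_version --format=csv,noheader \
  || echo "nvidia-smi not available"
```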

OS

Debian 13 / Podman

GPU

Nvidia

CPU

AMD

Ollama version

0.20.3

GiteaMirror added the bug label 2026-05-05 02:55:47 -05:00

@rick-github commented on GitHub (Apr 7, 2026):

ggml_cuda_init: failed to initialize CUDA: unknown error

Does the following allow GPU discovery to succeed:

sudo rmmod nvidia_uvm && sudo modprobe nvidia_uvm
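
A minimal sketch (assuming a Linux host) of how to check whether nvidia_uvm is loaded, and how many references currently hold it, before attempting the reload; the reload itself requires root and is left commented out:

```shell
# Sketch: check whether nvidia_uvm is loaded and its use count (third field
# of /proc/modules) before trying to reload it.
if grep -q '^nvidia_uvm ' /proc/modules; then
    awk '$1 == "nvidia_uvm" { print $1 " loaded, use count " $3 }' /proc/modules
    # sudo rmmod nvidia_uvm && sudo modprobe nvidia_uvm   # requires root
else
    echo "nvidia_uvm not loaded"
fi
```

A nonzero use count usually means some process still holds the module open, and rmmod will refuse to remove it until that process exits.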

@imtiendat0311 commented on GitHub (Apr 7, 2026):

I have a similar issue, but after executing the command provided by @rick-github

sudo rmmod nvidia_uvm && sudo modprobe nvidia_uvm

and rebooting the system, it's working again.

Here is my log from before the reboot:

time=2026-04-07T21:40:56.826Z level=INFO source=routes.go:1744 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:DEBUG-4 OLLAMA_DEBUG_LOG_REQUESTS:false OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY:cuda_v13 OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-04-07T21:40:56.826Z level=INFO source=routes.go:1746 msg="Ollama cloud disabled: false"
time=2026-04-07T21:40:56.828Z level=INFO source=images.go:499 msg="total blobs: 27"
time=2026-04-07T21:40:56.828Z level=INFO source=images.go:506 msg="total unused blobs removed: 0"
time=2026-04-07T21:40:56.828Z level=INFO source=routes.go:1802 msg="Listening on [::]:11434 (version 0.20.2)"
time=2026-04-07T21:40:56.828Z level=DEBUG source=sched.go:145 msg="starting llm scheduler"
time=2026-04-07T21:40:56.828Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-04-07T21:40:56.828Z level=DEBUG source=runner.go:98 msg="skipping available library at user's request" requested=cuda_v13 libDir=/usr/lib/ollama/vulkan
time=2026-04-07T21:40:56.828Z level=DEBUG source=runner.go:98 msg="skipping available library at user's request" requested=cuda_v13 libDir=/usr/lib/ollama/cuda_v12
time=2026-04-07T21:40:56.828Z level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extraEnvs=map[]
time=2026-04-07T21:40:56.829Z level=INFO source=server.go:432 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 45727"
time=2026-04-07T21:40:56.829Z level=DEBUG source=server.go:433 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_LLM_LIBRARY=cuda_v13 OLLAMA_DEBUG=2 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v13:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v13
time=2026-04-07T21:40:56.837Z level=INFO source=runner.go:1417 msg="starting ollama engine"
time=2026-04-07T21:40:56.838Z level=INFO source=runner.go:1452 msg="Server listening on 127.0.0.1:45727"
time=2026-04-07T21:40:56.840Z level=DEBUG source=gguf.go:604 msg=general.architecture type=string
time=2026-04-07T21:40:56.840Z level=DEBUG source=gguf.go:604 msg=tokenizer.ggml.model type=string
time=2026-04-07T21:40:56.840Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
time=2026-04-07T21:40:56.840Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
time=2026-04-07T21:40:56.840Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.file_type default=0
time=2026-04-07T21:40:56.840Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.name default=""
time=2026-04-07T21:40:56.840Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.description default=""
time=2026-04-07T21:40:56.840Z level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2026-04-07T21:40:56.840Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
time=2026-04-07T21:40:56.845Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama/cuda_v13
ggml_cuda_init: failed to initialize CUDA: unknown error
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v13/libggml-cuda.so
time=2026-04-07T21:40:56.865Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.block_count default=0
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.pooling_type default=0
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.expert_count default=0
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.block_count default=0
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.embedding_length default=0
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.attention.head_count default=0
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.attention.key_length default=0
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2026-04-07T21:40:56.865Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2026-04-07T21:40:56.865Z level=DEBUG source=runner.go:1392 msg="dummy model load took" duration=25.117648ms
time=2026-04-07T21:40:56.865Z level=DEBUG source=runner.go:1397 msg="gathering device infos took" duration=401ns
time=2026-04-07T21:40:56.865Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" devices=[]
time=2026-04-07T21:40:56.865Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=36.835259ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[]
time=2026-04-07T21:40:56.865Z level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=0
time=2026-04-07T21:40:56.865Z level=TRACE source=runner.go:174 msg="supported GPU library combinations before filtering" supported=map[]
time=2026-04-07T21:40:56.865Z level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=37.125809ms
time=2026-04-07T21:40:56.865Z level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="30.5 GiB" available="30.3 GiB"
time=2026-04-07T21:40:56.865Z level=INFO source=routes.go:1852 msg="vram-based default context" total_vram="0 B" default_num_ctx=4096
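
The failure signature in the log above is easy to check for mechanically; a sketch, with the two relevant lines inlined as sample data:

```shell
# Sketch: grep a server log for the line indicating CUDA init failed even
# though the CUDA backend library itself loaded (sample lines inlined here).
cat <<'EOF' > /tmp/ollama-sample.log
ggml_cuda_init: failed to initialize CUDA: unknown error
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v13/libggml-cuda.so
EOF
grep -c 'failed to initialize CUDA' /tmp/ollama-sample.log   # prints 1
```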

compose.yml:

services:
    ## Ollama for hosting and serving LLMs
    ollama:
        image: ollama/ollama:0.20.2
        container_name: ollama
        runtime: nvidia
        environment:
            - NVIDIA_VISIBLE_DEVICES=all
            - NVIDIA_DRIVER_CAPABILITIES=compute,utility
            - OLLAMA_LLM_LIBRARY=cuda_v13
            - OLLAMA_DEBUG=2
        ports:
            - "11434:11434"
        volumes:
            - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/ollama:/root/.ollama
        healthcheck:
            test: ["CMD", "ollama", "list"]
            interval: 30s
            timeout: 20s
            retries: 3
        deploy:
            resources:
                reservations:
                    devices:
                        - driver: nvidia
                          capabilities: [gpu]

Nvidia Driver Version 580.142
NVIDIA Container Runtime Hook Version 1.19.0

OS

ArchLinux

GPU

Nvidia

CPU

AMD

Ollama version

20.2 & 20.3


@rick-github commented on GitHub (Apr 7, 2026):

If you rebooted, then the rmmod was likely unnecessary. The kernel module sometimes gets wedged and reloading it helps, but a reboot would do the same.


@imtiendat0311 commented on GitHub (Apr 7, 2026):

So after

sudo rmmod nvidia_uvm && sudo modprobe nvidia_uvm

and

docker compose down && docker compose up

it was still unable to discover the GPU. Only after rebooting the system and testing again was the GPU suddenly discoverable.

Here is the log after rebooting the system:

time=2026-04-07T22:17:11.610Z level=DEBUG source=sched.go:311 msg="timer expired, expiring to unload" runner.name=registry.ollama.ai/library/qwen3.5:latest runner.inference="[{ID:GPU-35f3a9c2-d9d2-ae32-a0d6-8880c3eecabf Library:CUDA}]" runner.size="8.1 GiB" runner.vram="8.1 GiB" runner.parallel=1 runner.pid=1796 runner.model=/root/.ollama/models/blobs/sha256-dec52a44569a2a25341c4e4d3fee25846eed4f6f0b936278e3a3c900bb99d37c runner.num_ctx=4096
time=2026-04-07T22:17:11.610Z level=DEBUG source=sched.go:330 msg="runner expired event received" runner.name=registry.ollama.ai/library/qwen3.5:latest runner.inference="[{ID:GPU-35f3a9c2-d9d2-ae32-a0d6-8880c3eecabf Library:CUDA}]" runner.size="8.1 GiB" runner.vram="8.1 GiB" runner.parallel=1 runner.pid=1796 runner.model=/root/.ollama/models/blobs/sha256-dec52a44569a2a25341c4e4d3fee25846eed4f6f0b936278e3a3c900bb99d37c runner.num_ctx=4096
time=2026-04-07T22:17:11.610Z level=DEBUG source=sched.go:345 msg="got lock to unload expired event" runner.name=registry.ollama.ai/library/qwen3.5:latest runner.inference="[{ID:GPU-35f3a9c2-d9d2-ae32-a0d6-8880c3eecabf Library:CUDA}]" runner.size="8.1 GiB" runner.vram="8.1 GiB" runner.parallel=1 runner.pid=1796 runner.model=/root/.ollama/models/blobs/sha256-dec52a44569a2a25341c4e4d3fee25846eed4f6f0b936278e3a3c900bb99d37c runner.num_ctx=4096
time=2026-04-07T22:17:11.610Z level=DEBUG source=sched.go:368 msg="starting background wait for VRAM recovery" runner.name=registry.ollama.ai/library/qwen3.5:latest runner.inference="[{ID:GPU-35f3a9c2-d9d2-ae32-a0d6-8880c3eecabf Library:CUDA}]" runner.size="8.1 GiB" runner.vram="8.1 GiB" runner.parallel=1 runner.pid=1796 runner.model=/root/.ollama/models/blobs/sha256-dec52a44569a2a25341c4e4d3fee25846eed4f6f0b936278e3a3c900bb99d37c runner.num_ctx=4096
time=2026-04-07T22:17:11.610Z level=DEBUG source=runner.go:264 msg="refreshing free memory"
ggml_backend_cuda_device_get_memory device GPU-35f3a9c2-d9d2-ae32-a0d6-8880c3eecabf utilizing NVML memory reporting free: 5906104320 total: 17171480576
time=2026-04-07T22:17:11.633Z level=DEBUG source=runner.go:1397 msg="gathering device infos took" duration=13.071359ms
time=2026-04-07T22:17:11.633Z level=DEBUG source=runner.go:312 msg="existing runner discovery took" duration=23.413144ms
time=2026-04-07T22:17:11.633Z level=DEBUG source=runner.go:40 msg="overall device VRAM discovery took" duration=23.43757ms
time=2026-04-07T22:17:11.633Z level=DEBUG source=server.go:1832 msg="stopping llama server" pid=1796
time=2026-04-07T22:17:11.633Z level=DEBUG source=server.go:1838 msg="waiting for llama server to exit" pid=1796
time=2026-04-07T22:17:11.791Z level=DEBUG source=server.go:1842 msg="llama server stopped" pid=1796
time=2026-04-07T22:17:11.791Z level=DEBUG source=sched.go:377 msg="runner terminated and removed from list, blocking for VRAM recovery" runner.size="8.1 GiB" runner.vram="8.1 GiB" runner.parallel=1 runner.pid=1796 runner.model=/root/.ollama/models/blobs/sha256-dec52a44569a2a25341c4e4d3fee25846eed4f6f0b936278e3a3c900bb99d37c
time=2026-04-07T22:17:11.883Z level=DEBUG source=runner.go:264 msg="refreshing free memory"
time=2026-04-07T22:17:11.883Z level=DEBUG source=runner.go:328 msg="unable to refresh all GPUs with existing runners, performing bootstrap discovery"
time=2026-04-07T22:17:11.883Z level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extraEnvs=map[]
time=2026-04-07T22:17:11.884Z level=INFO source=server.go:432 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 43367"
time=2026-04-07T22:17:11.884Z level=DEBUG source=server.go:433 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_LLM_LIBRARY=cuda_v13 OLLAMA_DEBUG=2 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v13:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v13
time=2026-04-07T22:17:11.891Z level=INFO source=runner.go:1417 msg="starting ollama engine"
time=2026-04-07T22:17:11.891Z level=INFO source=runner.go:1452 msg="Server listening on 127.0.0.1:43367"
time=2026-04-07T22:17:11.895Z level=DEBUG source=gguf.go:604 msg=general.architecture type=string
time=2026-04-07T22:17:11.895Z level=DEBUG source=gguf.go:604 msg=tokenizer.ggml.model type=string
time=2026-04-07T22:17:11.895Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
time=2026-04-07T22:17:11.895Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.alignment default=32
time=2026-04-07T22:17:11.895Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.file_type default=0
time=2026-04-07T22:17:11.895Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.name default=""
time=2026-04-07T22:17:11.895Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=general.description default=""
time=2026-04-07T22:17:11.895Z level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2026-04-07T22:17:11.895Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
time=2026-04-07T22:17:11.898Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama/cuda_v13
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4080 SUPER, compute capability 8.9, VMM: yes, ID: GPU-35f3a9c2-d9d2-ae32-a0d6-8880c3eecabf
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v13/libggml-cuda.so
time=2026-04-07T22:17:12.003Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.block_count default=0
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.pooling_type default=0
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.expert_count default=0
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.block_count default=0
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.embedding_length default=0
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.attention.head_count default=0
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.attention.key_length default=0
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2026-04-07T22:17:12.003Z level=DEBUG source=ggml.go:325 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2026-04-07T22:17:12.003Z level=DEBUG source=runner.go:1392 msg="dummy model load took" duration=108.458786ms
ggml_backend_cuda_device_get_memory device GPU-35f3a9c2-d9d2-ae32-a0d6-8880c3eecabf utilizing NVML memory reporting free: 14258339840 total: 17171480576
time=2026-04-07T22:17:12.017Z level=DEBUG source=runner.go:1397 msg="gathering device infos took" duration=14.19598ms
time=2026-04-07T22:17:12.017Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" devices="[{DeviceID:{ID:GPU-35f3a9c2-d9d2-ae32-a0d6-8880c3eecabf Library:CUDA} Name:CUDA0 Description:NVIDIA GeForce RTX 4080 SUPER FilterID: Integrated:false PCIID:0000:01:00.0 TotalMemory:17171480576 FreeMemory:14258339840 ComputeMajor:8 ComputeMinor:9 DriverMajor:13 DriverMinor:0 LibraryPath:[/usr/lib/ollama /usr/lib/ollama/cuda_v13]}]"
time=2026-04-07T22:17:12.017Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=134.039645ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[]
time=2026-04-07T22:17:12.017Z level=DEBUG source=runner.go:40 msg="overall device VRAM discovery took" duration=134.075894ms
time=2026-04-07T22:17:12.017Z level=TRACE source=sched.go:759 msg="gpu VRAM convergence" percent=96
time=2026-04-07T22:17:12.017Z level=DEBUG source=sched.go:765 msg="gpu VRAM free memory converged after 0.41 seconds" free_before="5.5 GiB" free_now="13.3 GiB" runner.size="8.1 GiB" runner.vram="8.1 GiB" runner.parallel=1 runner.pid=1796 runner.model=/root/.ollama/models/blobs/sha256-dec52a44569a2a25341c4e4d3fee25846eed4f6f0b936278e3a3c900bb99d37c
time=2026-04-07T22:17:12.018Z level=DEBUG source=sched.go:380 msg="sending an unloaded event" runner.size="8.1 GiB" runner.vram="8.1 GiB" runner.parallel=1 runner.pid=1796 runner.model=/root/.ollama/models/blobs/sha256-dec52a44569a2a25341c4e4d3fee25846eed4f6f0b936278e3a3c900bb99d37c
time=2026-04-07T22:17:12.018Z level=DEBUG source=sched.go:277 msg="ignoring unload event with no pending requests"
@rick-github commented on GitHub (Apr 7, 2026):

What's the output of

```
cat /etc/docker/daemon.json
```
@rick-github commented on GitHub (Apr 7, 2026):

Never mind, you aren't using a container like OP.

@rick-github commented on GitHub (Apr 7, 2026):

No, yes you are. So back to the output of

```
cat /etc/docker/daemon.json
```
@imtiendat0311 commented on GitHub (Apr 7, 2026):

Here is the content of `daemon.json`:

```json
{
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
}
```
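A quick way to confirm that the Docker daemon actually registered the `nvidia` runtime from `daemon.json` (assuming a reasonably recent Docker CLI that supports `--format` templates):

```
docker info --format '{{json .Runtimes}}'
```

The output should include a `"nvidia"` key; if it does not, restart the daemon (`sudo systemctl restart docker`) and check again.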
@rick-github commented on GitHub (Apr 7, 2026):

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.mdx#linux-docker
@imtiendat0311 commented on GitHub (Apr 7, 2026):

I just updated my `daemon.json`:

```json
{
    "exec-opts": ["native.cgroupdriver=cgroupfs"],
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
}
```

So far it's working fine. I will let you know if this actually fixes the problem @rick-github.
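For background: the `native.cgroupdriver=cgroupfs` exec-opt is a known workaround for an interaction between the systemd cgroup driver and the NVIDIA container runtime, where a systemd daemon-reload can revoke a running container's access to the GPU device nodes. To check which cgroup driver the daemon is currently using (assuming a Docker CLI with `--format` support):

```
docker info --format '{{.CgroupDriver}}'
```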
@0xCA commented on GitHub (Apr 8, 2026):

> ```
> ggml_cuda_init: failed to initialize CUDA: unknown error
> ```
>
> Does the following allow GPU discovery to succeed:
>
> ```
> sudo rmmod nvidia_uvm && sudo modprobe nvidia_uvm
> ```

There is no module called `nvidia_uvm` on my machine. I ran `sudo modprobe nvidia-current-open-uvm`, which loaded the module successfully, confirmed by `lsmod`. The module was not loaded before, and I never loaded it manually with previous versions of ollama around half a year ago.

I did reboot before this; it changed nothing. Loading nvidia_uvm (`nvidia-current-open-uvm`) also changed nothing.

```
crw-rw-rw- 1 root root 509, 0 Apr  8 05:50 /dev/nvidia-uvm
crw-rw-rw- 1 root root 509, 1 Apr  8 05:50 /dev/nvidia-uvm-tools
```

```
[14959.590957] nvidia-uvm: Loaded the UVM driver, major device number 509.
```

UPD: I re-created the CDI spec after loading UVM:

```
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
```

Now the GPU is properly detected and inference runs at full speed.

Please add to the GPU docs that the UVM module must be configured to auto-load, and how to do so.
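To make the fix above survive reboots on systemd-based distros, the UVM module can be auto-loaded via a `modules-load.d` fragment, with the CDI spec regenerated afterwards so it picks up the `/dev/nvidia-uvm*` device nodes. The module name below matches the Debian `nvidia-current-open` packaging mentioned in the comment above — adjust it to whatever `modprobe` name your driver package uses:

```
# /etc/modules-load.d/nvidia-uvm.conf
nvidia-current-open-uvm
```

```
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
```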
@PureBlissAK commented on GitHub (Apr 18, 2026):

🤖 Automated Triage & Analysis Report

Issue: #15400
Analyzed: 2026-04-18T18:22:21.221478

Analysis

  • Type: unknown
  • Severity: medium
  • Components: unknown

Implementation Plan

  • Effort: medium
  • Steps:

This issue has been triaged and marked for implementation.

@GorudoYami commented on GitHub (Apr 30, 2026):

I encountered the same issue, `ggml_cuda_init: failed to initialize CUDA: unknown error`, and running `sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml` as suggested by @0xCA fixed it for me too.

Version: 22.0
6.12.77-1-MANJARO
Driver Version: 590.48.01
CUDA Version: 13.1
nvidia-container-toolkit 1.19.0-1
Reference: github-starred/ollama#71910