[GH-ISSUE #13800] Failing to initialize A100 MIG in >=0.13.2 #55552

Open
opened 2026-04-29 09:23:55 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @jessiewbailey on GitHub (Jan 20, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13800

What is the issue?

Running in k8s with Ollama's default image as a non-root user: older images such as 0.12.11 work, but on newer releases the GPU doesn't seem to make it past the new device-filtering process and Ollama falls back to the CPU. I have tried with and without overriding OLLAMA_LLM_LIBRARY to either cuda_v12 or cuda_v13. The logs below were captured with OLLAMA_DEBUG=2.
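For reference, the override described above can be applied to the Deployment roughly like this (a sketch only; the deployment name `ollama` is a placeholder):

```shell
# Placeholder deployment name; the value mirrors the report above.
# Also tested with cuda_v12, and with the variable left unset.
kubectl set env deployment/ollama OLLAMA_LLM_LIBRARY=cuda_v13
```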

![Image](https://github.com/user-attachments/assets/d851ab34-392f-4d7b-9edb-efd5aeebdfb4)

Relevant log output

[ollama-non-root-v14-cb97b7f76-48d5d_ollama.log](https://github.com/user-attachments/files/24749022/ollama-non-root-v14-cb97b7f76-48d5d_ollama.log)

```shell
time=2026-01-20T19:32:01.706Z level=INFO source=routes.go:1626 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY:http://proxy.company.com:8080 HTTP_PROXY:http://proxy.company.com:8080 NO_PROXY:localhost, 127.0.0.1, 0.0.0.0, svc,local,10.42.0.0/16, openwebui-service, openwebui-service.llm.svc.cluster.local OLLAMA_CONTEXT_LENGTH:8192 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY:cuda_v13 OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/ubuntu/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy:http://proxy.company.com:8080 https_proxy:http://proxy.company.com:8080 no_proxy:]"
time=2026-01-20T19:32:01.721Z level=INFO source=images.go:501 msg="total blobs: 6"
time=2026-01-20T19:32:01.722Z level=INFO source=images.go:508 msg="total unused blobs removed: 0"
time=2026-01-20T19:32:01.724Z level=INFO source=routes.go:1679 msg="Listening on [::]:11434 (version 0.14.3-rc2)"
time=2026-01-20T19:32:01.724Z level=DEBUG source=sched.go:121 msg="starting llm scheduler"
time=2026-01-20T19:32:01.724Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-01-20T19:32:01.724Z level=DEBUG source=runner.go:98 msg="skipping available library at user's request" requested=cuda_v13 libDir=/usr/lib/ollama/cuda_v12
time=2026-01-20T19:32:01.724Z level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extraEnvs=map[]
time=2026-01-20T19:32:01.725Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 33281"
time=2026-01-20T19:32:01.725Z level=DEBUG source=server.go:430 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v13:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LOG_LEVEL=TRACE OLLAMA_ORIGINS=* OLLAMA_DEBUG=2 OLLAMA_KEEP_ALIVE=-1 OLLAMA_LLM_LIBRARY=cuda_v13 OLLAMA_MODELS=/home/ubuntu/.ollama/models OLLAMA_CONTEXT_LENGTH=8192 OLLAMA_SERVICE_HOST=10.43.70.198 OLLAMA_SERVICE_PORT_8080_TCP_ADDR=10.43.19.255 OLLAMA_PORT_11434_TCP=tcp://10.43.70.198:11434 OLLAMA_SERVICE_PORT_8080_TCP_PORT=8080 OLLAMA_SERVICE_PORT_11434_TCP_PROTO=tcp OLLAMA_SERVICE_PORT_11434_TCP_ADDR=10.43.19.255 OLLAMA_PORT_11434_TCP_ADDR=10.43.70.198 OLLAMA_SERVICE_PORT_HTTP=11434 OLLAMA_SERVICE_SERVICE_HOST=10.43.19.255 OLLAMA_SERVICE_PORT=tcp://10.43.19.255:8080 OLLAMA_PORT_11434_TCP_PROTO=tcp OLLAMA_PORT=tcp://10.43.70.198:11434 OLLAMA_SERVICE_SERVICE_PORT_API=8080 OLLAMA_SERVICE_SERVICE_PORT_API2=11434 OLLAMA_SERVICE_SERVICE_PORT=8080 OLLAMA_SERVICE_PORT_11434_TCP_PORT=11434 OLLAMA_PORT_11434_TCP_PORT=11434 OLLAMA_SERVICE_PORT_11434_TCP=tcp://10.43.19.255:11434 OLLAMA_SERVICE_PORT_8080_TCP_PROTO=tcp OLLAMA_SERVICE_PORT_8080_TCP=tcp://10.43.19.255:8080 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v13
time=2026-01-20T19:32:01.737Z level=INFO source=runner.go:1405 msg="starting ollama engine"
time=2026-01-20T19:32:01.738Z level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:33281"
time=2026-01-20T19:32:01.746Z level=DEBUG source=gguf.go:589 msg=general.architecture type=string
time=2026-01-20T19:32:01.746Z level=DEBUG source=gguf.go:589 msg=tokenizer.ggml.model type=string
time=2026-01-20T19:32:01.746Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=general.alignment default=32
time=2026-01-20T19:32:01.746Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=general.alignment default=32
time=2026-01-20T19:32:01.747Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=general.file_type default=0
time=2026-01-20T19:32:01.747Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=general.name default=""
time=2026-01-20T19:32:01.747Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=general.description default=""
time=2026-01-20T19:32:01.747Z level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2026-01-20T19:32:01.747Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-skylakex.so
time=2026-01-20T19:32:01.753Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama/cuda_v13
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA A100 80GB PCIe MIG 7g.80gb, compute capability 8.0, VMM: yes, ID: GPU-7327fca2-18fa-b021-ba3d-8499a04c9c81
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v13/libggml-cuda.so
time=2026-01-20T19:32:01.860Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.block_count default=0
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.pooling_type default=0
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.expert_count default=0
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.block_count default=0
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.embedding_length default=0
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.attention.head_count default=0
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.attention.key_length default=0
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2026-01-20T19:32:01.860Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2026-01-20T19:32:01.860Z level=DEBUG source=runner.go:1380 msg="dummy model load took" duration=113.825277ms
time=2026-01-20T19:32:02.052Z level=DEBUG source=runner.go:1385 msg="gathering device infos took" duration=191.5977ms
time=2026-01-20T19:32:02.052Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" devices="[{DeviceID:{ID:GPU-7327fca2-18fa-b021-ba3d-8499a04c9c81 Library:CUDA} Name:CUDA0 Description:NVIDIA A100 80GB PCIe MIG 7g.80gb FilterID: Integrated:false PCIID:0000:af:00.0 TotalMemory:85094825984 FreeMemory:84690075648 ComputeMajor:8 ComputeMinor:0 DriverMajor:13 DriverMinor:1 LibraryPath:[/usr/lib/ollama /usr/lib/ollama/cuda_v13]}]"
time=2026-01-20T19:32:02.052Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=328.006584ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[]
time=2026-01-20T19:32:02.052Z level=DEBUG source=runner.go:98 msg="skipping available library at user's request" requested=cuda_v13 libDir=/usr/lib/ollama/vulkan
time=2026-01-20T19:32:02.052Z level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=1
time=2026-01-20T19:32:02.052Z level=DEBUG source=runner.go:146 msg="verifying if device is supported" library=/usr/lib/ollama/cuda_v13 description="NVIDIA A100 80GB PCIe MIG 7g.80gb" compute=8.0 id=GPU-7327fca2-18fa-b021-ba3d-8499a04c9c81 pci_id=0000:af:00.0
time=2026-01-20T19:32:02.052Z level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extraEnvs="map[CUDA_VISIBLE_DEVICES:GPU-7327fca2-18fa-b021-ba3d-8499a04c9c81 GGML_CUDA_INIT:1]"
time=2026-01-20T19:32:02.053Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 37869"
time=2026-01-20T19:32:02.053Z level=DEBUG source=server.go:430 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v13:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LOG_LEVEL=TRACE OLLAMA_ORIGINS=* OLLAMA_DEBUG=2 OLLAMA_KEEP_ALIVE=-1 OLLAMA_LLM_LIBRARY=cuda_v13 OLLAMA_MODELS=/home/ubuntu/.ollama/models OLLAMA_CONTEXT_LENGTH=8192 OLLAMA_SERVICE_HOST=10.43.70.198 OLLAMA_SERVICE_PORT_8080_TCP_ADDR=10.43.19.255 OLLAMA_PORT_11434_TCP=tcp://10.43.70.198:11434 OLLAMA_SERVICE_PORT_8080_TCP_PORT=8080 OLLAMA_SERVICE_PORT_11434_TCP_PROTO=tcp OLLAMA_SERVICE_PORT_11434_TCP_ADDR=10.43.19.255 OLLAMA_PORT_11434_TCP_ADDR=10.43.70.198 OLLAMA_SERVICE_PORT_HTTP=11434 OLLAMA_SERVICE_SERVICE_HOST=10.43.19.255 OLLAMA_SERVICE_PORT=tcp://10.43.19.255:8080 OLLAMA_PORT_11434_TCP_PROTO=tcp OLLAMA_PORT=tcp://10.43.70.198:11434 OLLAMA_SERVICE_SERVICE_PORT_API=8080 OLLAMA_SERVICE_SERVICE_PORT_API2=11434 OLLAMA_SERVICE_SERVICE_PORT=8080 OLLAMA_SERVICE_PORT_11434_TCP_PORT=11434 OLLAMA_PORT_11434_TCP_PORT=11434 OLLAMA_SERVICE_PORT_11434_TCP=tcp://10.43.19.255:11434 OLLAMA_SERVICE_PORT_8080_TCP_PROTO=tcp OLLAMA_SERVICE_PORT_8080_TCP=tcp://10.43.19.255:8080 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v13 CUDA_VISIBLE_DEVICES=GPU-7327fca2-18fa-b021-ba3d-8499a04c9c81 GGML_CUDA_INIT=1
time=2026-01-20T19:32:02.065Z level=INFO source=runner.go:1405 msg="starting ollama engine"
time=2026-01-20T19:32:02.066Z level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:37869"
time=2026-01-20T19:32:02.074Z level=DEBUG source=gguf.go:589 msg=general.architecture type=string
time=2026-01-20T19:32:02.074Z level=DEBUG source=gguf.go:589 msg=tokenizer.ggml.model type=string
time=2026-01-20T19:32:02.074Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=general.alignment default=32
time=2026-01-20T19:32:02.074Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=general.alignment default=32
time=2026-01-20T19:32:02.074Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=general.file_type default=0
time=2026-01-20T19:32:02.074Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=general.name default=""
time=2026-01-20T19:32:02.074Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=general.description default=""
time=2026-01-20T19:32:02.074Z level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2026-01-20T19:32:02.074Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-skylakex.so
time=2026-01-20T19:32:02.080Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama/cuda_v13
ggml_cuda_init: failed to initialize CUDA: no CUDA-capable device is detected
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v13/libggml-cuda.so
time=2026-01-20T19:32:02.197Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2026-01-20T19:32:02.197Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.block_count default=0
time=2026-01-20T19:32:02.198Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.pooling_type default=0
time=2026-01-20T19:32:02.198Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.expert_count default=0
time=2026-01-20T19:32:02.198Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2026-01-20T19:32:02.198Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2026-01-20T19:32:02.198Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2026-01-20T19:32:02.198Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2026-01-20T19:32:02.198Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2026-01-20T19:32:02.198Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2026-01-20T19:32:02.198Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2026-01-20T19:32:02.198Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2026-01-20T19:32:02.198Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2026-01-20T19:32:02.198Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2026-01-20T19:32:02.198Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.block_count default=0
time=2026-01-20T19:32:02.198Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.embedding_length default=0
time=2026-01-20T19:32:02.198Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.attention.head_count default=0
time=2026-01-20T19:32:02.198Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2026-01-20T19:32:02.198Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.attention.key_length default=0
time=2026-01-20T19:32:02.198Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2026-01-20T19:32:02.198Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2026-01-20T19:32:02.198Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2026-01-20T19:32:02.198Z level=DEBUG source=ggml.go:297 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2026-01-20T19:32:02.198Z level=DEBUG source=runner.go:1380 msg="dummy model load took" duration=123.88735ms
time=2026-01-20T19:32:02.198Z level=DEBUG source=runner.go:1385 msg="gathering device infos took" duration=497ns
time=2026-01-20T19:32:02.198Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" devices=[]
time=2026-01-20T19:32:02.198Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=145.955602ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs="map[CUDA_VISIBLE_DEVICES:GPU-7327fca2-18fa-b021-ba3d-8499a04c9c81 GGML_CUDA_INIT:1]"
time=2026-01-20T19:32:02.198Z level=DEBUG source=runner.go:153 msg="filtering device which didn't fully initialize" id=GPU-7327fca2-18fa-b021-ba3d-8499a04c9c81 libdir=/usr/lib/ollama/cuda_v13 pci_id=0000:af:00.0 library=CUDA
time=2026-01-20T19:32:02.198Z level=TRACE source=runner.go:174 msg="supported GPU library combinations before filtering" supported=map[]
time=2026-01-20T19:32:02.198Z level=TRACE source=runner.go:183 msg="removing unsupported or overlapping GPU combination" libDir=/usr/lib/ollama/cuda_v13 description="NVIDIA A100 80GB PCIe MIG 7g.80gb" compute=8.0 pci_id=0000:af:00.0
time=2026-01-20T19:32:02.198Z level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=474.602195ms
time=2026-01-20T19:32:02.199Z level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="31.2 GiB" available="31.1 GiB"
time=2026-01-20T19:32:02.199Z level=INFO source=routes.go:1720 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"
[GIN] 2026/01/20 - 19:32:14 | 200 |      47.906µs |  172.24.192.128 | GET      "/"
```
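Reading the trace: the bootstrap pass enumerates the MIG instance under its parent GPU UUID (GPU-7327fca2-...), but the verification pass that relaunches the runner with CUDA_VISIBLE_DEVICES set to that same UUID fails with "no CUDA-capable device is detected", so the device is filtered out and Ollama falls back to CPU. On MIG-enabled GPUs, CUDA normally expects the MIG UUID rather than the parent GPU UUID in CUDA_VISIBLE_DEVICES, which would be consistent with this failure. A quick check from inside the pod (a sketch; assumes nvidia-smi is available in the container):

```shell
# List devices as seen inside the container. With MIG enabled, compute
# instances carry MIG-prefixed UUIDs distinct from the parent GPU UUID.
nvidia-smi -L
# Illustrative output shape:
#   GPU 0: NVIDIA A100 80GB PCIe (UUID: GPU-7327fca2-...)
#     MIG 7g.80gb Device 0: (UUID: MIG-...)

# CUDA selects a MIG instance via its MIG UUID; selecting by the parent
# GPU UUID typically enumerates no device, matching the log above.
# ("deviceQuery" is just an example CUDA binary, not part of Ollama.)
CUDA_VISIBLE_DEVICES=MIG-... ./deviceQuery
```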

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.14.3-rc2

GiteaMirror added the bug label 2026-04-29 09:23:55 -05:00
Author
Owner

@rick-github commented on GitHub (Jan 20, 2026):

Can you pinpoint at which version the detection started to fail?

Author
Owner

@jessiewbailey commented on GitHub (Jan 20, 2026):

The issues seem to start with 0.13.0.

Author
Owner

@rick-github commented on GitHub (Jan 21, 2026):

Can you provide a log with `OLLAMA_DEBUG=2` for both 0.12.11 and 0.13.0?

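For k8s deployments, one way to capture those logs (a sketch; the deployment name is a placeholder):

```shell
# Enable verbose debug output, wait for the rollout, then save the
# startup log. Repeat per image version under test.
kubectl set env deployment/ollama OLLAMA_DEBUG=2
kubectl rollout status deployment/ollama
kubectl logs deploy/ollama > ollama-debug.log
```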
Author
Owner

@jessiewbailey commented on GitHub (Jan 21, 2026):

0.12.11: [ollama-non-root-test-66f6c7d9d7-wbfvv_ollama_v-0-12-11.log](https://github.com/user-attachments/files/24754842/ollama-non-root-test-66f6c7d9d7-wbfvv_ollama_v-0-12-11.log)
0.13.0: [ollama-non-root-test-c99b68bd6-cb4kv_ollama-v0-13-0.log](https://github.com/user-attachments/files/24754838/ollama-non-root-test-c99b68bd6-cb4kv_ollama-v0-13-0.log)
Author
Owner

@rick-github commented on GitHub (Jan 21, 2026):

`OLLAMA_DEBUG=2`

Author
Owner

@jessiewbailey commented on GitHub (Jan 21, 2026):

Sorry about that! I also misidentified the version where it stops working: it works fine up to 0.13.1. I have posted working logs for 0.13.1 and logs showing the failure in 0.13.2.

[ollama-non-root-test-0-13-1-working.log](https://github.com/user-attachments/files/24769838/ollama-non-root-test-0-13-1-working.log)
[ollama-non-root-test-0-13-2-not-working.log](https://github.com/user-attachments/files/24769839/ollama-non-root-test-0-13-2-not-working.log)

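For anyone reproducing the bisection, pinning the image tag per test run is enough (a sketch; deployment and container names are placeholders):

```shell
# Swap only the image tag between runs, e.g. 0.13.1 (works) vs 0.13.2 (fails).
kubectl set image deployment/ollama ollama=ollama/ollama:0.13.1
kubectl rollout status deployment/ollama
```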
Author
Owner

@devilmetal commented on GitHub (Mar 13, 2026):

A merged fix for this would be appreciated; we are unable to use up-to-date models.

Author
Owner

@jessiewbailey commented on GitHub (Mar 24, 2026):

Do I need to do something else to get this fixed? I think it is waiting on a code review but I am not certain.

Reference: github-starred/ollama#55552