[GH-ISSUE #13569] Ollama does not use GPU for AMD Radeon RX 6800 XT #70994

Closed
opened 2026-05-04 23:41:32 -05:00 by GiteaMirror · 7 comments

Originally created by @Tusenka on GitHub (Dec 26, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13569

What is the issue?

ollama ps

NAME                       ID              SIZE     PROCESSOR    CONTEXT    UNTIL
blackened/t-lite:latest    f7dfa19f99d5    15 GB    100% CPU     4096       4 minutes from now

lspci:

08:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 21 [Radeon RX 6800/6800 XT / 6900 XT] (rev c1)

OS:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=25.10
DISTRIB_CODENAME=questing
DISTRIB_DESCRIPTION="Ubuntu 25.10"

There are no warnings about the GPU, but it utilizes only the CPU.
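
A quick host-side sanity check that the kernel actually sees the card (a minimal sketch, assuming the in-tree amdgpu driver and the usual ROCm device nodes):

lspci -k | grep -A 3 -i vga    # "Kernel driver in use: amdgpu" should appear
ls -l /dev/kfd /dev/dri        # ROCm compute needs access to these device nodes
groups                         # render/video group membership is typically required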

Relevant log output

logs:

> time=2025-12-26T17:49:41.398+03:00 level=INFO source=routes.go:1554 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/cas12/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
> time=2025-12-26T17:49:41.399+03:00 level=INFO source=images.go:493 msg="total blobs: 8"
> time=2025-12-26T17:49:41.399+03:00 level=INFO source=images.go:500 msg="total unused blobs removed: 0"
> time=2025-12-26T17:49:41.399+03:00 level=INFO source=routes.go:1607 msg="Listening on 127.0.0.1:11434 (version 0.13.5)"
> time=2025-12-26T17:49:41.399+03:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
> time=2025-12-26T17:49:41.400+03:00 level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 46787"
> time=2025-12-26T17:49:41.433+03:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="30.7 GiB" available="18.2 GiB"
> time=2025-12-26T17:49:41.433+03:00 level=INFO source=routes.go:1648 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.13.5

GiteaMirror added the bug label 2026-05-04 23:41:32 -05:00

@rick-github commented on GitHub (Dec 26, 2025):

Set OLLAMA_DEBUG=2 in the server environment and post the log.
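
For a systemd-managed install (a minimal sketch; this assumes the stock ollama.service laid down by the Linux installer), the variable can be set via an override:

sudo systemctl edit ollama.service    # add the two lines below in the override
  [Service]
  Environment="OLLAMA_DEBUG=2"
sudo systemctl restart ollama
journalctl -u ollama -f               # follow the server log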


@anumukul commented on GitHub (Dec 26, 2025):

Hi, I’d like to take this issue and can deliver a fix within 24 hours.
I’ve worked on similar projects before and have relevant experience, so I should be able to handle this efficiently.


@Tusenka commented on GitHub (Dec 29, 2025):

@rick-github, @anumukul here are the logs:

time=2025-12-29T10:47:27.882+03:00 level=INFO source=routes.go:1554 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/cas12/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-12-29T10:47:27.888+03:00 level=INFO source=images.go:493 msg="total blobs: 13"
time=2025-12-29T10:47:27.889+03:00 level=INFO source=images.go:500 msg="total unused blobs removed: 0"
time=2025-12-29T10:47:27.890+03:00 level=INFO source=routes.go:1607 msg="Listening on 127.0.0.1:11434 (version 0.13.5)"
time=2025-12-29T10:47:27.890+03:00 level=DEBUG source=sched.go:120 msg="starting llm scheduler"
time=2025-12-29T10:47:27.891+03:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-12-29T10:47:27.891+03:00 level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs=[/usr/local/lib/ollama] extraEnvs=map[]
time=2025-12-29T10:47:27.893+03:00 level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 41009"
time=2025-12-29T10:47:27.893+03:00 level=DEBUG source=server.go:430 msg=subprocess PATH=/home/cas12/.codon/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/snap/bin:/home/cas12/.local/bin OLLAMA_DEBUG=2 LD_LIBRARY_PATH=/usr/local/lib/ollama OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama
time=2025-12-29T10:47:27.926+03:00 level=INFO source=runner.go:1405 msg="starting ollama engine"
time=2025-12-29T10:47:27.927+03:00 level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:41009"
time=2025-12-29T10:47:27.936+03:00 level=DEBUG source=gguf.go:589 msg=general.architecture type=string
time=2025-12-29T10:47:27.936+03:00 level=DEBUG source=gguf.go:589 msg=tokenizer.ggml.model type=string
time=2025-12-29T10:47:27.937+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32
time=2025-12-29T10:47:27.937+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32
time=2025-12-29T10:47:27.937+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.file_type default=0
time=2025-12-29T10:47:27.937+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.name default=""
time=2025-12-29T10:47:27.937+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.description default=""
time=2025-12-29T10:47:27.937+03:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2025-12-29T10:47:27.937+03:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
time=2025-12-29T10:47:27.939+03:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-12-29T10:47:27.946+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.block_count default=0
time=2025-12-29T10:47:27.946+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.pooling_type default=0
time=2025-12-29T10:47:27.946+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.expert_count default=0
time=2025-12-29T10:47:27.946+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2025-12-29T10:47:27.946+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2025-12-29T10:47:27.946+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2025-12-29T10:47:27.946+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2025-12-29T10:47:27.946+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2025-12-29T10:47:27.946+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2025-12-29T10:47:27.946+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2025-12-29T10:47:27.946+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2025-12-29T10:47:27.946+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-12-29T10:47:27.946+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2025-12-29T10:47:27.947+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.block_count default=0
time=2025-12-29T10:47:27.947+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.embedding_length default=0
time=2025-12-29T10:47:27.947+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.attention.head_count default=0
time=2025-12-29T10:47:27.947+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2025-12-29T10:47:27.947+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.attention.key_length default=0
time=2025-12-29T10:47:27.947+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2025-12-29T10:47:27.947+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2025-12-29T10:47:27.947+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2025-12-29T10:47:27.947+03:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2025-12-29T10:47:27.948+03:00 level=DEBUG source=runner.go:1380 msg="dummy model load took" duration=11.74997ms
time=2025-12-29T10:47:27.948+03:00 level=DEBUG source=runner.go:1385 msg="gathering device infos took" duration=812ns
time=2025-12-29T10:47:27.948+03:00 level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH=[/usr/local/lib/ollama] devices=[]
time=2025-12-29T10:47:27.948+03:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=57.102631ms OLLAMA_LIBRARY_PATH=[/usr/local/lib/ollama] extra_envs=map[]
time=2025-12-29T10:47:27.948+03:00 level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=0
time=2025-12-29T10:47:27.948+03:00 level=TRACE source=runner.go:174 msg="supported GPU library combinations before filtering" supported=map[]
time=2025-12-29T10:47:27.948+03:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=58.025186ms
time=2025-12-29T10:47:27.949+03:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="30.7 GiB" available="15.9 GiB"
time=2025-12-29T10:47:27.949+03:00 level=INFO source=routes.go:1648 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"

Thank you


@rick-github commented on GitHub (Dec 29, 2025):

What's the output of:

ls -lR /usr/local/lib/ollama

@Tusenka commented on GitHub (Dec 30, 2025):


/usr/local/lib/ollama:
total 4
drwx------ 2 root root 4096 Dec 25 15:47 cuda_v12

/usr/local/lib/ollama/cuda_v12:
total 507352
lrwxrwxrwx 1 root root 21 Dec 19 00:26 libcublas.so.12 -> libcublas.so.12.8.4.1
-rwxr-xr-x 1 root root 116388640 Jul 8 2015 libcublas.so.12.8.4.1
lrwxrwxrwx 1 root root 23 Dec 19 00:26 libcublasLt.so.12 -> libcublasLt.so.12.8.4.1
-rwx------ 1 root root 402396672 Dec 25 15:52 libcublasLt.so.12.8.4.1
-rwxr-xr-x 1 root root 728800 Jul 8 2015 libcudart.so.12.8.90


@rick-github commented on GitHub (Dec 30, 2025):

Your installation is incomplete; it should look like this:

/usr/local/lib/ollama:
total 6476
drwxr-xr-x 2 root root    4096 Dez 18 22:26 cuda_v12
drwxr-xr-x 2 root root    4096 Dez 18 22:22 cuda_v13
lrwxrwxrwx 1 root root      17 Dez 18 22:07 libggml-base.so -> libggml-base.so.0
lrwxrwxrwx 1 root root      21 Dez 18 22:07 libggml-base.so.0 -> libggml-base.so.0.0.0
-rwxr-xr-x 1 root root  744056 Dez 18 22:07 libggml-base.so.0.0.0
-rwxr-xr-x 1 root root  873912 Dez 18 22:07 libggml-cpu-alderlake.so
-rwxr-xr-x 1 root root  873912 Dez 18 22:07 libggml-cpu-haswell.so
-rwxr-xr-x 1 root root 1009080 Dez 18 22:07 libggml-cpu-icelake.so
-rwxr-xr-x 1 root root  820728 Dez 18 22:07 libggml-cpu-sandybridge.so
-rwxr-xr-x 1 root root 1009080 Dez 18 22:07 libggml-cpu-skylakex.so
-rwxr-xr-x 1 root root  636536 Dez 18 22:07 libggml-cpu-sse42.so
-rwxr-xr-x 1 root root  632472 Dez 18 22:07 libggml-cpu-x64.so
drwxr-xr-x 2 root root    4096 Dez 18 22:07 vulkan

/usr/local/lib/ollama/cuda_v12:
total 2477724
lrwxrwxrwx 1 root root         23 Dez 18 22:26 libcublasLt.so.12 -> libcublasLt.so.12.8.4.1
-rwxr-xr-x 1 root root  751771728 Jul  8  2015 libcublasLt.so.12.8.4.1
lrwxrwxrwx 1 root root         21 Dez 18 22:26 libcublas.so.12 -> libcublas.so.12.8.4.1
-rwxr-xr-x 1 root root  116388640 Jul  8  2015 libcublas.so.12.8.4.1
lrwxrwxrwx 1 root root         20 Dez 18 22:26 libcudart.so.12 -> libcudart.so.12.8.90
-rwxr-xr-x 1 root root     728800 Jul  8  2015 libcudart.so.12.8.90
-rwxr-xr-x 1 root root 1668281616 Dez 18 22:26 libggml-cuda.so

/usr/local/lib/ollama/cuda_v13:
total 949156
lrwxrwxrwx 1 root root        23 Dez 18 22:22 libcublasLt.so.13 -> libcublasLt.so.13.1.0.3
-rwxr-xr-x 1 root root 541595600 Jul  8  2015 libcublasLt.so.13.1.0.3
lrwxrwxrwx 1 root root        21 Dez 18 22:22 libcublas.so.13 -> libcublas.so.13.1.0.3
-rwxr-xr-x 1 root root  54177976 Jul  8  2015 libcublas.so.13.1.0.3
lrwxrwxrwx 1 root root        20 Dez 18 22:22 libcudart.so.13 -> libcudart.so.13.0.96
-rwxr-xr-x 1 root root    704288 Jul  8  2015 libcudart.so.13.0.96
-rwxr-xr-x 1 root root 375444752 Dez 18 22:22 libggml-cuda.so

/usr/local/lib/ollama/vulkan:
total 55364
-rwxr-xr-x 1 root root 52220200 Dez 18 22:07 libggml-vulkan.so
lrwxrwxrwx 1 root root       20 Dez 18 22:07 libvulkan.so.1 -> libvulkan.so.1.4.321
-rwxr-xr-x 1 root root  4466776 Dez 18 22:06 libvulkan.so.1.4.321

How did you install ollama? I recommend re-installing with:

curl -fsSL https://ollama.com/install.sh | sh
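
After reinstalling, GPU discovery can be sanity-checked (assuming a systemd-managed service) with something like:

ls /usr/local/lib/ollama                                   # expect libggml-* files plus backend subdirectories
journalctl -u ollama -n 100 | grep -i "inference compute"  # should report a GPU device, not only cpu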

@Tusenka commented on GitHub (Jan 5, 2026):

@rick-github, thank you so much!
I was able to run ollama on the GPU by reinstalling it and installing the AMD ROCm components:
sudo tar zxf ollama-linux-amd64-rocm.tgz -C /usr
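
For reference, the full manual-install sequence for AMD GPUs (a sketch following Ollama's Linux install instructions; the ROCm bundle is extracted over the same /usr prefix as the main tarball):

curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
sudo tar -C /usr -xzf ollama-linux-amd64.tgz
curl -L https://ollama.com/download/ollama-linux-amd64-rocm.tgz -o ollama-linux-amd64-rocm.tgz
sudo tar -C /usr -xzf ollama-linux-amd64-rocm.tgz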

Reference: github-starred/ollama#70994