[GH-ISSUE #13156] Vulkan Backend on R5 3500U #55215

Closed
opened 2026-04-29 08:31:28 -05:00 by GiteaMirror · 9 comments
Owner

Originally created by @leokernel on GitHub (Nov 19, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13156

What is the issue?

I'm trying to run Ollama with the Vulkan backend on a machine running Proxmox VE.
Ollama runs in a privileged LXC container with the /dev/dri/ devices passed through. vulkaninfo seems to work, but Ollama always enters low VRAM mode; I guess the problem is that Ollama can't read the VRAM capacity.
I also tried with a dummy X session, but it didn't work.
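For context, GPU passthrough to a privileged LXC container on Proxmox is typically configured with entries like the following in the container config. This is a sketch, not the poster's actual config: the `<CTID>` path is a placeholder, and the DRM major number (226) and mount entry should be verified against your own host.

```
# /etc/pve/lxc/<CTID>.conf -- hypothetical passthrough entries
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```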

Relevant log output

FROM VULKANINFO 
Devices:

========

GPU0:

        apiVersion         = 1.4.305

        driverVersion      = 25.0.7

        vendorID           = 0x1002

        deviceID           = 0x15d8

        deviceType         = PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU

        deviceName         = AMD Radeon Vega 8 Graphics (RADV RAVEN)

        driverID           = DRIVER_ID_MESA_RADV

        driverName         = radv

        driverInfo         = Mesa 25.0.7-0ubuntu0.24.04.2

        conformanceVersion = 1.4.0.0

        deviceUUID         = 00000000-0500-0000-0000-000000000000

        driverUUID         = 414d442d-4d45-5341-2d44-525600000000

GPU1:

        apiVersion         = 1.4.305

        driverVersion      = 0.0.1

        vendorID           = 0x10005

        deviceID           = 0x0000

        deviceType         = PHYSICAL_DEVICE_TYPE_CPU

        deviceName         = llvmpipe (LLVM 20.1.2, 256 bits)

        driverID           = DRIVER_ID_MESA_LLVMPIPE

        driverName         = llvmpipe

        driverInfo         = Mesa 25.0.7-0ubuntu0.24.04.2 (LLVM 20.1.2)

        conformanceVersion = 1.3.1.1

        deviceUUID         = 6d657361-3235-2e30-2e37-2d3075627500

        driverUUID         = 6c6c766d-7069-7065-5555-494400000000

FROM OLLAMA

root@ollama:~# OLLAMA_VULKAN=1 ollama serve

Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.

Your new public key is: 



ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINj3TMe5hPHV4OAY+uv+f4tQwQhSTIz6qaH7zTEKdxVK



time=2025-11-18T16:00:10.952+01:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"

time=2025-11-18T16:00:10.952+01:00 level=INFO source=images.go:522 msg="total blobs: 0"

time=2025-11-18T16:00:10.952+01:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"

time=2025-11-18T16:00:10.952+01:00 level=INFO source=routes.go:1597 msg="Listening on 127.0.0.1:11434 (version 0.12.11)"

time=2025-11-18T16:00:10.953+01:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."

time=2025-11-18T16:00:10.954+01:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 36187"

time=2025-11-18T16:00:10.992+01:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 36067"

time=2025-11-18T16:00:11.024+01:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 40965"

time=2025-11-18T16:00:11.122+01:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="3.9 GiB" available="3.8 GiB"

time=2025-11-18T16:00:11.122+01:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.12.11

GiteaMirror added the bug label 2026-04-29 08:31:28 -05:00
Author
Owner

@rick-github commented on GitHub (Nov 19, 2025):

Ollama is not detecting the devices. Run with `OLLAMA_DEBUG=2` to get more information about device discovery.

Author
Owner

@leokernel commented on GitHub (Nov 19, 2025):

root@ollama:~# OLLAMA_DEBUG=2 OLLAMA_VULKAN=1 ollama serve time=2025-11-19T23:04:38.350+01:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" time=2025-11-19T23:04:38.352+01:00 level=INFO source=images.go:522 msg="total blobs: 0" time=2025-11-19T23:04:38.352+01:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0" time=2025-11-19T23:04:38.352+01:00 level=INFO source=routes.go:1597 msg="Listening on 127.0.0.1:11434 (version 0.12.11)" time=2025-11-19T23:04:38.352+01:00 level=DEBUG source=sched.go:120 msg="starting llm scheduler" time=2025-11-19T23:04:38.352+01:00 level=INFO source=runner.go:67 msg="discovering available GPUs..." 
time=2025-11-19T23:04:38.352+01:00 level=TRACE source=runner.go:421 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extraEnvs=map[] time=2025-11-19T23:04:38.353+01:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 36995" time=2025-11-19T23:04:38.353+01:00 level=DEBUG source=server.go:393 msg=subprocess OLLAMA_VULKAN=1 OLLAMA_DEBUG=2 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 time=2025-11-19T23:04:38.369+01:00 level=INFO source=runner.go:1398 msg="starting ollama engine" time=2025-11-19T23:04:38.369+01:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:36995" time=2025-11-19T23:04:38.375+01:00 level=DEBUG source=gguf.go:590 msg=general.architecture type=string time=2025-11-19T23:04:38.375+01:00 level=DEBUG source=gguf.go:590 msg=tokenizer.ggml.model type=string time=2025-11-19T23:04:38.375+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-11-19T23:04:38.376+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-11-19T23:04:38.376+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-11-19T23:04:38.376+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default="" time=2025-11-19T23:04:38.376+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default="" time=2025-11-19T23:04:38.376+01:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3 time=2025-11-19T23:04:38.376+01:00 level=DEBUG source=ggml.go:94 msg="ggml 
backend load all from path" path=/usr/local/lib/ollama load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so time=2025-11-19T23:04:38.385+01:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v12 time=2025-11-19T23:04:38.387+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc) time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.pooling_type default=0 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.expert_count default=0 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}" time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}" time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}" time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}" time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with 
type not found" key=tokenizer.ggml.eos_token_id default=0 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}" time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.pre default="" time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.embedding_length default=0 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count default=0 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count_kv default=0 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=0 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.dimension_count default=0 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.freq_base default=100000 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.scaling.factor default=1 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=runner.go:1373 msg="dummy model load took" duration=12.08629ms time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=runner.go:1378 msg="gathering device infos took" duration=752ns time=2025-11-19T23:04:38.388+01:00 level=TRACE source=runner.go:448 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" 
devices=[] time=2025-11-19T23:04:38.388+01:00 level=DEBUG source=runner.go:418 msg="bootstrap discovery took" duration=35.322288ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extra_envs=map[] time=2025-11-19T23:04:38.388+01:00 level=TRACE source=runner.go:421 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" extraEnvs=map[] time=2025-11-19T23:04:38.389+01:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 33291" time=2025-11-19T23:04:38.389+01:00 level=DEBUG source=server.go:393 msg=subprocess OLLAMA_VULKAN=1 OLLAMA_DEBUG=2 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v13 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v13 time=2025-11-19T23:04:38.407+01:00 level=INFO source=runner.go:1398 msg="starting ollama engine" time=2025-11-19T23:04:38.407+01:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:33291" time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=gguf.go:590 msg=general.architecture type=string time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=gguf.go:590 msg=tokenizer.ggml.model type=string time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default="" time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default="" 
time=2025-11-19T23:04:38.410+01:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3 time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so time=2025-11-19T23:04:38.423+01:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v13 time=2025-11-19T23:04:38.424+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc) time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0 time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.pooling_type default=0 time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.expert_count default=0 time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}" time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}" time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}" time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}" time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=tokenizer.ggml.bos_token_id default=0 time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0 time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}" time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.pre default="" time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0 time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.embedding_length default=0 time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count default=0 time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count_kv default=0 time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=0 time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.dimension_count default=0 time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0 time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.freq_base default=100000 time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.scaling.factor default=1 time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=runner.go:1373 msg="dummy model load took" duration=14.720219ms time=2025-11-19T23:04:38.425+01:00 
level=DEBUG source=runner.go:1378 msg="gathering device infos took" duration=521ns time=2025-11-19T23:04:38.425+01:00 level=TRACE source=runner.go:448 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" devices=[] time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=runner.go:418 msg="bootstrap discovery took" duration=37.299323ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" extra_envs=map[] time=2025-11-19T23:04:38.425+01:00 level=TRACE source=runner.go:421 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/rocm]" extraEnvs=map[] time=2025-11-19T23:04:38.425+01:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 44321" time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=server.go:393 msg=subprocess OLLAMA_VULKAN=1 OLLAMA_DEBUG=2 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/rocm OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/rocm time=2025-11-19T23:04:38.443+01:00 level=INFO source=runner.go:1398 msg="starting ollama engine" time=2025-11-19T23:04:38.444+01:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:44321" time=2025-11-19T23:04:38.446+01:00 level=DEBUG source=gguf.go:590 msg=general.architecture type=string time=2025-11-19T23:04:38.446+01:00 level=DEBUG source=gguf.go:590 msg=tokenizer.ggml.model type=string time=2025-11-19T23:04:38.447+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-11-19T23:04:38.447+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-11-19T23:04:38.447+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 
time=2025-11-19T23:04:38.447+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default="" time=2025-11-19T23:04:38.447+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default="" time=2025-11-19T23:04:38.447+01:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3 time=2025-11-19T23:04:38.447+01:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so time=2025-11-19T23:04:38.460+01:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/rocm ggml_cuda_init: failed to initialize ROCm: no ROCm-capable device is detected load_backend: loaded ROCm backend from /usr/local/lib/ollama/rocm/libggml-hip.so time=2025-11-19T23:04:38.511+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc) time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.pooling_type default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.expert_count default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}" time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}" time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}" 
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}" time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}" time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.pre default="" time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.embedding_length default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count_kv default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.dimension_count default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0 time=2025-11-19T23:04:38.511+01:00 
level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.freq_base default=100000 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.scaling.factor default=1 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=runner.go:1373 msg="dummy model load took" duration=64.945028ms time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=runner.go:1378 msg="gathering device infos took" duration=731ns time=2025-11-19T23:04:38.512+01:00 level=TRACE source=runner.go:448 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/rocm]" devices=[] time=2025-11-19T23:04:38.512+01:00 level=DEBUG source=runner.go:418 msg="bootstrap discovery took" duration=86.510899ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/rocm]" extra_envs=map[] time=2025-11-19T23:04:38.512+01:00 level=DEBUG source=runner.go:116 msg="evluating which if any devices to filter out" initial_count=0 time=2025-11-19T23:04:38.512+01:00 level=TRACE source=runner.go:156 msg="supported GPU library combinations before filtering" supported=map[] time=2025-11-19T23:04:38.512+01:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=159.773335ms time=2025-11-19T23:04:38.513+01:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="3.9 GiB" available="3.8 GiB" time=2025-11-19T23:04:38.513+01:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"

This is the output. As far as I understand, it still doesn't find the GPU, even though the installer reported "AMD GPU READY". I'm also adding this in case it's helpful:
root@ollama:~# ls -l /dev/dri
total 0
drwxr-xr-x 2 root root         80 nov 19 07:30 by-path
crw-rw---- 1 root video  226,   1 nov 19 07:30 card1
crw-rw---- 1 root render 226, 128 nov 19 07:30 renderD128
root@ollama:~#
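As a quick sanity check on a listing like the one above, one can verify that the render node Mesa's RADV driver opens is a character device accessible to the current process. This is a minimal sketch; the `renderD128` path is taken from the listing and may differ on other hosts.

```python
import os
import stat

def render_node_accessible(path="/dev/dri/renderD128"):
    """Return True if `path` is a character device this process can read and write."""
    try:
        st = os.stat(path)
    except (FileNotFoundError, PermissionError):
        return False
    return stat.S_ISCHR(st.st_mode) and os.access(path, os.R_OK | os.W_OK)

print(render_node_accessible())
```

If this prints False inside the container while the device node exists on the host, the passthrough or group permissions are the first thing to revisit.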

type not found" key=tokenizer.ggml.eos_token_id default=0 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}" time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.pre default="" time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.embedding_length default=0 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count default=0 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count_kv default=0 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=0 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.dimension_count default=0 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.freq_base default=100000 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.scaling.factor default=1 time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=runner.go:1373 msg="dummy model load took" duration=12.08629ms time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=runner.go:1378 msg="gathering device infos took" duration=752ns time=2025-11-19T23:04:38.388+01:00 level=TRACE source=runner.go:448 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" 
devices=[] time=2025-11-19T23:04:38.388+01:00 level=DEBUG source=runner.go:418 msg="bootstrap discovery took" duration=35.322288ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extra_envs=map[] time=2025-11-19T23:04:38.388+01:00 level=TRACE source=runner.go:421 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" extraEnvs=map[] time=2025-11-19T23:04:38.389+01:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 33291" time=2025-11-19T23:04:38.389+01:00 level=DEBUG source=server.go:393 msg=subprocess OLLAMA_VULKAN=1 OLLAMA_DEBUG=2 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v13 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v13 time=2025-11-19T23:04:38.407+01:00 level=INFO source=runner.go:1398 msg="starting ollama engine" time=2025-11-19T23:04:38.407+01:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:33291" time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=gguf.go:590 msg=general.architecture type=string time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=gguf.go:590 msg=tokenizer.ggml.model type=string time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default="" time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default="" 
time=2025-11-19T23:04:38.410+01:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3 time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so time=2025-11-19T23:04:38.423+01:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v13 time=2025-11-19T23:04:38.424+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc) time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0 time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.pooling_type default=0 time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.expert_count default=0 time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}" time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}" time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}" time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}" time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" 
key=tokenizer.ggml.bos_token_id default=0 time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0 time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}" time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.pre default="" time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0 time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.embedding_length default=0 time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count default=0 time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count_kv default=0 time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=0 time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.dimension_count default=0 time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0 time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.freq_base default=100000 time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.scaling.factor default=1 time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=runner.go:1373 msg="dummy model load took" duration=14.720219ms time=2025-11-19T23:04:38.425+01:00 
level=DEBUG source=runner.go:1378 msg="gathering device infos took" duration=521ns time=2025-11-19T23:04:38.425+01:00 level=TRACE source=runner.go:448 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" devices=[] time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=runner.go:418 msg="bootstrap discovery took" duration=37.299323ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" extra_envs=map[] time=2025-11-19T23:04:38.425+01:00 level=TRACE source=runner.go:421 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/rocm]" extraEnvs=map[] time=2025-11-19T23:04:38.425+01:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 44321" time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=server.go:393 msg=subprocess OLLAMA_VULKAN=1 OLLAMA_DEBUG=2 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/rocm OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/rocm time=2025-11-19T23:04:38.443+01:00 level=INFO source=runner.go:1398 msg="starting ollama engine" time=2025-11-19T23:04:38.444+01:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:44321" time=2025-11-19T23:04:38.446+01:00 level=DEBUG source=gguf.go:590 msg=general.architecture type=string time=2025-11-19T23:04:38.446+01:00 level=DEBUG source=gguf.go:590 msg=tokenizer.ggml.model type=string time=2025-11-19T23:04:38.447+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-11-19T23:04:38.447+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 time=2025-11-19T23:04:38.447+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0 
time=2025-11-19T23:04:38.447+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default="" time=2025-11-19T23:04:38.447+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default="" time=2025-11-19T23:04:38.447+01:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3 time=2025-11-19T23:04:38.447+01:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so time=2025-11-19T23:04:38.460+01:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/rocm ggml_cuda_init: failed to initialize ROCm: no ROCm-capable device is detected load_backend: loaded ROCm backend from /usr/local/lib/ollama/rocm/libggml-hip.so time=2025-11-19T23:04:38.511+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc) time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.pooling_type default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.expert_count default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}" time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}" time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}" 
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}" time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}" time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.pre default="" time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.embedding_length default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count_kv default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.dimension_count default=0 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0 time=2025-11-19T23:04:38.511+01:00 
level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.freq_base default=100000 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.scaling.factor default=1 time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=runner.go:1373 msg="dummy model load took" duration=64.945028ms time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=runner.go:1378 msg="gathering device infos took" duration=731ns time=2025-11-19T23:04:38.512+01:00 level=TRACE source=runner.go:448 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/rocm]" devices=[] time=2025-11-19T23:04:38.512+01:00 level=DEBUG source=runner.go:418 msg="bootstrap discovery took" duration=86.510899ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/rocm]" extra_envs=map[] time=2025-11-19T23:04:38.512+01:00 level=DEBUG source=runner.go:116 msg="evluating which if any devices to filter out" initial_count=0 time=2025-11-19T23:04:38.512+01:00 level=TRACE source=runner.go:156 msg="supported GPU library combinations before filtering" supported=map[] time=2025-11-19T23:04:38.512+01:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=159.773335ms time=2025-11-19T23:04:38.513+01:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="3.9 GiB" available="3.8 GiB" time=2025-11-19T23:04:38.513+01:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB" ` this is the output, for what i understand it still doesnt find the gpu even though after installing it returned AMD GPU READY, i also add this if it can be helpful.. 
`root@ollama:~# ls -l /dev/dri total 0 drwxr-xr-x 2 root root 80 nov 19 07:30 by-path crw-rw---- 1 root video 226, 1 nov 19 07:30 card1 crw-rw---- 1 root render 226, 128 nov 19 07:30 renderD128 root@ollama:~# `

@rick-github commented on GitHub (Nov 19, 2025):

If you could preserve line breaks that would be helpful.

<!-- gh-comment-id:3554830575 -->

@leokernel commented on GitHub (Nov 19, 2025):

I'm sorry, I'm a total noob, what do you mean?

<!-- gh-comment-id:3554833068 -->

@rick-github commented on GitHub (Nov 19, 2025):

It's a wall of text with no line breaks, which makes it difficult to read. The log you added in the first post had line breaks; if you could do that for the new log it would make it easier to read.

<!-- gh-comment-id:3554838773 -->

@leokernel commented on GitHub (Nov 19, 2025):

root@ollama:~# OLLAMA_DEBUG=2 OLLAMA_VULKAN=1 ollama serve
time=2025-11-19T23:04:38.350+01:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-11-19T23:04:38.352+01:00 level=INFO source=images.go:522 msg="total blobs: 0"
time=2025-11-19T23:04:38.352+01:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-11-19T23:04:38.352+01:00 level=INFO source=routes.go:1597 msg="Listening on 127.0.0.1:11434 (version 0.12.11)"
time=2025-11-19T23:04:38.352+01:00 level=DEBUG source=sched.go:120 msg="starting llm scheduler"
time=2025-11-19T23:04:38.352+01:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-11-19T23:04:38.352+01:00 level=TRACE source=runner.go:421 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extraEnvs=map[]
time=2025-11-19T23:04:38.353+01:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 36995"
time=2025-11-19T23:04:38.353+01:00 level=DEBUG source=server.go:393 msg=subprocess OLLAMA_VULKAN=1 OLLAMA_DEBUG=2 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12
time=2025-11-19T23:04:38.369+01:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-11-19T23:04:38.369+01:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:36995"
time=2025-11-19T23:04:38.375+01:00 level=DEBUG source=gguf.go:590 msg=general.architecture type=string
time=2025-11-19T23:04:38.375+01:00 level=DEBUG source=gguf.go:590 msg=tokenizer.ggml.model type=string
time=2025-11-19T23:04:38.375+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-11-19T23:04:38.376+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-11-19T23:04:38.376+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-11-19T23:04:38.376+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default=""
time=2025-11-19T23:04:38.376+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default=""
time=2025-11-19T23:04:38.376+01:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2025-11-19T23:04:38.376+01:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
time=2025-11-19T23:04:38.385+01:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v12
time=2025-11-19T23:04:38.387+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.pooling_type default=0
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.expert_count default=0
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.embedding_length default=0
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count default=0
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=0
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=runner.go:1373 msg="dummy model load took" duration=12.08629ms
time=2025-11-19T23:04:38.387+01:00 level=DEBUG source=runner.go:1378 msg="gathering device infos took" duration=752ns
time=2025-11-19T23:04:38.388+01:00 level=TRACE source=runner.go:448 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" devices=[]
time=2025-11-19T23:04:38.388+01:00 level=DEBUG source=runner.go:418 msg="bootstrap discovery took" duration=35.322288ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extra_envs=map[]
time=2025-11-19T23:04:38.388+01:00 level=TRACE source=runner.go:421 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" extraEnvs=map[]
time=2025-11-19T23:04:38.389+01:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 33291"
time=2025-11-19T23:04:38.389+01:00 level=DEBUG source=server.go:393 msg=subprocess OLLAMA_VULKAN=1 OLLAMA_DEBUG=2 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v13 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v13
time=2025-11-19T23:04:38.407+01:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-11-19T23:04:38.407+01:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:33291"
time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=gguf.go:590 msg=general.architecture type=string
time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=gguf.go:590 msg=tokenizer.ggml.model type=string
time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default=""
time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default=""
time=2025-11-19T23:04:38.410+01:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2025-11-19T23:04:38.410+01:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
time=2025-11-19T23:04:38.423+01:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v13
time=2025-11-19T23:04:38.424+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.pooling_type default=0
time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.expert_count default=0
time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2025-11-19T23:04:38.424+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.embedding_length default=0
time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count default=0
time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=0
time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=runner.go:1373 msg="dummy model load took" duration=14.720219ms
time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=runner.go:1378 msg="gathering device infos took" duration=521ns
time=2025-11-19T23:04:38.425+01:00 level=TRACE source=runner.go:448 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" devices=[]
time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=runner.go:418 msg="bootstrap discovery took" duration=37.299323ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" extra_envs=map[]
time=2025-11-19T23:04:38.425+01:00 level=TRACE source=runner.go:421 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/rocm]" extraEnvs=map[]
time=2025-11-19T23:04:38.425+01:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 44321"
time=2025-11-19T23:04:38.425+01:00 level=DEBUG source=server.go:393 msg=subprocess OLLAMA_VULKAN=1 OLLAMA_DEBUG=2 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/rocm OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/rocm
time=2025-11-19T23:04:38.443+01:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-11-19T23:04:38.444+01:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:44321"
time=2025-11-19T23:04:38.446+01:00 level=DEBUG source=gguf.go:590 msg=general.architecture type=string
time=2025-11-19T23:04:38.446+01:00 level=DEBUG source=gguf.go:590 msg=tokenizer.ggml.model type=string
time=2025-11-19T23:04:38.447+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-11-19T23:04:38.447+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-11-19T23:04:38.447+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.file_type default=0
time=2025-11-19T23:04:38.447+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default=""
time=2025-11-19T23:04:38.447+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default=""
time=2025-11-19T23:04:38.447+01:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2025-11-19T23:04:38.447+01:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
time=2025-11-19T23:04:38.460+01:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/rocm
ggml_cuda_init: failed to initialize ROCm: no ROCm-capable device is detected
load_backend: loaded ROCm backend from /usr/local/lib/ollama/rocm/libggml-hip.so
time=2025-11-19T23:04:38.511+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.pooling_type default=0
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.expert_count default=0
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.block_count default=0
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.embedding_length default=0
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count default=0
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.key_length default=0
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=ggml.go:276 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=runner.go:1373 msg="dummy model load took" duration=64.945028ms
time=2025-11-19T23:04:38.511+01:00 level=DEBUG source=runner.go:1378 msg="gathering device infos took" duration=731ns
time=2025-11-19T23:04:38.512+01:00 level=TRACE source=runner.go:448 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/rocm]" devices=[]
time=2025-11-19T23:04:38.512+01:00 level=DEBUG source=runner.go:418 msg="bootstrap discovery took" duration=86.510899ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/rocm]" extra_envs=map[]
time=2025-11-19T23:04:38.512+01:00 level=DEBUG source=runner.go:116 msg="evluating which if any devices to filter out" initial_count=0
time=2025-11-19T23:04:38.512+01:00 level=TRACE source=runner.go:156 msg="supported GPU library combinations before filtering" supported=map[]
time=2025-11-19T23:04:38.512+01:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=159.773335ms
time=2025-11-19T23:04:38.513+01:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="3.9 GiB" available="3.8 GiB"
time=2025-11-19T23:04:38.513+01:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"

@leokernel commented on GitHub (Nov 19, 2025):

I'm sure there's a better way to share the logs via GitHub; as I said before, I'm a total noob, haha. Thank you for bearing with me.

@rick-github commented on GitHub (Nov 19, 2025):

If you installed using the `curl` method from the ollama downloads page, this looks like https://github.com/ollama/ollama/issues/13104: the Vulkan libraries were accidentally left out of the 0.12.11 tarball. Try installing 0.13.0.
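One way to confirm this kind of missing-backend problem is to look for a Vulkan library among the ggml backends in the install directory (`/usr/local/lib/ollama` per the logs above). The helper below is a hypothetical sketch, and the `libggml-vulkan*.so` filename pattern is an assumption about how the backend library is named; the demo runs against a temporary directory rather than a real install:

```shell
# Hypothetical helper: report whether a Vulkan ggml backend library
# exists in the given directory (filename pattern is an assumption).
check_vulkan_backend() {
  dir="$1"
  if ls "$dir"/libggml-vulkan*.so >/dev/null 2>&1; then
    echo "vulkan backend present"
  else
    echo "vulkan backend missing"
  fi
}

# Demo against a temp dir standing in for /usr/local/lib/ollama:
tmp=$(mktemp -d)
check_vulkan_backend "$tmp"        # prints "vulkan backend missing"
touch "$tmp/libggml-vulkan.so"
check_vulkan_backend "$tmp"        # prints "vulkan backend present"
rm -rf "$tmp"
```

On a real install you would point the helper at `/usr/local/lib/ollama`; if the library is missing, reinstalling a fixed release should restore it.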

@leokernel commented on GitHub (Nov 19, 2025):

This worked!!! OMG, I want to give you a hug. I spent so much time trying to make ROCm work, and then I tried Vulkan and it didn't work either.

Reference: github-starred/ollama#55215