[GH-ISSUE #13023] Intel Iris Xe Graphics (16GB) not detected by Ollama v0.12.10 on Windows 11 despite Vulkan/DXGI+PDH support #70684

Open
opened 2026-05-04 22:33:59 -05:00 by GiteaMirror · 13 comments

Originally created by @deep1305 on GitHub (Nov 9, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13023

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Ollama v0.12.10 fails to detect my Intel Iris Xe Graphics (integrated GPU with 16GB shared memory) on Windows 11, despite the changelog mentioning "Add Vulkan memory detection for Intel GPU using DXGI+PDH". The system falls back to 100% CPU mode with total vram="0 B" and offloaded 0/49 layers to GPU.

Vulkan is properly installed and vulkaninfo correctly lists the Intel GPU, but Ollama never loads a Vulkan backend library and only uses CPU.
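For reference, a quick way to confirm the loader actually sees the GPU from a shell (a sketch; the summary layout varies by SDK version):

vulkaninfo --summary
# expect the Intel device in the device list, e.g.
# GPU0: deviceName = Intel(R) Iris(R) Xe Graphics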

Expected Behavior

  • Intel Iris Xe should be detected via Vulkan/DXGI+PDH
  • Model layers should offload to GPU
  • ollama ps should show a GPU percentage (illustrative example below)
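For example, with layers offloaded, ollama ps reports a GPU share in the PROCESSOR column (illustrative output only; model name, ID, and exact columns vary by Ollama version):

ollama ps
# NAME            ID              SIZE      PROCESSOR    UNTIL
# llama3:latest   365c0bd3c000    5.4 GB    100% GPU     4 minutes from now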

Actual Behavior

  • 100% CPU mode
  • total vram="0 B"
  • offloaded 0/49 layers to GPU
  • No Vulkan backend library loaded

Relevant log output

time=2025-11-06T18:33:02.776-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-11-06T18:33:02.776-05:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="31.7 GiB" available="23.1 GiB"
time=2025-11-06T18:33:02.776-05:00 level=INFO source=routes.go:1619 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"


load_backend: loaded CPU backend from C:\Users\smart\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll

time=2025-11-06T18:33:10.143-05:00 level=INFO source=ggml.go:482 msg="offloading 0 repeating layers to GPU"
time=2025-11-06T18:33:10.144-05:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2025-11-06T18:33:10.144-05:00 level=INFO source=ggml.go:494 msg="offloaded 0/49 layers to GPU"

OS

Windows

GPU

Intel

CPU

No response

Ollama version

0.12.10

GiteaMirror added the bug and needs more info labels 2026-05-04 22:33:59 -05:00

@rick-github commented on GitHub (Nov 9, 2025):

The Vulkan backend is not yet enabled in production releases. If you want to use the Vulkan backend, install the Vulkan SDK (https://vulkan.lunarg.com/) and set VULKAN_SDK in your environment, then follow the developer instructions (https://github.com/ollama/ollama/blob/main/docs/development.md). In a future release, Vulkan support will be included in the binary release as well. Please file issues if you run into any problems.
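A minimal sketch of that flow on Windows (PowerShell; the SDK path and version are placeholders, and the authoritative steps are in development.md):

# placeholder path: point at your actual Vulkan SDK install
$env:VULKAN_SDK = "C:\VulkanSDK\1.3.290.0"
# build the GPU backends, then run the server from source
cmake -B build
cmake --build build
go run . serve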


@pdevine commented on GitHub (Nov 12, 2025):

@deep1305 were you able to get it to work? We will enable it by default soon; still trying to get people to try it out and report bugs.


@ndragon798 commented on GitHub (Nov 12, 2025):

Hey @pdevine, I just tested building and running with the latest Vulkan SDK and my Intel B50. Everything works great on Fedora 42:

time=2025-11-12T10:10:42.881-05:00 level=INFO source=routes.go:1597 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2025-11-12T10:10:42.882-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-11-12T10:10:42.882-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="/home/nathan/.cache/go-build/16/16d3b3dc5dca6bf7f584731b8b542153ff61e0c555ff1cd863327f882730942c-d/ollama runner --ollama-engine --port 41727"
time=2025-11-12T10:10:44.817-05:00 level=INFO source=types.go:42 msg="inference compute" id=868012e2-0000-0000-0e00-000000000000 filter_id="" library=Vulkan compute=0.0 name=Vulkan0 description="Intel(R) Arc(tm) Pro B50 Graphics (BMG G21)" libdirs=ollama driver=0.0 pci_id=0000:0e:00.0 type=discrete total="15.9 GiB" available="12.9 GiB"
time=2025-11-12T10:10:44.818-05:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="15.9 GiB" threshold="20.0 GiB"

Is there anything you want tested?


@dhiltgen commented on GitHub (Nov 14, 2025):

In 0.12.11, Vulkan is now included in the official binaries, but it is still experimental. To enable it, set OLLAMA_VULKAN=1 for the server: https://github.com/ollama/ollama/blob/main/docs/faq.mdx#how-do-i-configure-ollama-server
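For example (a sketch of the FAQ's approach; service and file names assume a default install):

# Windows (PowerShell): persist the variable for the user, then restart the Ollama app
setx OLLAMA_VULKAN 1

# Linux (systemd): add the variable to the service and restart
sudo systemctl edit ollama.service    # add: Environment="OLLAMA_VULKAN=1"
sudo systemctl restart ollama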


@ndragon798 commented on GitHub (Nov 18, 2025):

@dhiltgen do I still need to set a VULKAN_SDK env var? I'm unable to get my B50 recognized with 0.12.11 with the OLLAMA_VULKAN=1 env var set.

Nov 18 11:35:24 Nathan-PC ollama[185675]: time=2025-11-18T11:35:24.720-05:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Nov 18 11:35:24 Nathan-PC ollama[185675]: time=2025-11-18T11:35:24.722-05:00 level=INFO source=images.go:522 msg="total blobs: 13"
Nov 18 11:35:24 Nathan-PC ollama[185675]: time=2025-11-18T11:35:24.722-05:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
Nov 18 11:35:24 Nathan-PC ollama[185675]: time=2025-11-18T11:35:24.722-05:00 level=INFO source=routes.go:1597 msg="Listening on [::]:11434 (version 0.12.11)"
Nov 18 11:35:24 Nathan-PC ollama[185675]: time=2025-11-18T11:35:24.723-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Nov 18 11:35:24 Nathan-PC ollama[185675]: time=2025-11-18T11:35:24.724-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 34607"
Nov 18 11:35:24 Nathan-PC ollama[185675]: time=2025-11-18T11:35:24.890-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 36321"
Nov 18 11:35:25 Nathan-PC ollama[185675]: time=2025-11-18T11:35:25.048-05:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="62.7 GiB" available="20.6 GiB"
Nov 18 11:35:25 Nathan-PC ollama[185675]: time=2025-11-18T11:35:25.048-05:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"
Nov 18 11:37:17 Nathan-PC ollama[185675]: [GIN] 2025/11/18 - 11:37:17 | 200 |      84.189µs |       127.0.0.1 | GET      "/api/version"

@rick-github commented on GitHub (Nov 18, 2025):

The Linux tarball for 0.12.11 was accidentally released without the Vulkan libraries (#13104: https://github.com/ollama/ollama/issues/13104). 0.12.12 will fix this. If you want to try Vulkan in the meantime, you can build the binary or install the libraries (https://github.com/rick-github/assets/raw/refs/heads/main/vulkan.tgz).
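A sketch of the library workaround, assuming the archive unpacks into the vulkan/ libdir layout that discovery scans (inspect the contents before extracting):

curl -LO https://github.com/rick-github/assets/raw/refs/heads/main/vulkan.tgz
tar -tzf vulkan.tgz                                 # verify the layout first
sudo tar -C /usr/local/lib/ollama -xzf vulkan.tgz   # assumed install prefix
sudo systemctl restart ollama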


@dhiltgen commented on GitHub (Dec 5, 2025):

@ndragon798 please update to the latest version, or the 0.13.1 RC https://github.com/ollama/ollama/releases and see if it correctly discovers your GPU. If not, please run the server with OLLAMA_DEBUG=2 set and share the logs so we can see what's going wrong during GPU discovery.

@deep1305 are you still having trouble getting it to detect your GPU, or is your problem resolved now?
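For reference, a sketch of capturing those debug logs on a Linux systemd install (adjust for other setups):

# enable verbose discovery logging, restart, then save the startup log
sudo systemctl edit ollama.service    # add: Environment="OLLAMA_DEBUG=2"
sudo systemctl restart ollama
journalctl -u ollama -b --no-pager > ollama-debug.log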


@hw762 commented on GitHub (Dec 31, 2025):

I have a similar issue, also with an Intel GPU (Intel(R) UHD Graphics (CML GT2)):

Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.288-05:00 level=INFO source=routes.go:1554 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.290-05:00 level=INFO source=images.go:493 msg="total blobs: 12"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.291-05:00 level=INFO source=images.go:500 msg="total unused blobs removed: 0"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.292-05:00 level=INFO source=routes.go:1607 msg="Listening on 127.0.0.1:11434 (version 0.13.5)"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.292-05:00 level=DEBUG source=sched.go:120 msg="starting llm scheduler"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.294-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.295-05:00 level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extraEnvs=map[]
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.298-05:00 level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 41351"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.298-05:00 level=DEBUG source=server.go:430 msg=subprocess PATH=/home/prototype/.local/bin:/home/prototype/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/home/prototype/.dotnet/tools OLLAMA_VULKAN=1 OLLAMA_DEBUG=2 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.331-05:00 level=INFO source=runner.go:1405 msg="starting ollama engine"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.332-05:00 level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:41351"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.341-05:00 level=DEBUG source=gguf.go:589 msg=general.architecture type=string
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.341-05:00 level=DEBUG source=gguf.go:589 msg=tokenizer.ggml.model type=string
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.341-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.341-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.341-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.file_type default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.342-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.name default=""
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.342-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.description default=""
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.342-05:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.342-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.345-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v12
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.354-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.354-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.block_count default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.354-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.pooling_type default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.354-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.expert_count default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.354-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.354-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.354-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.354-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.354-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.354-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.354-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.354-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.354-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.354-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.pre default=""
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.355-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.block_count default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.355-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.embedding_length default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.355-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.attention.head_count default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.355-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.attention.head_count_kv default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.355-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.attention.key_length default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.355-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.rope.dimension_count default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.355-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.355-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.rope.freq_base default=100000
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.355-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.rope.scaling.factor default=1
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.355-05:00 level=DEBUG source=runner.go:1380 msg="dummy model load took" duration=14.639116ms
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.355-05:00 level=DEBUG source=runner.go:1385 msg="gathering device infos took" duration=1.677µs
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.356-05:00 level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" devices=[]
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.356-05:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=61.45168ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extra_envs=map[]
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.356-05:00 level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" extraEnvs=map[]
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.357-05:00 level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 39499"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.357-05:00 level=DEBUG source=server.go:430 msg=subprocess PATH=/home/prototype/.local/bin:/home/prototype/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/home/prototype/.dotnet/tools OLLAMA_VULKAN=1 OLLAMA_DEBUG=2 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v13 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v13
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.389-05:00 level=INFO source=runner.go:1405 msg="starting ollama engine"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.390-05:00 level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:39499"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.399-05:00 level=DEBUG source=gguf.go:589 msg=general.architecture type=string
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.399-05:00 level=DEBUG source=gguf.go:589 msg=tokenizer.ggml.model type=string
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.399-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.399-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.399-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.file_type default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.399-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.name default=""
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.399-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.description default=""
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.399-05:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.399-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.403-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v13
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.411-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.412-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.block_count default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.412-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.pooling_type default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.412-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.expert_count default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.412-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.412-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.412-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.412-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.412-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.412-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.412-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.412-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.412-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.412-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.pre default=""
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.412-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.block_count default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.412-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.embedding_length default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.412-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.attention.head_count default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.412-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.attention.head_count_kv default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.412-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.attention.key_length default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.412-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.rope.dimension_count default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.412-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.412-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.rope.freq_base default=100000
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.413-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.rope.scaling.factor default=1
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.413-05:00 level=DEBUG source=runner.go:1380 msg="dummy model load took" duration=14.073903ms
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.413-05:00 level=DEBUG source=runner.go:1385 msg="gathering device infos took" duration=1.55µs
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.413-05:00 level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" devices=[]
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.413-05:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=57.460278ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" extra_envs=map[]
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.414-05:00 level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/vulkan]" extraEnvs=map[]
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.415-05:00 level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 36889"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.415-05:00 level=DEBUG source=server.go:430 msg=subprocess PATH=/home/prototype/.local/bin:/home/prototype/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/home/prototype/.dotnet/tools OLLAMA_VULKAN=1 OLLAMA_DEBUG=2 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/vulkan OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/vulkan
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.447-05:00 level=INFO source=runner.go:1405 msg="starting ollama engine"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.448-05:00 level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:36889"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.457-05:00 level=DEBUG source=gguf.go:589 msg=general.architecture type=string
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.457-05:00 level=DEBUG source=gguf.go:589 msg=tokenizer.ggml.model type=string
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.457-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.457-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.457-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.file_type default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.457-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.name default=""
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.457-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.description default=""
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.457-05:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.457-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.460-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/vulkan
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.468-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.468-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.block_count default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.468-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.pooling_type default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.468-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.expert_count default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.468-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.468-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.468-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.468-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.468-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.468-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.468-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.468-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.468-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.468-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=tokenizer.ggml.pre default=""
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.469-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.block_count default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.469-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.embedding_length default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.469-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.attention.head_count default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.469-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.attention.head_count_kv default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.469-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.attention.key_length default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.469-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.rope.dimension_count default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.469-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.469-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.rope.freq_base default=100000
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.469-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.rope.scaling.factor default=1
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.469-05:00 level=DEBUG source=runner.go:1380 msg="dummy model load took" duration=12.713981ms
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.469-05:00 level=DEBUG source=runner.go:1385 msg="gathering device infos took" duration=1.909µs
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.470-05:00 level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/vulkan]" devices=[]
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.470-05:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=56.246277ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/vulkan]" extra_envs=map[]
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.470-05:00 level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=0
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.470-05:00 level=TRACE source=runner.go:174 msg="supported GPU library combinations before filtering" supported=map[]
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.470-05:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=177.760676ms
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.471-05:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="38.8 GiB" available="33.3 GiB"
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.471-05:00 level=INFO source=routes.go:1648 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"

The GPU is a few years old; I don't know if it has any compatibility issues.
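One way to check what the driver actually exposes for that iGPU (a sketch; CML GT2 Vulkan support depends on the Mesa version):

# list the devices the Vulkan loader sees, with driver and API versions
vulkaninfo --summary | grep -iE 'deviceName|driverName|apiVersion'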

level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.embedding_length default=0 Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.469-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.attention.head_count default=0 Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.469-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.attention.head_count_kv default=0 Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.469-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.attention.key_length default=0 Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.469-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.rope.dimension_count default=0 Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.469-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0 Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.469-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.rope.freq_base default=100000 Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.469-05:00 level=DEBUG source=ggml.go:282 msg="key with type not found" key=llama.rope.scaling.factor default=1 Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.469-05:00 level=DEBUG source=runner.go:1380 msg="dummy model load took" duration=12.713981ms Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.469-05:00 level=DEBUG source=runner.go:1385 msg="gathering device infos took" duration=1.909µs Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.470-05:00 level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/vulkan]" devices=[] Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.470-05:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=56.246277ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/vulkan]" extra_envs=map[] Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.470-05:00 level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=0 Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.470-05:00 level=TRACE source=runner.go:174 msg="supported GPU library combinations before filtering" supported=map[] Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.470-05:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=177.760676ms Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.471-05:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="38.8 GiB" available="33.3 GiB" Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.471-05:00 level=INFO source=routes.go:1648 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB" ``` It's a few years old, I don't know if it has any compatibility issue.
Author
Owner

@deep1305 commented on GitHub (Dec 31, 2025):

@dhiltgen Yes, I am still facing the issue of enabling Vulkan for LLM models.

Author
Owner

@hw762 commented on GitHub (Dec 31, 2025):

I had a quick look at the Vulkan backend code for device enumeration and checked my `vulkaninfo` output. My GPU appears to support most of the required extensions (or at least those the code seems to require; I didn't read it in full), so I don't know why it isn't detected:

[vulkaninfo.txt](https://github.com/user-attachments/files/24395664/vulkaninfo.txt)
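
For anyone cross-checking: a quick way to pull the device summary and extension list out of `vulkaninfo` might look like the sketch below. The extension names in the grep are illustrative examples, not an authoritative list of what ggml's Vulkan backend requires.

```
# Device/driver summary (name, API version, driver version).
vulkaninfo --summary

# Illustrative check for a few extensions the ggml Vulkan backend is
# often said to probe; these names are examples, not a definitive list.
vulkaninfo 2>/dev/null | grep -E 'VK_KHR_16bit_storage|VK_KHR_shader_float16_int8|VK_EXT_subgroup_size_control'
```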

Author
Owner

@rick-github commented on GitHub (Dec 31, 2025):

```
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.342-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.345-05:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v12
Dec 30 23:28:04 Prototype ollama[140924]: time=2025-12-30T23:28:04.354-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
```

The ollama server is not loading any acceleration backends, CPU or GPU. What's the output of the following commands:

```
ls -lR /usr/local/lib/ollama
lscpu
```
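
A rough way to confirm which backends the server actually loaded on a systemd install (matching the journald-style log lines above) could be:

```
# Sketch: filter recent server logs for backend load messages.
# Assumes the service unit is named "ollama"; adjust if yours differs.
journalctl -u ollama --since "10 min ago" | grep -E 'load_backend|ggml backend load'
```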
Author
Owner

@hw762 commented on GitHub (Dec 31, 2025):

For me:

```
prototype@Prototype:~$ ls -lR /usr/local/lib/ollama
lscpu
/usr/local/lib/ollama:
total 6468
drwxr-xr-x. 2 root root     188 Dec 18 16:26 cuda_v12
drwxr-xr-x. 2 root root    4096 Dec 18 16:22 cuda_v13
lrwxrwxrwx. 1 root root      17 Dec 18 16:07 libggml-base.so -> libggml-base.so.0
lrwxrwxrwx. 1 root root      21 Dec 18 16:07 libggml-base.so.0 -> libggml-base.so.0.0.0
-rwxr-xr-x. 1 root root  744056 Dec 18 16:07 libggml-base.so.0.0.0
-rwxr-xr-x. 1 root root  873912 Dec 18 16:07 libggml-cpu-alderlake.so
-rwxr-xr-x. 1 root root  873912 Dec 18 16:07 libggml-cpu-haswell.so
-rwxr-xr-x. 1 root root 1009080 Dec 18 16:07 libggml-cpu-icelake.so
-rwxr-xr-x. 1 root root  820728 Dec 18 16:07 libggml-cpu-sandybridge.so
-rwxr-xr-x. 1 root root 1009080 Dec 18 16:07 libggml-cpu-skylakex.so
-rwxr-xr-x. 1 root root  636536 Dec 18 16:07 libggml-cpu-sse42.so
-rwxr-xr-x. 1 root root  632472 Dec 18 16:07 libggml-cpu-x64.so
drwxr-xr-x. 2 root root      97 Dec 18 16:07 vulkan

/usr/local/lib/ollama/cuda_v12:
total 2477716
lrwxrwxrwx. 1 root root         23 Dec 18 16:26 libcublasLt.so.12 -> libcublasLt.so.12.8.4.1
-rwxr-xr-x. 1 root root  751771728 Jul  7  2015 libcublasLt.so.12.8.4.1
lrwxrwxrwx. 1 root root         21 Dec 18 16:26 libcublas.so.12 -> libcublas.so.12.8.4.1
-rwxr-xr-x. 1 root root  116388640 Jul  7  2015 libcublas.so.12.8.4.1
lrwxrwxrwx. 1 root root         20 Dec 18 16:26 libcudart.so.12 -> libcudart.so.12.8.90
-rwxr-xr-x. 1 root root     728800 Jul  7  2015 libcudart.so.12.8.90
-rwxr-xr-x. 1 root root 1668281616 Dec 18 16:26 libggml-cuda.so

/usr/local/lib/ollama/cuda_v13:
total 949152
lrwxrwxrwx. 1 root root        23 Dec 18 16:22 libcublasLt.so.13 -> libcublasLt.so.13.1.0.3
-rwxr-xr-x. 1 root root 541595600 Jul  7  2015 libcublasLt.so.13.1.0.3
lrwxrwxrwx. 1 root root        21 Dec 18 16:22 libcublas.so.13 -> libcublas.so.13.1.0.3
-rwxr-xr-x. 1 root root  54177976 Jul  7  2015 libcublas.so.13.1.0.3
lrwxrwxrwx. 1 root root        20 Dec 18 16:22 libcudart.so.13 -> libcudart.so.13.0.96
-rwxr-xr-x. 1 root root    704288 Jul  7  2015 libcudart.so.13.0.96
-rwxr-xr-x. 1 root root 375444752 Dec 18 16:22 libggml-cuda.so

/usr/local/lib/ollama/vulkan:
total 55364
-rwxr-xr-x. 1 root root 52220200 Dec 18 16:07 libggml-vulkan.so
lrwxrwxrwx. 1 root root       20 Dec 18 16:07 libvulkan.so.1 -> libvulkan.so.1.4.321
-rwxr-xr-x. 1 root root  4466776 Dec 18 16:06 libvulkan.so.1.4.321
Architecture:                x86_64
  CPU op-mode(s):            32-bit, 64-bit
  Address sizes:             39 bits physical, 48 bits virtual
  Byte Order:                Little Endian
CPU(s):                      8
  On-line CPU(s) list:       0-7
Vendor ID:                   GenuineIntel
  Model name:                Intel(R) Core(TM) i7-10510U CPU @ 1.80GHz
    CPU family:              6
    Model:                   142
    Thread(s) per core:      2
    Core(s) per socket:      4
    Socket(s):               1
    Stepping:                12
    CPU(s) scaling MHz:      22%
    CPU max MHz:             4900.0000
    CPU min MHz:             400.0000
    BogoMIPS:                4599.93
    Flags:                   fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36
                              clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdt
                             scp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology n
                             onstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est
                              tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popc
                             nt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefet
                             ch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpr
                             iority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms inv
                             pcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xs
                             aves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi
                              md_clear flush_l1d arch_capabilities
Virtualization features:     
  Virtualization:            VT-x
Caches (sum of all):         
  L1d:                       128 KiB (4 instances)
  L1i:                       128 KiB (4 instances)
  L2:                        1 MiB (4 instances)
  L3:                        8 MiB (1 instance)
NUMA:                        
  NUMA node(s):              1
  NUMA node0 CPU(s):         0-7
Vulnerabilities:             
  Gather data sampling:      Mitigation; Microcode
  Indirect target selection: Mitigation; Aligned branch/return thunks
  Itlb multihit:             KVM: Mitigation: Split huge pages
  L1tf:                      Not affected
  Mds:                       Not affected
  Meltdown:                  Not affected
  Mmio stale data:           Mitigation; Clear CPU buffers; SMT vulnerable
  Old microcode:             Not affected
  Reg file data sampling:    Not affected
  Retbleed:                  Mitigation; Enhanced IBRS
  Spec rstack overflow:      Not affected
  Spec store bypass:         Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:                Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:                Mitigation; Enhanced / Automatic IBRS; IBPB conditional; PBRSB-eIBRS SW
                              sequence; BHI SW loop, KVM SW loop
  Srbds:                     Mitigation; Microcode
  Tsa:                       Not affected
  Tsx async abort:           Not affected
  Vmscape:                   Mitigation; IBPB before exit to userspace
```
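
Since the `vulkan` directory and `libggml-vulkan.so` are clearly present, one plausible next check (a sketch, assuming the paths in the listing above) is whether the backend has unresolved shared-library dependencies, which can make it fail to load silently:

```
# Sketch: look for unresolved shared-library dependencies of the Vulkan backend.
# Mirror the LD_LIBRARY_PATH the server sets (visible in the logs above) so
# libraries shipped alongside the backend don't show up as false positives.
LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/vulkan \
  ldd /usr/local/lib/ollama/vulkan/libggml-vulkan.so | grep -i 'not found'
```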
Author
Owner

@rick-github commented on GitHub (Mar 19, 2026):

Sorry this fell through the cracks.

@hw762 Despite having backends available, the runner was unable to initialize any devices. Try running the server with `OLLAMA_DEBUG=2` to show more information during device discovery.

@deep1305 Similarly, enable more debugging and post the log.
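
A minimal way to capture such a log on a systemd-based Linux install might be the sketch below (paths match the listing earlier in this thread):

```
# Stop the service, then run the server in the foreground with verbose
# discovery logging and the Vulkan backend enabled, saving the output.
sudo systemctl stop ollama
OLLAMA_DEBUG=2 OLLAMA_VULKAN=1 /usr/local/bin/ollama serve 2>&1 | tee /tmp/ollama-debug.log
```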

Reference: github-starred/ollama#70684