[GH-ISSUE #13498] [ollama-cuda13-bin] nvidia bootstrap discovery not working anymore - Arch Linux #8901

Closed
opened 2026-04-12 21:42:15 -05:00 by GiteaMirror · 12 comments

Originally created by @ovflowd on GitHub (Dec 16, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13498

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Ollama fails to identify any GPU whatsoever. This seems to be a recent regression.

Relevant log output

Ollama Service Logs

Dec 16 13:48:06 ANGELTHESIS systemd[1]: Started Ollama Service.
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.347+01:00 level=INFO source=routes.go:1554 msg="server config" env="map[CUDA_VISIBLE_DEVICES:1 GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES:-1 HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:24h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/mnt/Data/Models/ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.347+01:00 level=INFO source=images.go:522 msg="total blobs: 33"
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.347+01:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.348+01:00 level=INFO source=routes.go:1607 msg="Listening on [::]:11434 (version 0.13.4)"
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.348+01:00 level=DEBUG source=sched.go:120 msg="starting llm scheduler"
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.348+01:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.348+01:00 level=WARN source=runner.go:485 msg="user overrode visible devices" CUDA_VISIBLE_DEVICES=1
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.348+01:00 level=WARN source=runner.go:485 msg="user overrode visible devices" HIP_VISIBLE_DEVICES=-1
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.348+01:00 level=WARN source=runner.go:489 msg="if GPUs are not correctly discovered, unset and try again"
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.348+01:00 level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs=[/usr/lib/ollama] extraEnvs=map[]
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.348+01:00 level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 42033"
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.348+01:00 level=DEBUG source=server.go:430 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/bin OLLAMA_MODELS=/mnt/Data/Models/ollama CUDA_VISIBLE_DEVICES=1 CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_LAUNCH_BLOCKING=0 CUDA_MODULE_LOADING=LAZY CUDA_MODULE_DATA_LOADING=LAZY CUDA_CACHE_MAXSIZE=2147483648 CUDA_CACHE_PATH=/var/cache/cuda CUDA_LOG_FILE=/var/log/cuda.log HIP_VISIBLE_DEVICES=-1 OLLAMA_HOST=http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE=24h OLLAMA_LOAD_TIMEOUT=5m OLLAMA_CONTEXT_LENGTH=4096 OLLAMA_MAX_LOADED_MODELS=0 OLLAMA_GPU_OVERHEAD=0 OLLAMA_MAX_QUEUE=512 OLLAMA_NUM_PARALLEL=1 OLLAMA_NOHISTORY=false OLLAMA_NOPRUNE=false OLLAMA_FLASH_ATTENTION=true OLLAMA_SCHED_SPREAD=false OLLAMA_DEBUG=2 LD_LIBRARY_PATH=/usr/lib/ollama OLLAMA_LIBRARY_PATH=/usr/lib/ollama
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.352+01:00 level=INFO source=runner.go:1405 msg="starting ollama engine"
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.353+01:00 level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:42033"
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.359+01:00 level=DEBUG source=gguf.go:589 msg=general.architecture type=string
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.359+01:00 level=DEBUG source=gguf.go:589 msg=tokenizer.ggml.model type=string
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.359+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=general.alignment default=32
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.359+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=general.alignment default=32
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.359+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=general.file_type default=0
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.359+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=general.name default=""
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.359+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=general.description default=""
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.359+01:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.359+01:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=llama.block_count default=0
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=llama.pooling_type default=0
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=llama.expert_count default=0
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=tokenizer.ggml.pre default=""
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=llama.block_count default=0
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=llama.embedding_length default=0
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=llama.attention.head_count default=0
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=llama.attention.head_count_kv default=0
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=llama.attention.key_length default=0
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=llama.rope.dimension_count default=0
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=llama.rope.freq_base default=100000
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=ggml.go:281 msg="key with type not found" key=llama.rope.scaling.factor default=1
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=runner.go:1380 msg="dummy model load took" duration=1.279879ms
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=runner.go:1385 msg="gathering device infos took" duration=390ns
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH=[/usr/lib/ollama] devices=[]
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=12.44253ms OLLAMA_LIBRARY_PATH=[/usr/lib/ollama] extra_envs=map[]
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=0
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=TRACE source=runner.go:174 msg="supported GPU library combinations before filtering" supported=map[]
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=12.54612ms
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="42.8 GiB" available="33.6 GiB"
Dec 16 13:48:06 ANGELTHESIS ollama[41154]: time=2025-12-16T13:48:06.360+01:00 level=INFO source=routes.go:1648 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"

ollama.conf

# CUDA Environment Variables for Ollama Optimization
CUDA_VISIBLE_DEVICES=1
CUDA_DEVICE_ORDER=PCI_BUS_ID
CUDA_LAUNCH_BLOCKING=0
CUDA_MODULE_LOADING=LAZY
CUDA_MODULE_DATA_LOADING=LAZY
CUDA_CACHE_MAXSIZE=2147483648
CUDA_CACHE_PATH=/var/cache/cuda
CUDA_LOG_FILE=/var/log/cuda.log

# AMD Environment Variables
HIP_VISIBLE_DEVICES=-1

# Ollama Environment Variables
OLLAMA_MODELS=/mnt/Data/Models/ollama
OLLAMA_HOST="http://0.0.0.0:11434"
OLLAMA_KEEP_ALIVE="24h"
OLLAMA_LOAD_TIMEOUT="5m"
OLLAMA_CONTEXT_LENGTH=4096
OLLAMA_MAX_LOADED_MODELS=0
OLLAMA_GPU_OVERHEAD=0
OLLAMA_MAX_QUEUE=512
OLLAMA_NUM_PARALLEL=1
OLLAMA_NOHISTORY=false
OLLAMA_NOPRUNE=false
OLLAMA_FLASH_ATTENTION=true
OLLAMA_SCHED_SPREAD=false
OLLAMA_DEBUG=2

nvidia-smi

Tue Dec 16 13:51:26 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.119.02             Driver Version: 580.119.02     CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 5070 Ti     Off |   00000000:01:00.0 Off |                  N/A |
|  0%   36C    P8              4W /  280W |      50MiB /  16303MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA RTX 2000 Ada Gene...    Off |   00000000:03:00.0 Off |                  Off |
| 30%   36C    P8              5W /   70W |       5MiB /  16380MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.13.4

GiteaMirror added the install, bug labels 2026-04-12 21:42:15 -05:00

@dhiltgen commented on GitHub (Dec 16, 2025):

Based on the logs, it seems like you aren't running the official binaries from Ollama but another packaged distribution. Is that correct? What happens if you unset CUDA_VISIBLE_DEVICES and HIP_VISIBLE_DEVICES? If you are using another packaged binary, could you try the official binaries from https://github.com/ollama/ollama/releases to see if they have the same discovery problem on your system?

Can you also share the contents of /usr/lib/ollama on your system with an ls -l?

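For a systemd-managed install, one way to try unsetting them is a drop-in override. This is a sketch: UnsetEnvironment= is a standard systemd [Service] directive, though the Arch package may wire the variables in through ollama.conf differently.

# Run: sudo systemctl edit ollama.service, then add:
[Service]
UnsetEnvironment=CUDA_VISIBLE_DEVICES HIP_VISIBLE_DEVICES

# Then restart and re-check the discovery logs:
# sudo systemctl restart ollama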

@ovflowd commented on GitHub (Dec 16, 2025):

Hey @dhiltgen, to my understanding this AUR package downloads the official release binaries for Ollama. You can check that in the PKGBUILD of ollama-cuda13-bin: https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=ollama-bin#n23

Unsetting those variables also has no effect.


@ovflowd commented on GitHub (Dec 16, 2025):

ls -l shows:

❯ ls -l /usr/lib/ollama
total 955168
lrwxrwxrwx 1 root root        23 Dec 16 13:35 libcublasLt.so.13 -> libcublasLt.so.13.1.0.3
-rwxr-xr-x 1 root root 541595600 Dec 16 13:35 libcublasLt.so.13.1.0.3
lrwxrwxrwx 1 root root        21 Dec 16 13:35 libcublas.so.13 -> libcublas.so.13.1.0.3
-rwxr-xr-x 1 root root  54177976 Dec 16 13:35 libcublas.so.13.1.0.3
lrwxrwxrwx 1 root root        20 Dec 16 13:35 libcudart.so.13 -> libcudart.so.13.0.96
-rwxr-xr-x 1 root root    704288 Dec 16 13:35 libcudart.so.13.0.96
-rwxr-xr-x 1 root root    739960 Dec 16 13:35 libggml-base.so
-rwxr-xr-x 1 root root    873880 Dec 16 13:35 libggml-cpu-alderlake.so
-rwxr-xr-x 1 root root    873880 Dec 16 13:35 libggml-cpu-haswell.so
-rwxr-xr-x 1 root root   1004952 Dec 16 13:35 libggml-cpu-icelake.so
-rwxr-xr-x 1 root root    820728 Dec 16 13:35 libggml-cpu-sandybridge.so
-rwxr-xr-x 1 root root   1009048 Dec 16 13:35 libggml-cpu-skylakex.so
-rwxr-xr-x 1 root root    636536 Dec 16 13:35 libggml-cpu-sse42.so
-rwxr-xr-x 1 root root    632472 Dec 16 13:35 libggml-cpu-x64.so
-rwxr-xr-x 1 root root 374981904 Dec 16 13:35 libggml-cuda.so

@ovflowd commented on GitHub (Dec 16, 2025):

I'm not sure whether those files should be owned by ollama, but judging by their permissions, ollama should have access to them.


@dhiltgen commented on GitHub (Dec 16, 2025):

My suspicion is that if you switch to the official Ollama release binaries, things will start working.

To root-cause the Arch Linux packaging glitch, try running ldd on the libraries and check for missing dependencies. Based on the logs, it seems the dlopen of libggml-*.so fails, and my suspicion is that this is due to missing or incorrect dependencies.

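For example, a quick check over the libraries from the listing above might look like this (a sketch, paths assumed from that listing):

# Print any unresolved dependencies for each backend library.
for lib in /usr/lib/ollama/libggml-*.so*; do
    echo "== $lib"
    ldd "$lib" | grep "not found"
done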

@dhiltgen commented on GitHub (Dec 16, 2025):

I have a feeling it's due to this PR https://github.com/ollama/ollama/pull/13469, with the package probably not picking up the versioned libraries:

% ls -l lib/ollama/libggml-base.so*
lrwxr-xr-x  1 daniel  wheel      17 Dec 15 21:14 lib/ollama/libggml-base.so -> libggml-base.so.0
lrwxr-xr-x  1 daniel  wheel      21 Dec 15 21:14 lib/ollama/libggml-base.so.0 -> libggml-base.so.0.0.0
-rwxr-xr-x  1 daniel  wheel  739960 Dec 15 21:14 lib/ollama/libggml-base.so.0.0.0

Should be an easy fix, but I'm not sure where to submit that on Arch.

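For what it's worth, the backend libraries are linked against the versioned SONAME, so shipping only an unversioned libggml-base.so leaves the dynamic loader with nothing to resolve. This can be checked on the broken install with readelf (a sketch; the exact NEEDED entry is assumed from the symlink chain above):

# List the dynamic dependencies the CUDA backend was linked against.
readelf -d /usr/lib/ollama/libggml-cuda.so | grep NEEDED
# Expect an entry like: Shared library: [libggml-base.so.0]
# which cannot resolve if only libggml-base.so exists on disk.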

@ovflowd commented on GitHub (Dec 16, 2025):

Let me try that. The initial reason I switched to these AUR repackages was that the official Arch Linux packaging was quite outdated, but it seems to be up to date now.


@ovflowd commented on GitHub (Dec 16, 2025):

❯ sudo pacman -S ollama ollama-cuda
resolving dependencies...
looking for conflicting packages...

Package (2)                       New Version  Net Change   Download Size

cachyos-extra-znver4/ollama       0.13.3-1.1     40.26 MiB       9.83 MiB
cachyos-extra-znver4/ollama-cuda  0.13.3-1.1   1710.18 MiB     211.03 MiB

Total Download Size:    220.86 MiB
Total Installed Size:  1750.44 MiB

:: Proceed with installation? [Y/n] n

Yup, still heavily outdated. I assume the issue could come from https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=ollama-bin#n101?


@ovflowd commented on GitHub (Dec 16, 2025):

Just out of curiosity, are you saying the fix would be to add .0 to https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=ollama-bin#n42?


@ovflowd commented on GitHub (Dec 16, 2025):

Hmm, this is what's inside the Ollama release:

❯ ls -la .cache/paru/clone/ollama-bin/src/lib/ollama
total 6464
drwxr-xr-x 1 cwunder cwunder     460 Dec 16 20:36 .
drwxr-xr-x 1 cwunder cwunder      12 Dec 16 20:36 ..
drwxr-xr-x 1 cwunder cwunder     252 Dec 16 06:32 cuda_v12
drwxr-xr-x 1 cwunder cwunder     252 Dec 16 06:28 cuda_v13
lrwxrwxrwx 1 cwunder cwunder      17 Dec 16 06:14 libggml-base.so -> libggml-base.so.0
lrwxrwxrwx 1 cwunder cwunder      21 Dec 16 06:14 libggml-base.so.0 -> libggml-base.so.0.0.0
-rwxr-xr-x 1 cwunder cwunder  739960 Dec 16 06:14 libggml-base.so.0.0.0
-rwxr-xr-x 1 cwunder cwunder  873880 Dec 16 06:14 libggml-cpu-alderlake.so
-rwxr-xr-x 1 cwunder cwunder  873880 Dec 16 06:14 libggml-cpu-haswell.so
-rwxr-xr-x 1 cwunder cwunder 1004952 Dec 16 06:14 libggml-cpu-icelake.so
-rwxr-xr-x 1 cwunder cwunder  820728 Dec 16 06:14 libggml-cpu-sandybridge.so
-rwxr-xr-x 1 cwunder cwunder 1009048 Dec 16 06:14 libggml-cpu-skylakex.so
-rwxr-xr-x 1 cwunder cwunder  636536 Dec 16 06:14 libggml-cpu-sse42.so
-rwxr-xr-x 1 cwunder cwunder  632472 Dec 16 06:14 libggml-cpu-x64.so
drwxr-xr-x 1 cwunder cwunder     102 Dec 16 06:15 vulkan

So it does indeed seem to be a PKGBUILD problem with incorrect symlinking.

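If so, the likely mechanism (a guess, illustrated with hypothetical paths): GNU install dereferences symlinks, so installing the unversioned name flattens the whole chain, while cp -P preserves it:

# install follows the symlink and writes one regular file, so the
# versioned names libggml-base.so.0 and .so.0.0.0 never land on disk:
install -Dm755 lib/ollama/libggml-base.so dest/libggml-base.so

# cp -P copies the symlinks as symlinks, keeping the chain intact:
cp -P lib/ollama/libggml-base.so* dest/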

@ovflowd commented on GitHub (Dec 16, 2025):

Which traces back to what you shared in the linked PR.


@ovflowd commented on GitHub (Dec 16, 2025):

diff --git a/PKGBUILD b/PKGBUILD
index 972ca24..58e66d6 100644
--- a/PKGBUILD
+++ b/PKGBUILD
@@ -38,8 +38,12 @@ package_ollama-bin() {
     cd "${srcdir}/" || exit
 
     install -Dm755 "./bin/ollama" "${pkgdir}/usr/bin/ollama"
+    install -dm755 "${pkgdir}/usr/lib/ollama"
 
-    for lib in 'libggml-base.so' \
+    cp -P "./lib/ollama/libggml-base.so"* "${pkgdir}/usr/lib/ollama/"
+    chmod 755 "${pkgdir}/usr/lib/ollama/libggml-base.so."*
+
+    for lib in \
         'libggml-cpu-alderlake.so' \
         'libggml-cpu-haswell.so' \
         'libggml-cpu-icelake.so' \

This seems to do the trick. I can confirm it is working and detecting the GPU.

❯ ls -la /usr/lib/ollama
total 955176
drwxr-xr-x 1 root root       668 Dec 16 20:51 .
drwxr-xr-x 1 root root    210442 Dec 16 20:51 ..
lrwxrwxrwx 1 root root        23 Dec 16 20:49 libcublasLt.so.13 -> libcublasLt.so.13.1.0.3
-rwxr-xr-x 1 root root 541595600 Dec 16 20:49 libcublasLt.so.13.1.0.3
lrwxrwxrwx 1 root root        21 Dec 16 20:49 libcublas.so.13 -> libcublas.so.13.1.0.3
-rwxr-xr-x 1 root root  54177976 Dec 16 20:49 libcublas.so.13.1.0.3
lrwxrwxrwx 1 root root        20 Dec 16 20:49 libcudart.so.13 -> libcudart.so.13.0.96
-rwxr-xr-x 1 root root    704288 Dec 16 20:49 libcudart.so.13.0.96
lrwxrwxrwx 1 root root        17 Dec 16 20:49 libggml-base.so -> libggml-base.so.0
lrwxrwxrwx 1 root root        21 Dec 16 20:49 libggml-base.so.0 -> libggml-base.so.0.0.0
-rwxr-xr-x 1 root root    739960 Dec 16 20:49 libggml-base.so.0.0.0
-rwxr-xr-x 1 root root    873880 Dec 16 20:49 libggml-cpu-alderlake.so
-rwxr-xr-x 1 root root    873880 Dec 16 20:49 libggml-cpu-haswell.so
-rwxr-xr-x 1 root root   1004952 Dec 16 20:49 libggml-cpu-icelake.so
-rwxr-xr-x 1 root root    820728 Dec 16 20:49 libggml-cpu-sandybridge.so
-rwxr-xr-x 1 root root   1009048 Dec 16 20:49 libggml-cpu-skylakex.so
-rwxr-xr-x 1 root root    636536 Dec 16 20:49 libggml-cpu-sse42.so
-rwxr-xr-x 1 root root    632472 Dec 16 20:49 libggml-cpu-x64.so
-rwxr-xr-x 1 root root 374981904 Dec 16 20:49 libggml-cuda.so

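For anyone hitting the same thing, a quick way to verify discovery after rebuilding (grepping for the same log line that appeared in the original report):

# Restart and confirm a CUDA device shows up instead of id=cpu.
sudo systemctl restart ollama
journalctl -u ollama -b | grep "inference compute"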
We can close this issue, as it has nothing to do with upstream Ollama. Thanks for the guidance!
