[GH-ISSUE #14527] Support for AMD Radeon 760M (RDNA 3 iGPU) GPU Offloading #55937

Closed
opened 2026-04-29 09:58:20 -05:00 by GiteaMirror · 4 comments

Originally created by @StigThomsen on GitHub (Mar 1, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14527

Summary:
Ollama does not detect or support GPU offloading for the AMD Radeon 760M (RDNA 3 integrated GPU), despite ROCm 7.2 and Vulkan being correctly installed and functional. Ollama silently falls back to CPU mode with no errors, even when OLLAMA_VULKAN=1 and other relevant environment variables are set.
Expected Behavior:

Ollama should detect the Radeon 760M (RDNA 3 iGPU) and offload layers to the GPU when OLLAMA_VULKAN=1 is enabled.
Logs should indicate successful GPU detection and layer offloading (e.g., offloaded X/41 layers to GPU).
Actual Behavior:

Ollama logs show devices=[] and total_vram="0 B", indicating no GPU was detected.
Ollama falls back to CPU-only mode (id=cpu) with no errors or warnings about GPU incompatibility.
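For reference, a quick way to confirm the CPU fallback from the client side (assuming the `ollama ps` output format of recent releases):

```
# Load any model, then check placement; the PROCESSOR column reads
# "100% CPU" when no layers were offloaded to a GPU.
ollama run <model> --verbose "hello"
ollama ps
```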

Environment Details

OS: Ubuntu 24.04 (Noble Numbat)
Kernel: 6.8.0-101-generic
CPU: AMD Ryzen 5 7640HS (6 cores/12 threads)
GPU: AMD Radeon 760M (RDNA 3, integrated)
RAM: 32GB
ROCm: 7.2.0
Vulkan: Working (confirmed with vulkaninfo | grep "GPU id")
Ollama Version: 0.17.4

Ollama startup logs (snippets)
level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/vulkan]" devices=[]
level=INFO source=routes.go:1768 msg="vram-based default context" total_vram="0 B" default_num_ctx=4096
level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="25.2 GiB" available="23.7 GiB"

vulkaninfo | grep "GPU id"
'DISPLAY' environment variable not set... skipping surface info
GPU id = 0 (AMD Radeon Graphics (RADV PHOENIX))
GPU id = 1 (llvmpipe (LLVM 20.1.2, 256 bits))
GPU id = 0 (AMD Radeon Graphics (RADV PHOENIX))
GPU id = 1 (llvmpipe (LLVM 20.1.2, 256 bits))
GPU id = 0 (AMD Radeon Graphics (RADV PHOENIX))
GPU id = 1 (llvmpipe (LLVM 20.1.2, 256 bits))
GPU id = 0 (AMD Radeon Graphics (RADV PHOENIX))
GPU id = 1 (llvmpipe (LLVM 20.1.2, 256 bits))
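Note that vulkaninfo enumerates both the hardware device (GPU id 0, RADV PHOENIX) and the llvmpipe software rasterizer (GPU id 1). As a speculative experiment — GGML_VK_VISIBLE_DEVICES appears in Ollama's server-config log below — the Vulkan backend could be pinned to the RADV device to rule out llvmpipe being selected:

```
# Speculative: restrict the ggml Vulkan backend to physical device 0 (RADV),
# so the llvmpipe software device cannot be picked up instead.
Environment="GGML_VK_VISIBLE_DEVICES=0"
```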

ROCm-SMI Output (GPU Detection)
$ rocm-smi
========================= ROCm System Management Interface =========================
=================================== Concise Info ===================================
GPU[0] : get_power_cap, Not supported on the given system
GPU  Temp (DieEdge)  AvgPwr  SCLK  MCLK     Fan  Perf  PwrCap       VRAM%  GPU%  Mem%
0    37.0c           4.185W  None  2400Mhz  0%   auto  Unsupported  1%     0%

$ rocm-smi --showmeminfo vram
========================= ROCm System Management Interface =========================
=============================== Memory Usage (Bytes) ===============================
GPU[0] : VRAM Total Memory (B): 6442450944
GPU[0] : VRAM Total Used Memory (B): 72192000
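(For reference: 6442450944 B = 6 × 1024³ B, i.e. exactly 6 GiB of system RAM is reserved for the iGPU.)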

Ollama Environment Variables
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_DEBUG=2"
Environment="AMD_LOG_LEVEL=3"
Environment="OLLAMA_AMD_GPU=1"
Environment="OLLAMA_VULKAN=1"

Additional Context

The Radeon 760M is an RDNA 3 integrated GPU with 6GB of shared VRAM (confirmed via rocm-smi).
ROCm and Vulkan are working correctly and detect the GPU.
Ollama’s GPU discovery process does not recognize the Radeon 760M as a compatible device, even with OLLAMA_VULKAN=1.

GiteaMirror added the feature request label 2026-04-29 09:58:20 -05:00

@rick-github commented on GitHub (Mar 1, 2026):

Set OLLAMA_DEBUG=2 in the server environment and post the log from the start up to and including the line that contains inference compute.
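(With the stock systemd install, a minimal sketch of that:)

```
# Add the debug variable, restart, then dump the relevant log window.
sudo systemctl edit ollama     # add: Environment="OLLAMA_DEBUG=2" under [Service]
sudo systemctl restart ollama
journalctl -u ollama --no-pager | sed -n '/Started ollama/,/inference compute/p'
```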


@StigThomsen commented on GitHub (Mar 1, 2026):

My pleasure :-)

Mar 01 11:06:25 ai-server systemd[1]: Started ollama.service - Ollama Service.
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.107Z level=INFO source=routes.go:1663 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:DEBUG-4 OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.107Z level=INFO source=routes.go:1665 msg="Ollama cloud disabled: false"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.107Z level=INFO source=images.go:473 msg="total blobs: 17"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.107Z level=INFO source=images.go:480 msg="total unused blobs removed: 0"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.108Z level=INFO source=routes.go:1718 msg="Listening on [::]:11434 (version 0.17.4)"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.108Z level=DEBUG source=sched.go:147 msg="starting llm scheduler"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.108Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.108Z level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" extraEnvs=map[]
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.109Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 42233"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.109Z level=DEBUG source=server.go:432 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin OLLAMA_HOST=0.0.0.0:11434 OLLAMA_DEBUG=2 OLLAMA_AMD_GPU=1 OLLAMA_VULKAN=1 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v13 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v13
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.117Z level=INFO source=runner.go:1411 msg="starting ollama engine"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.118Z level=INFO source=runner.go:1446 msg="Server listening on 127.0.0.1:42233"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.120Z level=DEBUG source=gguf.go:604 msg=general.architecture type=string
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.120Z level=DEBUG source=gguf.go:604 msg=tokenizer.ggml.model type=string
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.120Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.120Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.121Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.file_type default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.121Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.name default=""
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.121Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.description default=""
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.121Z level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.121Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.122Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v13
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.pooling_type default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.expert_count default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.pre default=""
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.embedding_length default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count_kv default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.key_length default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.dimension_count default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.freq_base default=100000
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.scaling.factor default=1
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=runner.go:1386 msg="dummy model load took" duration=3.093638ms
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=DEBUG source=runner.go:1391 msg="gathering device infos took" duration=291ns
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.123Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" devices=[]
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.124Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=15.025827ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" extra_envs=map[]
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.124Z level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/rocm]" extraEnvs=map[]
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.124Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 38811"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.124Z level=DEBUG source=server.go:432 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin OLLAMA_HOST=0.0.0.0:11434 OLLAMA_DEBUG=2 OLLAMA_AMD_GPU=1 OLLAMA_VULKAN=1 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/rocm OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/rocm
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.132Z level=INFO source=runner.go:1411 msg="starting ollama engine"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.133Z level=INFO source=runner.go:1446 msg="Server listening on 127.0.0.1:38811"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.135Z level=DEBUG source=gguf.go:604 msg=general.architecture type=string
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.135Z level=DEBUG source=gguf.go:604 msg=tokenizer.ggml.model type=string
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.135Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.135Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.135Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.file_type default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.136Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.name default=""
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.136Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.description default=""
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.136Z level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.136Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.137Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/rocm
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.138Z level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.138Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.pooling_type default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.expert_count default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.pre default=""
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.embedding_length default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count_kv default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.key_length default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.dimension_count default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.freq_base default=100000
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.scaling.factor default=1
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=runner.go:1386 msg="dummy model load took" duration=3.854374ms
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=runner.go:1391 msg="gathering device infos took" duration=681ns
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/rocm]" devices=[]
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=15.693312ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/rocm]" extra_envs=map[]
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/vulkan]" extraEnvs=map[]
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 34175"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.139Z level=DEBUG source=server.go:432 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin OLLAMA_HOST=0.0.0.0:11434 OLLAMA_DEBUG=2 OLLAMA_AMD_GPU=1 OLLAMA_VULKAN=1 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/vulkan OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/vulkan
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.148Z level=INFO source=runner.go:1411 msg="starting ollama engine"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.148Z level=INFO source=runner.go:1446 msg="Server listening on 127.0.0.1:34175"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.151Z level=DEBUG source=gguf.go:604 msg=general.architecture type=string
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.151Z level=DEBUG source=gguf.go:604 msg=tokenizer.ggml.model type=string
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.151Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.151Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.151Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.file_type default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.151Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.name default=""
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.151Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.description default=""
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.151Z level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.151Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.152Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/vulkan
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.pooling_type default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.expert_count default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.pre default=""
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.embedding_length default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count_kv default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.key_length default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.dimension_count default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.freq_base default=100000
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.scaling.factor default=1
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=runner.go:1386 msg="dummy model load took" duration=2.765596ms
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=runner.go:1391 msg="gathering device infos took" duration=200ns
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.154Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/vulkan]" devices=[]
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.154Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=14.350126ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/vulkan]" extra_envs=map[]
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.154Z level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extraEnvs=map[]
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.154Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 43309"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.154Z level=DEBUG source=server.go:432 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin OLLAMA_HOST=0.0.0.0:11434 OLLAMA_DEBUG=2 OLLAMA_AMD_GPU=1 OLLAMA_VULKAN=1 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.163Z level=INFO source=runner.go:1411 msg="starting ollama engine"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.163Z level=INFO source=runner.go:1446 msg="Server listening on 127.0.0.1:43309"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.165Z level=DEBUG source=gguf.go:604 msg=general.architecture type=string
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.165Z level=DEBUG source=gguf.go:604 msg=tokenizer.ggml.model type=string
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.165Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.165Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.165Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.file_type default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.165Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.name default=""
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.165Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.description default=""
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.165Z level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.165Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.166Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v12
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.pooling_type default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.expert_count default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.pre default=""
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.embedding_length default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count_kv default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.key_length default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.dimension_count default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.freq_base default=100000
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.scaling.factor default=1
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=runner.go:1386 msg="dummy model load took" duration=3.807558ms
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=runner.go:1391 msg="gathering device infos took" duration=220ns
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" devices=[]
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.169Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=14.880079ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extra_envs=map[]
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.169Z level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=0
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.169Z level=TRACE source=runner.go:174 msg="supported GPU library combinations before filtering" supported=map[]
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.169Z level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=60.604196ms
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.169Z level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="25.2 GiB" available="23.8 GiB"
Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.169Z level=INFO source=routes.go:1768 msg="vram-based default context" total_vram="0 B" default_num_ctx=4096
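Worth noting from this log: each discovery pass (cuda_v13, rocm, vulkan, cuda_v12) loads backends from its library directory, yet only the CPU backend ever registers and every pass ends in devices=[]. A simple sanity check, using the paths from the log above, is whether the GPU backend libraries are actually present in those directories:

```
# If these directories are missing or empty, discovery enumerates
# devices=[] exactly as in the log above (file names vary by release).
ls -l /usr/local/lib/ollama/vulkan/ /usr/local/lib/ollama/rocm/
```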
source=ggml.go:324 msg="key with type not found" key=llama.rope.dimension_count default=0 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.freq_base default=100000 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.scaling.factor default=1 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=runner.go:1386 msg="dummy model load took" duration=2.765596ms Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.153Z level=DEBUG source=runner.go:1391 msg="gathering device infos took" duration=200ns Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.154Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/vulkan]" devices=[] Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.154Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=14.350126ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/vulkan]" extra_envs=map[] Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.154Z level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extraEnvs=map[] Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.154Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 43309" Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.154Z level=DEBUG source=server.go:432 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin OLLAMA_HOST=0.0.0.0:11434 OLLAMA_DEBUG=2 OLLAMA_AMD_GPU=1 OLLAMA_VULKAN=1 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.163Z level=INFO source=runner.go:1411 msg="starting ollama engine" Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.163Z level=INFO source=runner.go:1446 msg="Server listening on 127.0.0.1:43309" Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.165Z level=DEBUG source=gguf.go:604 msg=general.architecture type=string Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.165Z level=DEBUG source=gguf.go:604 msg=tokenizer.ggml.model type=string Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.165Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.165Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.165Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.file_type default=0 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.165Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.name default="" Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.165Z level=DEBUG 
source=ggml.go:324 msg="key with type not found" key=general.description default="" Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.165Z level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.165Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.166Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v12 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc) Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.pooling_type default=0 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.expert_count default=0 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}" Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}" Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}" Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}" Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}" Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.pre default="" Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.embedding_length default=0 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" 
key=llama.attention.head_count default=0 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count_kv default=0 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.key_length default=0 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.dimension_count default=0 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.freq_base default=100000 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.scaling.factor default=1 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=runner.go:1386 msg="dummy model load took" duration=3.807558ms Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=DEBUG source=runner.go:1391 msg="gathering device infos took" duration=220ns Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.168Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" devices=[] Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.169Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=14.880079ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extra_envs=map[] Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.169Z level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=0 Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.169Z level=TRACE source=runner.go:174 msg="supported GPU library combinations before filtering" supported=map[] Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.169Z level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=60.604196ms Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.169Z level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="25.2 GiB" available="23.8 GiB" Mar 01 11:06:25 ai-server ollama[83615]: time=2026-03-01T11:06:25.169Z level=INFO source=routes.go:1768 msg="vram-based default context" total_vram="0 B" default_num_ctx=4096 ```
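
For anyone triaging a similar `devices=[]` result: the bootstrap above probes each bundled backend directory in turn (cuda_v13, rocm, vulkan, cuda_v12), and every runner comes back empty. As a first sanity check, the loop below (a sketch; the paths are taken from the logs above and assume a default Linux install prefix) lists which GGML backend libraries the install actually shipped:

```
# List the backend plugins Ollama's bootstrap discovery iterates over.
# Paths come from the logs above; adjust if your install prefix differs.
for d in /usr/local/lib/ollama /usr/local/lib/ollama/rocm \
         /usr/local/lib/ollama/vulkan /usr/local/lib/ollama/cuda_v12 \
         /usr/local/lib/ollama/cuda_v13; do
  echo "== $d =="
  ls "$d"/libggml-*.so* 2>/dev/null || echo "(no backend libraries found)"
done
```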
@StigThomsen commented on GitHub (Mar 2, 2026):

After hours and hours of debugging, I traced the root cause to libggml-hip.so being compiled as CUDA, not HIP:
nm -D /usr/local/lib/ollama/rocm/libggml-hip.so | grep -E "T ggml_backend"

T ggml_backend_cuda_buffer_type
T ggml_backend_cuda_get_device_count
T ggml_backend_cuda_get_device_description
T ggml_backend_cuda_get_device_memory
T ggml_backend_cuda_host_buffer_type
T ggml_backend_cuda_init
T ggml_backend_cuda_reg ← CUDA, not HIP
T ggml_backend_cuda_split_buffer_type
T ggml_backend_init
T ggml_backend_is_cuda
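
For anyone repeating this diagnosis, here is a minimal sketch of the same check as a before/after comparison (the path assumes a default Linux install, and the exact symbol set varies between builds, so treat the lists as illustrative):

```
# Snapshot the exported ggml_backend_* symbols before and after a
# reinstall, then diff the two lists to see whether the plugin changed.
nm -D /usr/local/lib/ollama/rocm/libggml-hip.so \
  | awk '$2 == "T" && $3 ~ /^ggml_backend/ { print $3 }' | sort > before.txt
# ... reinstall Ollama here ...
nm -D /usr/local/lib/ollama/rocm/libggml-hip.so \
  | awk '$2 == "T" && $3 ~ /^ggml_backend/ { print $3 }' | sort > after.txt
diff before.txt after.txt
```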

I figured this couldn't be intentional, so I re-installed Ollama, and there it was: full-on GPU offloading.

Case closed: the AMD 760M is supported through ROCm.
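
For completeness, a sketch of the reinstall-and-verify steps that correspond to the fix described above; it assumes the official Linux install script and a systemd-managed `ollama` unit (the log strings grepped for are the ones shown earlier in this issue):

```
# Re-run the official install script, which replaces the binaries and the
# bundled backend libraries under /usr/local/lib/ollama.
curl -fsSL https://ollama.com/install.sh | sh

# Restart and confirm discovery now reports a GPU instead of id=cpu.
sudo systemctl restart ollama
journalctl -u ollama -n 200 --no-pager | grep -E 'inference compute|total_vram'
```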

@singlerider commented on GitHub (Mar 10, 2026):

@StigThomsen

> So I re-installed Ollama and there it was, full-on GPU offloading.

*edit*
I got it working once the in-place re-install had finished completely.
