[GH-ISSUE #13070] Failure during GPU discovery causing system-wide crashes (6700 XT) #70714

Open
opened 2026-05-04 22:43:00 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @Ay1tsMe on GitHub (Nov 13, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13070

What is the issue?

If I have the ollama systemd service running, whether I am actively using it or not, it eventually causes a system-wide crash. I believe this is similar to https://github.com/ollama/ollama/issues/12708

I'm running Ollama on Arch through a systemd service with `HSA_OVERRIDE_GFX_VERSION=10.3.0` set so that it uses the GPU. My GPU is a 6700 XT, which I believe is not natively supported.
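For reference, the override described above is usually applied as a systemd drop-in rather than by editing the packaged unit. A sketch, assuming the Arch package installs the unit as `ollama.service` (the exact unit name may differ):

```shell
# Open (or create) a drop-in override for the service; systemd stores it
# under /etc/systemd/system/ollama.service.d/override.conf
sudo systemctl edit ollama.service

# In the editor, add:
#   [Service]
#   Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
# gfx1030 (10.3.0) is the target the RX 6700 XT (gfx1031) is commonly
# overridden to, since ROCm does not ship kernels for gfx1031.

# Reload unit files and restart the service so the variable takes effect
sudo systemctl daemon-reload
sudo systemctl restart ollama.service

# Confirm the override is visible to the running process
systemctl show ollama.service -p Environment
```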

Relevant log output

Nov 13 08:40:41 adamDesktopLinux systemd[1]: Started Ollama Service.
Nov 13 08:40:42 adamDesktopLinux ollama[1067]: time=2025-11-13T08:40:42.089+08:00 level=INFO source=routes.go:1525 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION:10.3.0 HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/var/lib/ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Nov 13 08:40:42 adamDesktopLinux ollama[1067]: time=2025-11-13T08:40:42.095+08:00 level=INFO source=images.go:522 msg="total blobs: 20"
Nov 13 08:40:42 adamDesktopLinux ollama[1067]: time=2025-11-13T08:40:42.095+08:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
Nov 13 08:40:42 adamDesktopLinux ollama[1067]: time=2025-11-13T08:40:42.096+08:00 level=INFO source=routes.go:1578 msg="Listening on [::]:11434 (version 0.12.10)"
Nov 13 08:40:42 adamDesktopLinux ollama[1067]: time=2025-11-13T08:40:42.100+08:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Nov 13 08:40:42 adamDesktopLinux ollama[1067]: time=2025-11-13T08:40:42.103+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 45795"
Nov 13 08:40:50 adamDesktopLinux ollama[1067]: time=2025-11-13T08:40:50.371+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 42007"
Nov 13 08:40:57 adamDesktopLinux ollama[1067]: time=2025-11-13T08:40:57.180+08:00 level=INFO source=types.go:42 msg="inference compute" id=0 filter_id=0 library=ROCm compute=gfx1030 name=ROCm0 description="AMD Radeon RX 6700 XT" libdirs=ollama driver=60443.48 pci_id=0000:0b:00.0 type=discrete total="12.0 GiB" available="11.9 GiB"
Nov 13 08:40:57 adamDesktopLinux ollama[1067]: time=2025-11-13T08:40:57.180+08:00 level=INFO source=routes.go:1619 msg="entering low vram mode" "total vram"="12.0 GiB" threshold="20.0 GiB"
Nov 13 10:01:42 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:42.938+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 38535"
Nov 13 10:01:45 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:45.940+08:00 level=INFO source=runner.go:442 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:0] error="failed to finish discovery before timeout"
Nov 13 10:01:45 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:45.941+08:00 level=WARN source=runner.go:334 msg="unable to refresh free memory, using old values"
Nov 13 10:01:45 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:45.990+08:00 level=INFO source=server.go:215 msg="enabling flash attention"
Nov 13 10:01:45 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:45.991+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /var/lib/ollama/blobs/sha256-a3de86cd1c132c822487ededd47a324c50491393e6565cd14bafa40d0b8e686f --port 42553"
Nov 13 10:01:45 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:45.991+08:00 level=INFO source=server.go:653 msg="loading model" "model layers"=37 requested=-1
Nov 13 10:01:45 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:45.991+08:00 level=INFO source=server.go:658 msg="system memory" total="31.2 GiB" free="24.8 GiB" free_swap="0 B"
Nov 13 10:01:45 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:45.991+08:00 level=INFO source=server.go:665 msg="gpu memory" id=0 library=ROCm available="11.5 GiB" free="11.9 GiB" minimum="457.0 MiB" overhead="0 B"
Nov 13 10:01:45 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:45.998+08:00 level=INFO source=runner.go:1349 msg="starting ollama engine"
Nov 13 10:01:45 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:45.999+08:00 level=INFO source=runner.go:1384 msg="Server listening on 127.0.0.1:42553"
Nov 13 10:01:46 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:46.002+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 13 10:01:46 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:46.020+08:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3 file_type=Q4_K_M name="Qwen3 8B" description="" num_tensors=399 num_key_values=29
Nov 13 10:01:52 adamDesktopLinux ollama[1067]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Nov 13 10:01:52 adamDesktopLinux ollama[1067]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Nov 13 10:01:52 adamDesktopLinux ollama[1067]: ggml_cuda_init: found 1 ROCm devices:
Nov 13 10:01:52 adamDesktopLinux ollama[1067]:   Device 0: AMD Radeon RX 6700 XT, gfx1030 (0x1030), VMM: no, Wave Size: 32, ID: 0
Nov 13 10:01:52 adamDesktopLinux ollama[1067]: load_backend: loaded ROCm backend from /usr/lib/ollama/libggml-hip.so
Nov 13 10:01:52 adamDesktopLinux ollama[1067]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
Nov 13 10:01:52 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:52.283+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Nov 13 10:01:52 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:52.934+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 13 10:01:52 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:52.978+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 13 10:01:52 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:52.978+08:00 level=INFO source=ggml.go:482 msg="offloading 36 repeating layers to GPU"
Nov 13 10:01:52 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:52.978+08:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
Nov 13 10:01:52 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:52.978+08:00 level=INFO source=ggml.go:494 msg="offloaded 37/37 layers to GPU"
Nov 13 10:01:52 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:52.978+08:00 level=INFO source=device.go:212 msg="model weights" device=ROCm0 size="4.5 GiB"
Nov 13 10:01:52 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:52.978+08:00 level=INFO source=device.go:217 msg="model weights" device=CPU size="333.8 MiB"
Nov 13 10:01:52 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:52.978+08:00 level=INFO source=device.go:223 msg="kv cache" device=ROCm0 size="576.0 MiB"
Nov 13 10:01:52 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:52.978+08:00 level=INFO source=device.go:234 msg="compute graph" device=ROCm0 size="113.7 MiB"
Nov 13 10:01:52 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:52.978+08:00 level=INFO source=device.go:239 msg="compute graph" device=CPU size="8.0 MiB"
Nov 13 10:01:52 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:52.978+08:00 level=INFO source=device.go:244 msg="total memory" size="5.5 GiB"
Nov 13 10:01:52 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:52.978+08:00 level=INFO source=sched.go:500 msg="loaded runners" count=1
Nov 13 10:01:52 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:52.978+08:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
Nov 13 10:01:52 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:52.979+08:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
Nov 13 10:01:57 adamDesktopLinux ollama[1067]: time=2025-11-13T10:01:57.990+08:00 level=INFO source=server.go:1289 msg="llama runner started in 12.00 seconds"
Nov 13 10:02:05 adamDesktopLinux ollama[1067]: [GIN] 2025/11/13 - 10:02:05 | 200 | 22.811687056s |       10.1.1.85 | POST     "/api/generate"
Nov 13 10:02:30 adamDesktopLinux ollama[1067]: [GIN] 2025/11/13 - 10:02:30 | 200 |  4.418546425s |       10.1.1.85 | POST     "/api/generate"
Nov 13 10:04:24 adamDesktopLinux ollama[1067]: [GIN] 2025/11/13 - 10:04:24 | 200 |  5.257841748s |       10.1.1.85 | POST     "/api/generate"
Nov 13 10:04:50 adamDesktopLinux ollama[1067]: [GIN] 2025/11/13 - 10:04:50 | 200 |  9.411218392s |       10.1.1.85 | POST     "/api/generate"
Nov 13 10:09:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:09:51.233+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 36099"
Nov 13 10:09:54 adamDesktopLinux ollama[1067]: time=2025-11-13T10:09:54.236+08:00 level=INFO source=runner.go:442 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:0] error="failed to finish discovery before timeout"
Nov 13 10:09:54 adamDesktopLinux ollama[1067]: time=2025-11-13T10:09:54.236+08:00 level=WARN source=runner.go:334 msg="unable to refresh free memory, using old values"
Nov 13 10:09:54 adamDesktopLinux ollama[1067]: time=2025-11-13T10:09:54.236+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39289"
Nov 13 10:09:55 adamDesktopLinux ollama[1067]: time=2025-11-13T10:09:55.983+08:00 level=INFO source=runner.go:442 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:0] error="failed to finish discovery before timeout"
Nov 13 10:09:55 adamDesktopLinux ollama[1067]: time=2025-11-13T10:09:55.983+08:00 level=WARN source=runner.go:334 msg="unable to refresh free memory, using old values"
Nov 13 10:10:48 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:48.871+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 45841"
Nov 13 10:10:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:51.874+08:00 level=INFO source=runner.go:442 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:0] error="failed to finish discovery before timeout"
Nov 13 10:10:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:51.874+08:00 level=WARN source=runner.go:334 msg="unable to refresh free memory, using old values"
Nov 13 10:10:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:51.925+08:00 level=INFO source=server.go:215 msg="enabling flash attention"
Nov 13 10:10:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:51.925+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /var/lib/ollama/blobs/sha256-a3de86cd1c132c822487ededd47a324c50491393e6565cd14bafa40d0b8e686f --port 40737"
Nov 13 10:10:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:51.925+08:00 level=INFO source=server.go:653 msg="loading model" "model layers"=37 requested=-1
Nov 13 10:10:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:51.925+08:00 level=INFO source=server.go:658 msg="system memory" total="31.2 GiB" free="26.8 GiB" free_swap="0 B"
Nov 13 10:10:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:51.925+08:00 level=INFO source=server.go:665 msg="gpu memory" id=0 library=ROCm available="6.1 GiB" free="6.5 GiB" minimum="457.0 MiB" overhead="0 B"
Nov 13 10:10:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:51.933+08:00 level=INFO source=runner.go:1349 msg="starting ollama engine"
Nov 13 10:10:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:51.933+08:00 level=INFO source=runner.go:1384 msg="Server listening on 127.0.0.1:40737"
Nov 13 10:10:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:51.937+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 13 10:10:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:51.955+08:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3 file_type=Q4_K_M name="Qwen3 8B" description="" num_tensors=399 num_key_values=29
Nov 13 10:10:58 adamDesktopLinux ollama[1067]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Nov 13 10:10:58 adamDesktopLinux ollama[1067]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Nov 13 10:10:58 adamDesktopLinux ollama[1067]: ggml_cuda_init: found 1 ROCm devices:
Nov 13 10:10:58 adamDesktopLinux ollama[1067]:   Device 0: AMD Radeon RX 6700 XT, gfx1030 (0x1030), VMM: no, Wave Size: 32, ID: 0
Nov 13 10:10:58 adamDesktopLinux ollama[1067]: load_backend: loaded ROCm backend from /usr/lib/ollama/libggml-hip.so
Nov 13 10:10:58 adamDesktopLinux ollama[1067]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.212+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.537+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=ggml.go:482 msg="offloading 36 repeating layers to GPU"
Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=ggml.go:494 msg="offloaded 37/37 layers to GPU"
Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=device.go:212 msg="model weights" device=ROCm0 size="4.5 GiB"
Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=device.go:217 msg="model weights" device=CPU size="333.8 MiB"
Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=device.go:223 msg="kv cache" device=ROCm0 size="576.0 MiB"
Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=device.go:234 msg="compute graph" device=ROCm0 size="113.7 MiB"
Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=device.go:239 msg="compute graph" device=CPU size="8.0 MiB"
Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=device.go:244 msg="total memory" size="5.5 GiB"
Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=sched.go:500 msg="loaded runners" count=1
Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.585+08:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
Nov 13 10:10:59 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:59.338+08:00 level=INFO source=server.go:1289 msg="llama runner started in 7.41 seconds"
Nov 13 10:11:06 adamDesktopLinux ollama[1067]: [GIN] 2025/11/13 - 10:11:06 | 200 | 17.595164693s |       10.1.1.85 | POST     "/api/generate"
Nov 13 10:11:45 adamDesktopLinux ollama[1067]: [GIN] 2025/11/13 - 10:11:45 | 200 |  8.718632209s |       10.1.1.85 | POST     "/api/generate"
Nov 13 10:12:21 adamDesktopLinux ollama[1067]: [GIN] 2025/11/13 - 10:12:21 | 200 |  3.921160707s |       10.1.1.85 | POST     "/api/generate"
Nov 13 10:12:50 adamDesktopLinux ollama[1067]: [GIN] 2025/11/13 - 10:12:50 | 200 |  7.253989195s |       10.1.1.85 | POST     "/api/generate"
Nov 13 10:13:18 adamDesktopLinux ollama[1067]: [GIN] 2025/11/13 - 10:13:18 | 200 |  6.972786869s |       10.1.1.85 | POST     "/api/generate"
Nov 13 10:13:44 adamDesktopLinux ollama[1067]: [GIN] 2025/11/13 - 10:13:44 | 200 |  3.620059969s |       10.1.1.85 | POST     "/api/generate"
Nov 13 10:14:02 adamDesktopLinux ollama[1067]: [GIN] 2025/11/13 - 10:14:02 | 200 |  4.122772056s |       10.1.1.85 | POST     "/api/generate"
Nov 13 10:14:22 adamDesktopLinux ollama[1067]: [GIN] 2025/11/13 - 10:14:22 | 200 |  5.313989744s |       10.1.1.85 | POST     "/api/generate"
Nov 13 10:14:51 adamDesktopLinux ollama[1067]: [GIN] 2025/11/13 - 10:14:51 | 200 |  5.530596584s |       10.1.1.85 | POST     "/api/generate"
Nov 13 10:19:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:19:51.739+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39105"
Nov 13 10:19:54 adamDesktopLinux ollama[1067]: time=2025-11-13T10:19:54.741+08:00 level=INFO source=runner.go:442 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:0] error="failed to finish discovery before timeout"
Nov 13 10:19:54 adamDesktopLinux ollama[1067]: time=2025-11-13T10:19:54.742+08:00 level=WARN source=runner.go:334 msg="unable to refresh free memory, using old values"
Nov 13 10:19:54 adamDesktopLinux ollama[1067]: time=2025-11-13T10:19:54.742+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 37959"
Nov 13 10:19:56 adamDesktopLinux ollama[1067]: time=2025-11-13T10:19:56.490+08:00 level=INFO source=runner.go:442 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:0] error="failed to finish discovery before timeout"
Nov 13 10:19:56 adamDesktopLinux ollama[1067]: time=2025-11-13T10:19:56.490+08:00 level=WARN source=runner.go:334 msg="unable to refresh free memory, using old values"
Nov 13 10:20:09 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:09.583+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 36529"
Nov 13 10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.585+08:00 level=INFO source=runner.go:442 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:0] error="failed to finish discovery before timeout"
Nov 13 10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.585+08:00 level=WARN source=runner.go:334 msg="unable to refresh free memory, using old values"
Nov 13 10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.636+08:00 level=INFO source=server.go:215 msg="enabling flash attention"
Nov 13 10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.636+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /var/lib/ollama/blobs/sha256-a3de86cd1c132c822487ededd47a324c50491393e6565cd14bafa40d0b8e686f --port 39655"
Nov 13 10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.636+08:00 level=INFO source=server.go:653 msg="loading model" "model layers"=37 requested=-1
Nov 13 10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.636+08:00 level=INFO source=server.go:658 msg="system memory" total="31.2 GiB" free="26.5 GiB" free_swap="0 B"
Nov 13 10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.636+08:00 level=INFO source=server.go:665 msg="gpu memory" id=0 library=ROCm available="6.1 GiB" free="6.5 GiB" minimum="457.0 MiB" overhead="0 B"
Nov 13 10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.644+08:00 level=INFO source=runner.go:1349 msg="starting ollama engine"
Nov 13 10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.644+08:00 level=INFO source=runner.go:1384 msg="Server listening on 127.0.0.1:39655"
Nov 13 10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.648+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 13 10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.667+08:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3 file_type=Q4_K_M name="Qwen3 8B" description="" num_tensors=399 num_key_values=29
Nov 13 10:20:18 adamDesktopLinux ollama[1067]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Nov 13 10:20:18 adamDesktopLinux ollama[1067]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Nov 13 10:20:18 adamDesktopLinux ollama[1067]: ggml_cuda_init: found 1 ROCm devices:
Nov 13 10:20:18 adamDesktopLinux ollama[1067]:   Device 0: AMD Radeon RX 6700 XT, gfx1030 (0x1030), VMM: no, Wave Size: 32, ID: 0
Nov 13 10:20:18 adamDesktopLinux ollama[1067]: load_backend: loaded ROCm backend from /usr/lib/ollama/libggml-hip.so
Nov 13 10:20:18 adamDesktopLinux ollama[1067]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
Nov 13 10:20:18 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:18.855+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Nov 13 10:20:19 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:19.174+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.12.10

10:10:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:51.925+08:00 level=INFO source=server.go:215 msg="enabling flash attention" Nov 13 10:10:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:51.925+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /var/lib/ollama/blobs/sha256-a3de86cd1c132c822487ededd47a324c50491393e6565cd14bafa40d0b8e686f --port 40737" Nov 13 10:10:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:51.925+08:00 level=INFO source=server.go:653 msg="loading model" "model layers"=37 requested=-1 Nov 13 10:10:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:51.925+08:00 level=INFO source=server.go:658 msg="system memory" total="31.2 GiB" free="26.8 GiB" free_swap="0 B" Nov 13 10:10:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:51.925+08:00 level=INFO source=server.go:665 msg="gpu memory" id=0 library=ROCm available="6.1 GiB" free="6.5 GiB" minimum="457.0 MiB" overhead="0 B" Nov 13 10:10:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:51.933+08:00 level=INFO source=runner.go:1349 msg="starting ollama engine" Nov 13 10:10:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:51.933+08:00 level=INFO source=runner.go:1384 msg="Server listening on 127.0.0.1:40737" Nov 13 10:10:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:51.937+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 13 10:10:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:51.955+08:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3 file_type=Q4_K_M name="Qwen3 8B" description="" num_tensors=399 num_key_values=29 Nov 13 10:10:58 adamDesktopLinux ollama[1067]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no Nov 13 10:10:58 adamDesktopLinux ollama[1067]: 
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no Nov 13 10:10:58 adamDesktopLinux ollama[1067]: ggml_cuda_init: found 1 ROCm devices: Nov 13 10:10:58 adamDesktopLinux ollama[1067]: Device 0: AMD Radeon RX 6700 XT, gfx1030 (0x1030), VMM: no, Wave Size: 32, ID: 0 Nov 13 10:10:58 adamDesktopLinux ollama[1067]: load_backend: loaded ROCm backend from /usr/lib/ollama/libggml-hip.so Nov 13 10:10:58 adamDesktopLinux ollama[1067]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.212+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc) Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.537+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=ggml.go:482 msg="offloading 36 repeating layers to GPU" Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU" Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=ggml.go:494 msg="offloaded 37/37 layers to GPU" Nov 13 10:10:58 adamDesktopLinux ollama[1067]: 
time=2025-11-13T10:10:58.578+08:00 level=INFO source=device.go:212 msg="model weights" device=ROCm0 size="4.5 GiB" Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=device.go:217 msg="model weights" device=CPU size="333.8 MiB" Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=device.go:223 msg="kv cache" device=ROCm0 size="576.0 MiB" Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=device.go:234 msg="compute graph" device=ROCm0 size="113.7 MiB" Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=device.go:239 msg="compute graph" device=CPU size="8.0 MiB" Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=device.go:244 msg="total memory" size="5.5 GiB" Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=sched.go:500 msg="loaded runners" count=1 Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.578+08:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding" Nov 13 10:10:58 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:58.585+08:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model" Nov 13 10:10:59 adamDesktopLinux ollama[1067]: time=2025-11-13T10:10:59.338+08:00 level=INFO source=server.go:1289 msg="llama runner started in 7.41 seconds" Nov 13 10:11:06 adamDesktopLinux ollama[1067]: [GIN] 2025/11/13 - 10:11:06 | 200 | 17.595164693s | 10.1.1.85 | POST "/api/generate" Nov 13 10:11:45 adamDesktopLinux ollama[1067]: [GIN] 2025/11/13 - 10:11:45 | 200 | 8.718632209s | 10.1.1.85 | POST "/api/generate" Nov 13 10:12:21 adamDesktopLinux ollama[1067]: [GIN] 2025/11/13 - 10:12:21 | 200 | 3.921160707s | 10.1.1.85 | POST "/api/generate" Nov 13 10:12:50 adamDesktopLinux 
ollama[1067]: [GIN] 2025/11/13 - 10:12:50 | 200 | 7.253989195s | 10.1.1.85 | POST "/api/generate" Nov 13 10:13:18 adamDesktopLinux ollama[1067]: [GIN] 2025/11/13 - 10:13:18 | 200 | 6.972786869s | 10.1.1.85 | POST "/api/generate" Nov 13 10:13:44 adamDesktopLinux ollama[1067]: [GIN] 2025/11/13 - 10:13:44 | 200 | 3.620059969s | 10.1.1.85 | POST "/api/generate" Nov 13 10:14:02 adamDesktopLinux ollama[1067]: [GIN] 2025/11/13 - 10:14:02 | 200 | 4.122772056s | 10.1.1.85 | POST "/api/generate" Nov 13 10:14:22 adamDesktopLinux ollama[1067]: [GIN] 2025/11/13 - 10:14:22 | 200 | 5.313989744s | 10.1.1.85 | POST "/api/generate" Nov 13 10:14:51 adamDesktopLinux ollama[1067]: [GIN] 2025/11/13 - 10:14:51 | 200 | 5.530596584s | 10.1.1.85 | POST "/api/generate" Nov 13 10:19:51 adamDesktopLinux ollama[1067]: time=2025-11-13T10:19:51.739+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39105" Nov 13 10:19:54 adamDesktopLinux ollama[1067]: time=2025-11-13T10:19:54.741+08:00 level=INFO source=runner.go:442 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:0] error="failed to finish discovery before timeout" Nov 13 10:19:54 adamDesktopLinux ollama[1067]: time=2025-11-13T10:19:54.742+08:00 level=WARN source=runner.go:334 msg="unable to refresh free memory, using old values" Nov 13 10:19:54 adamDesktopLinux ollama[1067]: time=2025-11-13T10:19:54.742+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 37959" Nov 13 10:19:56 adamDesktopLinux ollama[1067]: time=2025-11-13T10:19:56.490+08:00 level=INFO source=runner.go:442 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:0] error="failed to finish discovery before timeout" Nov 13 10:19:56 adamDesktopLinux ollama[1067]: time=2025-11-13T10:19:56.490+08:00 level=WARN source=runner.go:334 msg="unable to 
refresh free memory, using old values" Nov 13 10:20:09 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:09.583+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 36529" Nov 13 10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.585+08:00 level=INFO source=runner.go:442 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama ]" extra_envs=map[ROCR_VISIBLE_DEVICES:0] error="failed to finish discovery before timeout" Nov 13 10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.585+08:00 level=WARN source=runner.go:334 msg="unable to refresh free memory, using old values" Nov 13 10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.636+08:00 level=INFO source=server.go:215 msg="enabling flash attention" Nov 13 10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.636+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /var/lib/ollama/blobs/sha256-a3de86cd1c132c822487ededd47a324c50491393e6565cd14bafa40d0b8e686f --port 39655" Nov 13 10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.636+08:00 level=INFO source=server.go:653 msg="loading model" "model layers"=37 requested=-1 Nov 13 10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.636+08:00 level=INFO source=server.go:658 msg="system memory" total="31.2 GiB" free="26.5 GiB" free_swap="0 B" Nov 13 10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.636+08:00 level=INFO source=server.go:665 msg="gpu memory" id=0 library=ROCm available="6.1 GiB" free="6.5 GiB" minimum="457.0 MiB" overhead="0 B" Nov 13 10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.644+08:00 level=INFO source=runner.go:1349 msg="starting ollama engine" Nov 13 10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.644+08:00 level=INFO source=runner.go:1384 msg="Server listening on 127.0.0.1:39655" Nov 13 
10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.648+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 13 10:20:12 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:12.667+08:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3 file_type=Q4_K_M name="Qwen3 8B" description="" num_tensors=399 num_key_values=29 Nov 13 10:20:18 adamDesktopLinux ollama[1067]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no Nov 13 10:20:18 adamDesktopLinux ollama[1067]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no Nov 13 10:20:18 adamDesktopLinux ollama[1067]: ggml_cuda_init: found 1 ROCm devices: Nov 13 10:20:18 adamDesktopLinux ollama[1067]: Device 0: AMD Radeon RX 6700 XT, gfx1030 (0x1030), VMM: no, Wave Size: 32, ID: 0 Nov 13 10:20:18 adamDesktopLinux ollama[1067]: load_backend: loaded ROCm backend from /usr/lib/ollama/libggml-hip.so Nov 13 10:20:18 adamDesktopLinux ollama[1067]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so Nov 13 10:20:18 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:18.855+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc) Nov 13 10:20:19 adamDesktopLinux ollama[1067]: time=2025-11-13T10:20:19.174+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" ``` ### OS Linux ### GPU AMD ### CPU AMD ### Ollama version 0.12.10
GiteaMirror added the amd, bug, linux labels 2026-05-04 22:43:01 -05:00
@dhiltgen commented on GitHub (Nov 13, 2025):

A system crash typically indicates a driver bug. There have been some reports of recent AMD GPU drivers having stability issues. You might want to try downgrading (or upgrading) your driver and see if it's more reliable.

<!-- gh-comment-id:3528663931 -->
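For anyone following along on Arch, the driver rollback suggested above can be sketched as follows. This is a sketch only: on Arch the amdgpu kernel driver ships with the `linux` package and the userspace stack with `mesa`, the version string is a placeholder you'd take from your own pacman cache, and nothing here has been confirmed in this thread.

```shell
# See which older kernel/Mesa packages are still in the local cache
ls /var/cache/pacman/pkg/ | grep -E '^(linux|mesa)-'

# Install an older cached build (placeholder version -- substitute your own)
sudo pacman -U /var/cache/pacman/pkg/mesa-<older-version>-x86_64.pkg.tar.zst

# Optionally hold the package back until the regression is fixed,
# by adding it to IgnorePkg in /etc/pacman.conf:
#   IgnorePkg = mesa
```

A reboot is needed after a kernel rollback for the older amdgpu module to load.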
@dhiltgen commented on GitHub (Nov 13, 2025):

You might also consider trying the new experimental Vulkan support in 0.12.11 by setting OLLAMA_VULKAN=1 and not setting the HSA_OVERRIDE_GFX_VERSION variable, so ROCm won't run on the GPU.

<!-- gh-comment-id:3528670616 -->
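Since the reporter runs Ollama as a systemd service, the suggestion above translates into a drop-in override. This sketch assumes the stock `ollama.service` unit and that `HSA_OVERRIDE_GFX_VERSION` was previously set the same way; the drop-in contents are shown as comments because `systemctl edit` opens an editor.

```shell
# Open (or create) a drop-in override for the service
sudo systemctl edit ollama.service

# In the editor, enable Vulkan and make sure the ROCm override is gone:
#   [Service]
#   Environment="OLLAMA_VULKAN=1"
#   (remove any existing Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0" line)

# Apply the change
sudo systemctl daemon-reload
sudo systemctl restart ollama.service

# Confirm the backend in the service log
journalctl -u ollama.service -b | grep -i vulkan
```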
Reference: github-starred/ollama#70714