[GH-ISSUE #12774] 0.12.5-rocm AMD Vega 10 "failure during GPU discovery" ... error="runner crashed". It worked on 0.12.3-rocm #8473

Closed
opened 2026-04-12 21:09:57 -05:00 by GiteaMirror · 3 comments

Originally created by @diekmann on GitHub (Oct 24, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12774

What is the issue?

I have a Radeon Vega Frontier Edition AMD GPU.

Using ollama, I get the following error and the GPU is not used by ollama. Indeed, every model I load runs slowly on the CPU while the GPU sits idle:

$ podman run --detach --group-add keep-groups --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama docker.io/ollama/ollama:rocm
274175fa02fcafaf785dff2daaa46e595912c5fa2dd3c61fd56772a96943b1f9
$ podman logs ollama
time=2025-10-24T21:12:12.072Z level=INFO source=routes.go:1511 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-10-24T21:12:12.074Z level=INFO source=images.go:522 msg="total blobs: 35"
time=2025-10-24T21:12:12.074Z level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-10-24T21:12:12.074Z level=INFO source=routes.go:1564 msg="Listening on [::]:11434 (version 0.12.6)"
time=2025-10-24T21:12:12.074Z level=INFO source=runner.go:80 msg="discovering available GPUs..."
time=2025-10-24T21:12:13.835Z level=INFO source=runner.go:545 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/rocm]" extra_envs="[GGML_CUDA_INIT=1 ROCR_VISIBLE_DEVICES=GPU-0215002080b029a4]" error="runner crashed"
time=2025-10-24T21:12:13.835Z level=INFO source=types.go:129 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="62.6 GiB" available="59.6 GiB"
time=2025-10-24T21:12:13.835Z level=INFO source=routes.go:1605 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"

I see this behavior on docker.io/ollama/ollama:0.12.5-rocm and 0.12.6-rocm; I haven't tried any other 0.12 releases.
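When comparing tags, a quick way to tell whether a given container fell back to CPU is to grep its startup log for the discovery failure. A sketch; the sample line is copied from the output above, and with a live container you would pipe `podman logs ollama` into the grep instead:

```shell
# Check an ollama startup log for the ROCm discovery failure.
# The sample line below is taken from the log output in this report;
# with a running container, use: podman logs ollama | grep ...
log='time=2025-10-24T21:12:13.835Z level=INFO source=runner.go:545 msg="failure during GPU discovery" error="runner crashed"'
if printf '%s\n' "$log" | grep -q 'failure during GPU discovery'; then
  echo "GPU discovery failed: ollama will fall back to CPU"
else
  echo "no discovery failure logged"
fi
```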

Everything works absolutely fine on 0.11:

$ podman run --detach --group-add keep-groups --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama docker.io/ollama/ollama:0.11.11-rocm
6e628c33deef668d71314a9dda835b32948ad2a17f2ba80a9aa9ef55b36b70ca
$ podman logs ollama
time=2025-10-24T21:30:28.175Z level=INFO source=routes.go:1332 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-10-24T21:30:28.177Z level=INFO source=images.go:477 msg="total blobs: 35"
time=2025-10-24T21:30:28.177Z level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2025-10-24T21:30:28.177Z level=INFO source=routes.go:1385 msg="Listening on [::]:11434 (version 0.11.11)"
time=2025-10-24T21:30:28.177Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-10-24T21:30:28.179Z level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/download/linux-drivers.html" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2025-10-24T21:30:28.181Z level=INFO source=amd_linux.go:390 msg="amdgpu is supported" gpu=GPU-0215002080b029a4 gpu_type=gfx900
time=2025-10-24T21:30:28.181Z level=INFO source=types.go:131 msg="inference compute" id=GPU-0215002080b029a4 library=rocm variant="" compute=gfx900 driver=0.0 name=1002:6863 total="16.0 GiB" available="15.6 GiB"
time=2025-10-24T21:30:28.181Z level=INFO source=routes.go:1426 msg="entering low vram mode" "total vram"="16.0 GiB" threshold="20.0 GiB"

ollama ps confirms that a model can run entirely on this GPU with ollama 0.11:

$ podman exec -it ollama ollama ps
NAME          ID              SIZE     PROCESSOR    CONTEXT    UNTIL              
gemma3:12b    f4031aab637d    11 GB    100% GPU     8192       2 minutes from now

and radeontop confirms the GPU usage.

I'm using Ubuntu 24.04.3 LTS with kernel 6.14.0-33-generic. No custom patches.

lspci identifies my GPU as VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Vega 10 XTX [Radeon Vega Frontier Edition].
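The vendor:device pair that appears later in the kfd log ("added device 1002:6863") is the same pair `lspci -nn` prints in brackets, which is a handy way to cross-check which device the kernel registered. A sketch; the sample line below is reconstructed from the lspci description in this report, with the numeric IDs taken from the dmesg output:

```shell
# Extract the [vendor:device] pair from an `lspci -nn`-style line.
# Sample line reconstructed from this report (IDs from the dmesg log);
# on a live system you would use: lspci -nn | grep -i vga
line='03:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Vega 10 XTX [Radeon Vega Frontier Edition] [1002:6863]'
printf '%s\n' "$line" | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tail -n1
```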

I know this is a rather rare card, but I've been using ollama for several months now without any issues and with perfect GPU acceleration. Only after upgrading to the newest version did the GPU stop working.

Here is the dmesg:

$ sudo dmesg -T | grep amdgpu
[Fri Oct 24 23:05:26 2025] [drm] amdgpu kernel modesetting enabled.
[Fri Oct 24 23:05:26 2025] amdgpu: Virtual CRAT table created for CPU
[Fri Oct 24 23:05:26 2025] amdgpu: Topology: Add CPU node
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: enabling device (0006 -> 0007)
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: detected ip block number 0 <soc15_common>
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: detected ip block number 1 <gmc_v9_0>
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: detected ip block number 2 <vega10_ih>
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: detected ip block number 3 <psp>
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: detected ip block number 4 <powerplay>
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: detected ip block number 5 <dm>
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: detected ip block number 6 <gfx_v9_0>
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: detected ip block number 7 <sdma_v4_0>
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: detected ip block number 8 <uvd_v7_0>
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: detected ip block number 9 <vce_v4_0>
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: Fetched VBIOS from VFCT
[Fri Oct 24 23:05:26 2025] amdgpu: ATOM BIOS: 113-D0501100-109
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: vgaarb: deactivate vga console
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: Trusted Memory Zone (TMZ) feature not supported
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: MEM ECC is not presented.
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: SRAM ECC is not presented.
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: VRAM: 16368M 0x000000F400000000 - 0x000000F7FEFFFFFF (16368M used)
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: GART: 512M 0x0000000000000000 - 0x000000001FFFFFFF
[Fri Oct 24 23:05:26 2025] [drm] amdgpu: 16368M of VRAM memory ready
[Fri Oct 24 23:05:26 2025] [drm] amdgpu: 32032M of GTT memory ready.
[Fri Oct 24 23:05:26 2025] amdgpu: hwmgr_sw_init smu backed is vega10_smu
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: reserve 0x400000 from 0xf7fec00000 for PSP TMR
[Fri Oct 24 23:05:26 2025] snd_hda_intel 0000:03:00.1: bound 0000:03:00.0 (ops amdgpu_dm_audio_component_bind_ops [amdgpu])
[Fri Oct 24 23:05:26 2025] amdgpu: HMM registered 16368MB device memory
[Fri Oct 24 23:05:26 2025] kfd kfd: amdgpu: Allocated 3969056 bytes on gart
[Fri Oct 24 23:05:26 2025] kfd kfd: amdgpu: Total number of KFD nodes to be created: 1
[Fri Oct 24 23:05:26 2025] amdgpu: Virtual CRAT table created for GPU
[Fri Oct 24 23:05:26 2025] amdgpu: Topology: Add dGPU node [0x6863:0x1002]
[Fri Oct 24 23:05:26 2025] kfd kfd: amdgpu: added device 1002:6863
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: SE 4, SH per SE 1, CU per SH 16, active_cu_number 64
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: ring gfx uses VM inv eng 0 on hub 0
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: ring comp_1.0.0 uses VM inv eng 1 on hub 0
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: ring comp_1.1.0 uses VM inv eng 4 on hub 0
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: ring comp_1.2.0 uses VM inv eng 5 on hub 0
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: ring comp_1.3.0 uses VM inv eng 6 on hub 0
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: ring comp_1.0.1 uses VM inv eng 7 on hub 0
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: ring comp_1.1.1 uses VM inv eng 8 on hub 0
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: ring comp_1.2.1 uses VM inv eng 9 on hub 0
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: ring comp_1.3.1 uses VM inv eng 10 on hub 0
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: ring kiq_0.2.1.0 uses VM inv eng 11 on hub 0
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: ring sdma0 uses VM inv eng 0 on hub 8
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: ring page0 uses VM inv eng 1 on hub 8
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: ring sdma1 uses VM inv eng 4 on hub 8
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: ring page1 uses VM inv eng 5 on hub 8
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: ring uvd_0 uses VM inv eng 6 on hub 8
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: ring uvd_enc_0.0 uses VM inv eng 7 on hub 8
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: ring uvd_enc_0.1 uses VM inv eng 8 on hub 8
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: ring vce0 uses VM inv eng 9 on hub 8
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: ring vce1 uses VM inv eng 10 on hub 8
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: ring vce2 uses VM inv eng 11 on hub 8
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: amdgpu: Runtime PM not available
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: [drm] Registered 6 planes with drm panic
[Fri Oct 24 23:05:26 2025] [drm] Initialized amdgpu 3.61.0 for 0000:03:00.0 on minor 2
[Fri Oct 24 23:05:26 2025] fbcon: amdgpudrmfb (fb0) is primary device
[Fri Oct 24 23:05:26 2025] amdgpu 0000:03:00.0: [drm] fb0: amdgpudrmfb frame buffer device

Relevant log output


OS

Linux

GPU

AMD

CPU

Intel

Ollama version

0.12.6-rocm

GiteaMirror added the bug label 2026-04-12 21:09:57 -05:00

@rick-github commented on GitHub (Oct 24, 2025):

As of 0.12.5 Vega 10 is not supported by ROCm. The soon-to-be-released Vulkan support will work with this GPU. If you would like to try it out, check out the repo and build the project. Otherwise stick with 0.12.4 and wait for the Vulkan support.


@diekmann commented on GitHub (Oct 25, 2025):

Thanks! The changelog is really great! Downgrading to docker.io/ollama/ollama:0.12.3-rocm indeed works.

(I will wait for a container build to test Vulkan)
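The downgrade described above can be sketched as pinning an explicit, known-good image tag instead of the floating :rocm tag, so a routine pull can't silently change GPU support. A sketch assuming the same podman setup as in the report; 0.12.3-rocm is the tag reported working in this thread:

```shell
# Pin an explicit release tag rather than the floating :rocm tag.
# (0.12.3-rocm is the last tag reported working in this thread.)
TAG=0.12.3-rocm
IMAGE="docker.io/ollama/ollama:${TAG}"
echo "$IMAGE"
# With podman installed, recreate the container against the pinned tag:
#   podman stop ollama && podman rm ollama
#   podman run --detach --group-add keep-groups --device /dev/kfd \
#     --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 \
#     --name ollama "$IMAGE"
```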


@GreenShadows commented on GitHub (Nov 19, 2025):

Vega 10 should work under the generic gfx90X target, which is meant to cover all Vega-based GPUs, as opposed to gfx906, which is specifically Radeon VII (Vega 20):

https://rocm.nightlies.amd.com/v2/gfx90X-dcgpu/


Reference: github-starred/ollama#8473