[GH-ISSUE #8473] HSA_OVERRIDE_GFX_VERSION_0 while running on only one GPU #5454

Open
opened 2026-04-12 16:41:09 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @occasional-contributor on GitHub (Jan 18, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8473

What is the issue?

I am running ollama:rocm in a docker container on Ubuntu 24.04. My GPU is an RX 6600 (gfx1032). Everything works fine when I run ollama using

docker run -d \
    --device /dev/kfd \
    --device /dev/dri \
    -v ollama:/root/.ollama \
    -p 11434:11434 \
    --restart unless-stopped \
    --env HSA_OVERRIDE_GFX_VERSION="10.3.0" \
    --env OLLAMA_KEEP_ALIVE="-1" \
    --name ollama-rocm \
    ollama/ollama:rocm
time=2025-01-18T01:24:54.470Z level=INFO source=routes.go:1238 msg="Listening on [::]:11434 (version 0.5.5-0-g32bd37a-dirty)"
time=2025-01-18T01:24:54.470Z level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu_avx2 rocm_avx cpu cpu_avx]"
time=2025-01-18T01:24:54.470Z level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-01-18T01:24:54.472Z level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2025-01-18T01:24:54.472Z level=INFO source=amd_linux.go:391 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION=10.3.0
time=2025-01-18T01:24:54.472Z level=INFO source=types.go:131 msg="inference compute" id=0 library=rocm variant="" compute=gfx1032 driver=0.0 name=1002:73ff total="8.0 GiB" available="8.0 GiB"

However, when I run using

docker run -d \
    --device /dev/kfd \
    --device /dev/dri \
    -v ollama:/root/.ollama \
    -p 11434:11434 \
    --restart unless-stopped \
    --env HSA_OVERRIDE_GFX_VERSION_0="10.3.0" \
    --env OLLAMA_KEEP_ALIVE="-1" \
    --name ollama-rocm \
    ollama/ollama:rocm

ollama runs only on CPU:

time=2025-01-18T01:25:58.373Z level=INFO source=routes.go:1238 msg="Listening on [::]:11434 (version 0.5.5-0-g32bd37a-dirty)"
time=2025-01-18T01:25:58.373Z level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu_avx2 rocm_avx cpu cpu_avx]"
time=2025-01-18T01:25:58.373Z level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-01-18T01:25:58.375Z level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2025-01-18T01:25:58.378Z level=WARN source=amd_linux.go:378 msg="amdgpu is not supported (supported types:[gfx1030 gfx1100 gfx1101 gfx1102 gfx900 gfx906 gfx908 gfx90a gfx940 gfx941 gfx942])" gpu_type=gfx1032 gpu=0 library=/usr/lib/ollama
time=2025-01-18T01:25:58.378Z level=WARN source=amd_linux.go:385 msg="See https://github.com/ollama/ollama/blob/main/docs/gpu.md#overrides for HSA_OVERRIDE_GFX_VERSION usage"
time=2025-01-18T01:25:58.378Z level=INFO source=amd_linux.go:404 msg="no compatible amdgpu devices detected"
time=2025-01-18T01:25:58.378Z level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
time=2025-01-18T01:25:58.378Z level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="62.7 GiB" available="61.2 GiB"

I am trying this because eventually I want to get newer AMD GPUs and use them concurrently for ollama.
Is this not supported when running ollama in docker?

If I want to use this RX 6600 and an RX 7800 in the same system, how should I do it with docker?

OS

Linux

GPU

AMD

CPU

Intel

Ollama version

0.5.5-0-g32bd37a-dirty

GiteaMirror added the bug label 2026-04-12 16:41:09 -05:00
@dc740 commented on GitHub (Feb 1, 2025):

HSA_OVERRIDE_GFX_VERSION_x is documented but it's not implemented! I checked the source code, and the commit that added it to the documentation did not add an implementation: https://github.com/ollama/ollama/commit/d7c94e0ca6c39f6c64f74799c0dc8f3f91079edc

In summary:
The variables HSA_OVERRIDE_GFX_VERSION_0, HSA_OVERRIDE_GFX_VERSION_1, etc. don't do anything at all. I think they have to be removed (or implemented).


@occasional-contributor commented on GitHub (Feb 2, 2025):

After looking at the commit where that was documented and the rest of the source code, I agree. This was never implemented.


@headcr4sh commented on GitHub (Mar 10, 2025):

I think this is an upstream issue, in that changes in ROCm may still need to be merged.
See: https://github.com/ROCm/ROCT-Thunk-Interface/pull/104

(not quite sure, though ... feel free to correct me if I am wrong here)


@occasional-contributor commented on GitHub (Mar 10, 2025):

I have moved on from my RX 6600. I am using two RX 7800. While this is still an issue, it does not affect me at this time.


@colin-stubbs commented on GitHub (Mar 28, 2025):

> I think this is an upstream issue, whereas changes in ROCM may still need to be merged. See: ROCm/ROCT-Thunk-Interface#104
>
> (not quite sure, though ... feel free to correct me if I am wrong here)

It looks like it... ROCm 6.2.x includes the fix, ROCm 6.1.x doesn't, and ollama still seems to use 6.1.x?


@colin-stubbs commented on GitHub (Mar 28, 2025):

Perhaps not... ollama's rocm currently includes libhsa-runtime64.so.1.14.60303, which seems to look for env vars in the format HSA_OVERRIDE_GFX_VERSION_%d, so it does seem to know to look for those.

Digging into the ollama code, it seems ollama just doesn't make use of anything other than HSA_OVERRIDE_GFX_VERSION; e.g. discover/amd_linux.go only reads the plain HSA_OVERRIDE_GFX_VERSION in func AMDGetGPUInfo() ([]RocmGPUInfo, error).
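For clarity, the per-device lookup being discussed can be sketched in shell. This is illustrative only — the real fallback logic lives inside libhsa-runtime64, not ollama or a shell script — but it shows the behaviour the runtime appears to implement: use `HSA_OVERRIDE_GFX_VERSION_<n>` for device index `n` if set, otherwise fall back to the plain `HSA_OVERRIDE_GFX_VERSION`.

```sh
# Illustrative sketch only; override_for N echoes the override that
# would apply to device index N under the assumed fallback rule.
override_for() {
  idx="$1"
  eval "v=\${HSA_OVERRIDE_GFX_VERSION_${idx}:-}"   # per-device variable, if set
  echo "${v:-${HSA_OVERRIDE_GFX_VERSION:-}}"       # else the global fallback
}

HSA_OVERRIDE_GFX_VERSION=10.3.0
HSA_OVERRIDE_GFX_VERSION_1=11.0.1

override_for 0   # prints 10.3.0 (global fallback)
override_for 1   # prints 11.0.1 (per-device override wins)
```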

Either way, this is an issue, as I'd like to make use of the integrated GPU in a 9950X CPU as well as a 7800 XT external GPU.

The integrated GPU is unfortunately not fully supported as yet, though it has been working well with HSA_OVERRIDE_GFX_VERSION=10.3.0. But having added an external GPU, I've now got the issue that I can't use both at the same time via the same ollama process.

Forcing them both to 10.3.0 does not work, nor does forcing them both to 11.0.1. This just leads to something dumping core and the system becoming unresponsive.

[root@ms-a1-01 ~]# dmesg | grep 9950
[    0.145637] smpboot: CPU0: AMD Ryzen 9 9950X 16-Core Processor (family: 0x1a, model: 0x44, stepping: 0x0)
[root@ms-a1-01 ~]# 
[root@ms-a1-01 ~]# lspci | grep VGA
03:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 32 [Radeon RX 7700 XT / 7800 XT] (rev c8)
08:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Granite Ridge [Radeon Graphics] (rev c1)
[root@ms-a1-01 ~]# 
Mar 28 18:50:47 ms-a1-01 ollama[1821]: time=2025-03-28T18:50:47.466+10:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Mar 28 18:50:47 ms-a1-01 ollama[1821]: time=2025-03-28T18:50:47.484+10:00 level=INFO source=amd_linux.go:386 msg="amdgpu is supported" gpu=GPU-b4e00d1f602cee35 gpu_type=gfx1101
Mar 28 18:50:47 ms-a1-01 ollama[1821]: time=2025-03-28T18:50:47.484+10:00 level=WARN source=amd_linux.go:376 msg="amdgpu is not supported (supported types:[gfx1010 gfx1012 gfx1030 gfx1100 gfx1101 gfx1102 gfx1151 gfx1200 gfx1201 gfx900 gfx906 gfx908 gfx90a gfx942])" gpu_type=gfx1036 gpu=1 library=/usr/local/lib/ollama/rocm
Mar 28 18:50:47 ms-a1-01 ollama[1821]: time=2025-03-28T18:50:47.484+10:00 level=WARN source=amd_linux.go:383 msg="See https://github.com/ollama/ollama/blob/main/docs/gpu.md#overrides for HSA_OVERRIDE_GFX_VERSION usage"
Mar 28 18:50:47 ms-a1-01 ollama[1821]: time=2025-03-28T18:50:47.488+10:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-b4e00d1f602cee35 library=rocm variant="" compute=gfx1101 driver=6.10 name=1002:747e total="16.0 GiB" available="15.8 GiB"

@ai-nikolai commented on GitHub (Aug 26, 2025):

@colin-stubbs is there an update on this? What are the recommended ways of dealing with this issue? Is there a table of which GPU should have which HSA_OVERRIDE_GFX_VERSION set?

In conclusion: it would be awesome to have a summary of how to get things working on different GPUs. Is there a resource somewhere (or could you give advice on that)?


Specifically, this is my case: I have several MI210s, and using the latest ROCm + PyTorch, it seems multiple GPUs lead to this error.

(VllmWorker TP0 pid=931083) /usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/utils.py:98: UserWarning: failed to open file '/app/afo_tune_device_0_full.csv' for writing; your tuning results will not be saved (Triggered internally at /app/pytorch/aten/src/ATen/hip/tunable/Tunable.cpp:645.)
(VllmWorker TP0 pid=931083)   return torch.nn.functional.linear(x, weight, bias)
Memory access fault by GPU node-4 (Agent handle: 0x4419e6b0) on address 0x7f74a1400000. Reason: Unknown.

(I know this is vLLM.) However, people have suggested that the error might lie in setting the:


@tkamucheka commented on GitHub (Feb 9, 2026):

I’ve been testing a dual-GPU setup with an RX 7700 XT and an RX 6600 on Linux and wanted to share a finding that might clear up some of the confusion here regarding HSA_OVERRIDE_GFX_VERSION_[N].

I successfully got both GPUs working simultaneously by using 1-based indexing for the override variables. This contradicts the standard 0-based indexing shown in rocm-smi and in the docs here. I'm not sure if this is because my CPU is being treated as device 0; I don't have enough information to say for sure.

Running rocm-enumerate-agents shows the following:

gfx1101
gfx1030

My setup:

OS: Ubuntu 24.04 (6.8.0-47-generic)
ROCm Version: 6.12.12

Agent (rocminfo)   ID (rocm-smi)   HSA_OVERRIDE_GFX_VERSION_%d   Device
1                  -               N/A                           i7-9700K
2                  0               HSA_OVERRIDE_GFX_VERSION_1    RX 7700 XT
3                  1               HSA_OVERRIDE_GFX_VERSION_2    RX 6600

In my case, the configuration that worked was:

# RX 7700 XT (Primary, Agent 2)
export HSA_OVERRIDE_GFX_VERSION_1=11.0.1 
# RX 6600 (Agent 3)
export HSA_OVERRIDE_GFX_VERSION_2=10.3.0

Hopefully, this helps anyone else struggling with this.
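Applied to the docker setup from the original report, the two per-agent overrides above would be passed like this. This is a sketch under the assumptions in the comment above — 1-based agent indexing and a ROCm runtime that honours the per-index variables; adjust indices and gfx versions to your own hardware:

```sh
docker run -d \
    --device /dev/kfd \
    --device /dev/dri \
    -v ollama:/root/.ollama \
    -p 11434:11434 \
    --restart unless-stopped \
    --env HSA_OVERRIDE_GFX_VERSION_1="11.0.1" \
    --env HSA_OVERRIDE_GFX_VERSION_2="10.3.0" \
    --name ollama-rocm \
    ollama/ollama:rocm
```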


Reference: github-starred/ollama#5454