[GH-ISSUE #14855] AMD Strix Halo (gfx1151) — ROCm Working Guide: 40 tok/s on 30B models #56095

Closed
opened 2026-04-29 10:15:26 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @queensone on GitHub (Mar 14, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14855

What is the issue?

AMD Strix Halo (gfx1151) — ROCm Working Guide: 40 tok/s on 30B models

Hardware

  • CPU/GPU: AMD Ryzen AI MAX+ (Strix Halo) — Radeon 8060S Graphics (gfx1151)
  • RAM: 128GB unified memory (~110GB available VRAM)
  • OS: Kubuntu 25.10, kernel 6.19
  • Ollama: v0.18.0

TL;DR

ROCm works natively on gfx1151 with HSA_OVERRIDE_GFX_VERSION=11.5.1. No Vulkan needed. Getting 40 tok/s on GLM-4.7-flash Q8_0 (30B) and Qwen3.5 35B at full context. Way faster than Vulkan, which hangs or crashes on these models.

What doesn't work

  • Vulkan backend (OLLAMA_VULKAN=1 + HIP_VISIBLE_DEVICES=-1): Hangs on inference for Qwen3.5 models (known issues #14487, #14509). GLM loads but sometimes crashes. GPU shows 100% usage but produces no tokens. (The log check after this list shows which backend Ollama actually picked.)
  • ROCm without HSA override: gfx1151 not recognized by default in some ROCm builds.
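
To confirm which backend Ollama actually selected, the service log is the quickest check. A minimal sketch, assuming a standard systemd install with the default unit name:

```bash
# Dump the Ollama service log and find the backend/device line.
# A working ROCm setup prints "library=ROCm compute=gfx1151";
# a Vulkan fallback names a Vulkan device instead.
journalctl -u ollama --no-pager | grep -i "inference compute"
```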

What works — the override.conf

Create/edit /etc/systemd/system/ollama.service.d/override.conf:

```ini
[Service]
Environment="OLLAMA_KEEP_ALIVE=-1"
Environment="OLLAMA_FLASH_ATTENTION=1"
Environment="HSA_OVERRIDE_GFX_VERSION=11.5.1"
```

Key points:

  • Do NOT set OLLAMA_VULKAN=1
  • Do NOT set HIP_VISIBLE_DEVICES=-1 (this disables ROCm GPU access)
  • HSA_OVERRIDE_GFX_VERSION=11.5.1 tells ROCm to recognize gfx1151

Then reload and restart:

```bash
sudo systemctl daemon-reload
sudo systemctl restart ollama
```
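
To double-check that the drop-in actually applied, systemd can print the merged unit and the environment it will pass to the service:

```bash
# Show the unit file plus all drop-ins; override.conf should be listed.
systemctl cat ollama

# Show the environment block systemd passes to the ollama process.
systemctl show ollama --property=Environment
```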

Verification

Ollama logs should show:

```
inference compute id=0 library=ROCm compute=gfx1151 name=ROCm0 description="Radeon 8060S Graphics" total="111.5 GiB" available="110.5 GiB"
```

All layers offloaded to GPU:

```
offloaded 48/48 layers to GPU
model weights device=ROCm0 size="29.3 GiB"
```
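
Once a model is loaded, ollama ps is another quick sanity check for GPU placement:

```bash
# Lists loaded models; the PROCESSOR column should read "100% GPU"
# when every layer is offloaded.
ollama ps
```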

Benchmarks

| Model | Quant | Context | tok/s | VRAM used |
|-------|-------|---------|-------|-----------|
| GLM-4.7-flash | Q8_0 (30B) | 4K | ~40 | ~30 GB |
| GLM-4.7-flash | Q8_0 (30B) | 202K | ~40 | ~50 GB |
| Qwen3.5 | 35B-a3b Q8_0 | 65K | ~40 | ~36 GB |
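
The tok/s figures are easy to reproduce with Ollama's built-in timing output: run with --verbose and it prints prompt and generation rates after each reply. A sketch, with the model tag as a placeholder for whatever tag you actually pulled:

```bash
# --verbose appends "prompt eval rate" and "eval rate" (tokens/s)
# after each response; substitute your real model tag.
ollama run glm-4.7-flash:q8_0 --verbose "Summarize unified memory in one paragraph."
```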

VRAM Tips for 128GB Unified Memory

With unified memory, the GPU shares RAM with the system. Be careful with large context windows:

  • 200K+ context on a 30B model uses ~50-70GB, leaving less for your desktop/browser
  • If VRAM runs out, the amdgpu kernel driver spams Not enough memory for command submission! and can crash the entire system
  • Use Ollama's num_ctx parameter or your frontend's context setting to limit context size (examples after this list)
  • A safe starting point: 65K-96K context for 30B models, which uses ~36-45GB
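
A couple of ways to cap num_ctx, again with the model tag as a placeholder:

```bash
# Per-request via the API: the options.num_ctx field limits context.
curl http://localhost:11434/api/generate -d '{
  "model": "glm-4.7-flash:q8_0",
  "prompt": "hello",
  "options": { "num_ctx": 65536 }
}'

# Interactively, inside an `ollama run` session:
#   /set parameter num_ctx 65536
```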

What I tried that failed

  1. Vulkan + HIP_VISIBLE_DEVICES=-1 — worked initially for some models but hung on Qwen3.5 and became unreliable
  2. ROCm without HSA_OVERRIDE_GFX_VERSION — GPU not detected
  3. 202K context with 30B model — works but risky on unified memory, can OOM and crash the system if other apps use too much RAM (a VRAM monitoring sketch follows this list)
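
For item 3, keeping an eye on VRAM while a large context fills helps avoid the hard crash. A sketch, assuming the ROCm userland tools are installed:

```bash
# Refresh VRAM usage every 2 seconds; rocm-smi ships with ROCm.
watch -n 2 rocm-smi --showmeminfo vram
```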

Hope this helps other Strix Halo users! This took a full day of troubleshooting to figure out.


GiteaMirror added the documentation, amd, and bug labels 2026-04-29 10:15:27 -05:00
Author
Owner

@queensone commented on GitHub (Mar 15, 2026):

Update: Ollama 0.18 has native gfx1151 support!

Just tested — HSA_OVERRIDE_GFX_VERSION=11.5.1 is no longer needed on Ollama 0.18. ROCm detects the Radeon 8060S (gfx1151) natively without any override.

Updated override.conf (simplified):
```ini
[Service]
Environment="OLLAMA_KEEP_ALIVE=-1"
Environment="OLLAMA_FLASH_ATTENTION=1"
```

That's it — two lines. If you're on Ollama 0.17 or earlier, you still need HSA_OVERRIDE_GFX_VERSION=11.5.1.
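
If you're not sure which version you're running, the CLI reports it directly:

```bash
# Prints the installed version, e.g. "ollama version is 0.18.0".
ollama --version
```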


Even simpler now — two lines instead of three!

Author
Owner

@rick-github commented on GitHub (Mar 15, 2026):

> That's it — two lines. If you're on Ollama 0.17 or earlier, you still need HSA_OVERRIDE_GFX_VERSION=11.5.1.

ollama has natively supported gfx1151 on linux since 0.5.13.

```
ollama  | time=2026-03-15T16:48:59.812Z level=INFO source=routes.go:1277 msg="Listening on [::]:11434 (version 0.5.13)"
ollama  | time=2026-03-15T16:48:59.812Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
ollama  | time=2026-03-15T16:48:59.815Z level=INFO source=amd_linux.go:386 msg="amdgpu is supported" gpu=0 gpu_type=gfx1151
ollama  | time=2026-03-15T16:48:59.817Z level=INFO source=types.go:130 msg="inference compute" id=0 library=rocm variant="" compute=gfx1151 driver=6.12 name=1002:1586 total="96.0 GiB" available="95.9 GiB"
```
Author
Owner

@queensone commented on GitHub (Mar 16, 2026):

Thanks for the clarification! The issue on my end was that I had OLLAMA_VULKAN=1 and HIP_VISIBLE_DEVICES=-1 in my override.conf, which was hiding ROCm entirely. Once I removed those, ROCm detected gfx1151 natively.

So the real fix for anyone stuck on Vulkan with Strix Halo: just remove OLLAMA_VULKAN=1 and HIP_VISIBLE_DEVICES=-1 from your override (one way to do that is sketched below). ROCm works out of the box.
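
A minimal way to make that edit in place, assuming the stock systemd service:

```bash
# Opens override.conf in an editor; delete the OLLAMA_VULKAN and
# HIP_VISIBLE_DEVICES lines, then save and exit.
sudo systemctl edit ollama

# Pick up the change.
sudo systemctl daemon-reload
sudo systemctl restart ollama
```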

Performance went from hanging/unreliable on Vulkan to a solid 40 tok/s on ROCm. Benchmarks and config details are in the original post above.
