[GH-ISSUE #15482] ROCm on AMD Phoenix APUs (Radeon 780M iGPU) — HSA_OVERRIDE_GFX_VERSION required #35658

Open
opened 2026-04-22 20:19:49 -05:00 by GiteaMirror · 3 comments

Originally created by @rkocosmergon on GitHub (Apr 10, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15482

Sharing a finding that might help others running Ollama on AMD Ryzen 7000/8000 series with integrated graphics (Phoenix APUs, Radeon 780M).

Out of the box, Ollama doesn't detect the iGPU and falls back to CPU. The fix is one environment variable: export HSA_OVERRIDE_GFX_VERSION=11.0.0

This tells ROCm to treat the Phoenix APU as a supported gfx1100 target. Without it, the GPU sits idle — no error, no warning, just silent CPU fallback.
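
For reference, one way to make the override persistent when Ollama runs as the systemd service set up by the official install script (a sketch; adjust the unit name if your setup differs):

    # Add the override to the ollama unit via a systemd drop-in
    sudo systemctl edit ollama.service
    # In the editor that opens, add:
    #   [Service]
    #   Environment="HSA_OVERRIDE_GFX_VERSION=11.0.0"
    # Reload and restart so the service picks it up
    sudo systemctl daemon-reload
    sudo systemctl restart ollama

If you start ollama serve by hand instead, the plain export in the same shell is enough.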

After setting this and installing rocm-libs (a quick verification sketch follows the list):

  • llama3.2:3b: ~20 tok/s (was ~1 tok/s on CPU)
  • phi4-mini: 14-20 tok/s
  • 31.4 GiB unified memory available (shared system RAM)
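
To confirm the override took effect and a model is actually running on the iGPU rather than the CPU, something like this works (a sketch; the exact log lines vary by Ollama version):

    # Load a model, then check placement: the PROCESSOR column should read "100% GPU"
    ollama run llama3.2:3b "hello" >/dev/null
    ollama ps
    # Check what the server detected at startup (assumes the systemd service)
    journalctl -u ollama --no-pager | grep -i -E 'amdgpu|gfx|rocm'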

Discovered this on a Hetzner AX42-U (Ryzen 7 PRO 8700GE) where the iGPU isn't even listed in the server specs. Full benchmark writeup: https://cosmergon.com/reports/llm-benchmark-hetzner.html
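
If you're on similar hardware and unsure whether an iGPU exists at all, the PCI bus will show it even when the provider's spec sheet doesn't (a sketch; the exact device string varies, but Phoenix parts show up as an AMD/ATI display controller):

    # List display devices; the Radeon 780M appears as an AMD/ATI VGA controller
    lspci | grep -i -E 'vga|display'
    # Confirm the amdgpu kernel driver is bound to it
    lspci -k | grep -i -A 3 'vga'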

Might be worth adding a note about Phoenix APUs to the ROCm/AMD section of the Ollama docs?

@rick-github commented on GitHub (Apr 10, 2026):

https://github.com/ollama/ollama/blob/main/docs/gpu.mdx#overrides-on-linux:~:text=Radeon%20RX%207600-,gfx1103,-Radeon%20780M

@rkocosmergon commented on GitHub (Apr 10, 2026):

Thanks Rick — you're right, the GPU docs do list gfx1103 for the 780M. I missed that page during debugging.

The part that tripped me up was the full path from "Hetzner AX42-U with no GPU in specs" to "working ROCm inference": discovering the iGPU via lspci, installing rocm-libs, and finding the right HSA_OVERRIDE value. Leaving this here in case others search for a similar setup.
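
For the "finding the right HSA_OVERRIDE value" step, the gfx target the hardware reports can be read from rocminfo once the ROCm userspace is installed. A sketch assuming a Debian/Ubuntu host with AMD's ROCm apt repository configured (package names differ by distro, and rocminfo is usually its own package):

    # Install the ROCm runtime libraries plus the rocminfo tool
    sudo apt install rocm-libs rocminfo
    # Show the gfx target the agent exposes; the 780M reports gfx1103,
    # and the override above tells ROCm to use the gfx1100 (11.0.0) kernels instead
    rocminfo | grep -i 'gfx'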


@PureBlissAK commented on GitHub (Apr 18, 2026):

🤖 Automated Triage & Analysis Report

Issue: #15482
Analyzed: 2026-04-18T18:21:09.936776

Analysis

  • Type: unknown
  • Severity: medium
  • Components: unknown

Implementation Plan

  • Effort: medium
  • Steps:

This issue has been triaged and marked for implementation.


Reference: github-starred/ollama#35658