[GH-ISSUE #9663] Please document when AMD iGPU support is planned #6304

Open
opened 2026-04-12 17:45:43 -05:00 by GiteaMirror · 11 comments

Originally created by @justincranford on GitHub (Mar 11, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9663

Originally assigned to: @dhiltgen on GitHub.

Request

Could you add AMD iGPU acceleration to the roadmap?

It would be great to get clarification on whether and when AMD iGPU support might be coming.

I found these comments in the source code that imply AMD iGPU support has been considered.

https://github.com/ollama/ollama/blob/aee28501b592e2fe98863212913ffa8fb22e1ca0/discover/amd_windows.go#L124

https://github.com/ollama/ollama/blob/aee28501b592e2fe98863212913ffa8fb22e1ca0/discover/amd_linux.go#L293

I can't find any documentation under github.com/ollama/ollama that says if or when AMD iGPUs will be supported by Ollama's ROCm build.

My Use case

I am running the latest Ollama bundled in the latest Open WebUI Docker container:

  • https://github.com/open-webui/open-webui?tab=readme-ov-file#installing-open-webui-with-bundled-ollama-support

docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama

My desktop for software development:

  • CPU: AMD 7600X
  • RAM: 2x16GB DDR5-6000 CL30
  • SSD: 4TB Samsung 990 Pro PCIe 4.0
  • dGPU: n/a
  • IDE: VS Code w/ Copilot
  • OS: Windows 11

Ideally, I would love to test and compare how ollama performs with AMD CPU vs AMD iGPU.
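
A rough way to capture the CPU baseline today (a sketch only; the model name is just an example, and the container name comes from the docker run command above):

# Run inside the bundled container; --verbose prints timing stats
# (prompt eval rate and eval rate, in tokens/s) after the response.
docker exec -it open-webui ollama run llama3.2 --verbose "Summarize mutexes in one paragraph."
# Re-run the same prompt once iGPU acceleration lands and compare the rates.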

I am not a gamer, and I don't ever plan to buy a discrete GPU. The iGPU is more than fast enough. When the time comes to invest in a hardware upgrade, I will likely look at a CPU, not a dGPU.

Thank you.

GiteaMirror added the feature request, amd labels 2026-04-12 17:45:43 -05:00

@rick-github commented on GitHub (Mar 12, 2025):

You have to pass through the GPU devices for ollama to make use of the GPU, see https://github.com/ollama/ollama/blob/main/docs/docker.md#amd-gpu.
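
For reference, a minimal sketch of that pass-through applied to the bundled Open WebUI command from this issue, borrowing the device flags from that doc (Linux host; this assumes the bundled image ships ROCm support):

docker run -d -p 3000:8080 \
  --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:ollama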


@jhemmond commented on GitHub (Mar 12, 2025):

@justincranford This is already implemented in the AMD ROCm fork and in my favorite AMD Vulkan fork. Links to these, and to the latest Vulkan binaries, are below:

OllamaSetup.zip - Linux: https://github.com/user-attachments/files/19150979/OllamaSetup.zip
ollama-windows-amd64.zip: https://github.com/user-attachments/files/19150980/ollama-windows-amd64.zip

Files above are for Vulkan from this:
https://github.com/whyvl/ollama-vulkan/issues/7

Ollama for AMD: https://github.com/likelovewant/ollama-for-amd/releases

That said, I would love for the main Ollama (this repo) to natively support iGPUs.


@zztop007 commented on GitHub (Mar 15, 2025):

@jhemmond Are you planning to support gfx1151? The latest Ollama 0.6.0 includes ROCm support for gfx1151, but so far it still gives an error. Your link to Ollama Vulkan worked perfectly! Where can one turn for a potentially updated version in the future?


@jhemmond commented on GitHub (Mar 15, 2025):

Please thank the repo owner @McBane87, who is awesome! Since Vulkan is a generic API, gfx1151 should work automatically once AMD updates their drivers for the new iGPU. I'd check their release notes:

https://www.amd.com/en/resources/support-articles/release-notes/RN-RAD-WIN-VULKAN.html


@McBane87 commented on GitHub (Mar 15, 2025):

> Please thank the repo owner @McBane87, who is awesome! Since Vulkan is a generic API, gfx1151 should work automatically once AMD updates their drivers for the new iGPU. I'd check their release notes:
>
> https://www.amd.com/en/resources/support-articles/release-notes/RN-RAD-WIN-VULKAN.html

I'm not the repo owner, but thanks ^^
I'm just a guy like you, who found the repo, created by whyvl, and was playing around with things there :-D


@jhemmond commented on GitHub (Mar 15, 2025):

My bad, but I'm still grateful :)
We need to get that repo constantly updated with the latest builds. I hope I can carve out some time to start building the Windows binaries this weekend, since Ollama main is pumping out releases consistently. 0.6.1 is a pre-release, and I hope it addresses the Gemma load issues.


@zztop007 commented on GitHub (Mar 17, 2025):

I would love to have a later build if and when it pops up (for gemma).


@McBane87 commented on GitHub (Mar 18, 2025):

> I would love to have a later build if and when it pops up (for gemma).

https://github.com/whyvl/ollama-vulkan/issues/7#issue-2828064858


@zztop007 commented on GitHub (Mar 18, 2025):

> > I would love to have a later build if and when it pops up (for gemma).
>
> https://github.com/whyvl/ollama-vulkan/issues/7#issue-2828064858

You made my week


@YuliaS11 commented on GitHub (Apr 14, 2025):

I was able to build KoboldCpp successfully, simply by following the instructions and specifying the architecture as gfx1151, using the Windows HIP SDK 6.2.4.
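
For anyone attempting the same, the general shape of such a build is pinning the GPU target at configure time. A sketch only, using llama.cpp-style HIP flags; exact variable names vary by project and version, so treat KoboldCpp's own README as authoritative:

# Illustrative: configure a HIP build pinned to the gfx1151 target.
cmake -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1151
cmake --build build --config Release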


@RangerMauve commented on GitHub (Jul 3, 2025):

I was under the impression that ROCm explicitly doesn't support iGPUs. Was there some sort of fix to make this not an issue? I found that forcing iGPU usage would crash my computer after a while, which isn't ideal.
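
For context, "forcing" iGPU usage under ROCm is usually done by spoofing a nearby supported target via the HSA_OVERRIDE_GFX_VERSION variable described in Ollama's GPU docs. A sketch; the version value is only an example and must roughly match your iGPU's gfx family:

# Tell ROCm to treat the iGPU as a supported target (value is illustrative).
HSA_OVERRIDE_GFX_VERSION=10.3.0 ollama serve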

