[GH-ISSUE #3781] How to run Ollama using AMD RX 6600 XT on Windows 11? - gfx1032, workaround works on linux only #28096

Closed
opened 2026-04-22 05:55:02 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @NAME0x0 on GitHub (Apr 20, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3781

What is the issue?

Responses from all LLMs are slow because the CPU is prioritised. I want to use my RX 6600 XT GPU, but the workaround apparently works only on Linux. Furthermore, the ROCm runtime is available for the RX 6600 XT, but not the HIP SDK, which is apparently what is needed for my GPU to run LLMs. However, the Ollama documentation says my GPU is supported. How do I make use of it, since it is not being utilised at all?

OS

Windows

GPU

AMD

CPU

Intel

Ollama version

0.1.32
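For context, the Linux-only workaround referenced in the title is commonly reported to be the `HSA_OVERRIDE_GFX_VERSION` environment variable, which makes ROCm load the gfx1030 kernels on officially unsupported RDNA2 cards such as the gfx1032 RX 6600 XT. A minimal sketch (the thread itself does not spell this out, so treat it as an assumption):

```shell
# Linux-only: tell ROCm to treat the RX 6600 XT (gfx1032) as gfx1030.
export HSA_OVERRIDE_GFX_VERSION=10.3.0
# ollama serve   # then start the server with the override in place
echo "$HSA_OVERRIDE_GFX_VERSION"
```

This variable is not honoured by the Windows build of Ollama at the version in question, which is why the comments below resort to rebuilding from source.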

GiteaMirror added the bug label 2026-04-22 05:55:02 -05:00
Author
Owner

@likelovewant commented on GitHub (Apr 21, 2024):

Make sure your GPU has ROCm support first. Download the patched ROCm libraries from GitHub (e.g. [here](https://github.com/brknsoul/ROCmLibs/)) and replace the corresponding files in your HIP SDK installation.
Then `git clone` Ollama and edit `ollama\llm\generate\gen_windows.ps1` to add your GPU's gfx number. Follow steps 1 and 2 of the development guide, then search for `gfx1102` and add your GPU target wherever `gfx1102` appears. Build again, or simply follow the README in the `app` folder to build an Ollama installer. Ollama should then run on your GPU.
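The edit to `gen_windows.ps1` can be sketched as follows. The variable name and target list below are stand-ins (the real layout of the script varies between Ollama releases), so this illustrates the shape of the change rather than the exact file contents:

```shell
# Stand-in excerpt of gen_windows.ps1 so the commands are self-contained;
# the variable name and target list are assumptions, not the real file.
cat > gen_windows_excerpt.ps1 <<'EOF'
$script:GPU_LIST = @("gfx1030", "gfx1100", "gfx1102")
EOF

# Add the RX 6600 XT target (gfx1032) next to the existing gfx1102 entry,
# mirroring "add your GPU wherever gfx1102 shows".
sed -i 's/"gfx1102"/"gfx1102", "gfx1032"/' gen_windows_excerpt.ps1
cat gen_windows_excerpt.ps1
```

The same one-line addition would be made by hand in the real script before rebuilding.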

Author
Owner

@dhiltgen commented on GitHub (Apr 24, 2024):

Let's keep the conversation over on the dup issue #3107

Author
Owner

@Sohnny0 commented on GitHub (May 9, 2024):

> Make sure your GPU has ROCm support first. Download the patched ROCm libraries from GitHub (e.g. [here](https://github.com/brknsoul/ROCmLibs/)) and replace the corresponding files in your HIP SDK installation. Then `git clone` Ollama and edit `ollama\llm\generate\gen_windows.ps1` to add your GPU's gfx number. Follow steps 1 and 2 of the development guide, then search for `gfx1102` and add your GPU target wherever `gfx1102` appears. Build again, or simply follow the README in the `app` folder to build an Ollama installer. Ollama should then run on your GPU.

Thank you very much for your method. I successfully added gfx1031, and my 6700 XT graphics card can run Llama 3 perfectly.

Author
Owner

@usmandilmeer commented on GitHub (May 14, 2024):

@Sohnny0
Hi,
I am stuck and did not understand this guide. I have Windows 10 and an RX 6600 GPU.
Could you please guide me through the process step by step on Windows? If you could make a video, that would be really helpful for me and others.

Author
Owner

@muhammedaligurdal commented on GitHub (Jul 22, 2024):

Video please @Sohnny0


Reference: github-starred/ollama#28096