[GH-ISSUE #3572] Support for AMD Radeon RX 570 series #64242

Closed
opened 2026-05-03 16:43:51 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @Mr-Ples on GitHub (Apr 10, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3572

Originally assigned to: @dhiltgen on GitHub.

What are you trying to do?

Trying to use my AMD GPU to accelerate Ollama output.

How should we solve this?

Add support for the AMD Radeon RX 570 series.

What is the impact of not solving this?

Currently I'm not using Ollama much because of this.

Anything else?

Ollama is the best application ever, hands down.


@muzig commented on GitHub (Apr 11, 2024):

Please add support for the AMD Radeon RX 5700 as well.


@navr32 commented on GitHub (Apr 14, 2024):

If you want, you can do this with llama.cpp in Vulkan mode. I have tested it on an RX 480 with 8 GB VRAM. With Ollama, though, I haven't tried compiling with Vulkan mode forced in the llama.cpp build that is integrated into Ollama.
So you can either use llama.cpp on its own,
or try forcing Vulkan mode when building Ollama yourself.
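For reference, a minimal sketch of the standalone llama.cpp route described above. This assumes the current upstream llama.cpp build conventions; the CMake flag name has changed over time (older builds used `LLAMA_VULKAN` instead of `GGML_VULKAN`), and the model path here is purely illustrative:

```shell
# Build llama.cpp with the Vulkan backend enabled (requires the Vulkan SDK/headers).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release

# Run with all layers offloaded to the GPU via -ngl; replace the model path with your own.
./build/bin/llama-cli -m models/your-model.gguf -ngl 99 -p "Hello"
```

This sidesteps ROCm entirely, which is why it can work on older Polaris-era cards (RX 470/480/570/580) that ROCm no longer officially supports.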


@dhiltgen commented on GitHub (Apr 16, 2024):

Dup of #2503


@windblade89 commented on GitHub (Apr 17, 2024):

Can you point me in the right direction to get an AMD RX 580 8GB working with this?


@dhiltgen commented on GitHub (Apr 23, 2024):

Oops, I referenced the wrong issue number. This is actually a dup of #2453.


Reference: github-starred/ollama#64242