[GH-ISSUE #4464] Support RX6600 (gfx1032) on windows (gfx override works on linux) #2788

Open
opened 2026-04-12 13:07:03 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @usmandilmeer on GitHub (May 16, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4464

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Hi,
Ollama (0.1.32) is working great with Zluda using an AMD RX 6600 on Windows 10.

However, I have downloaded and tested all the later versions, from 0.1.33 to 0.1.38, and Ollama no longer works with Zluda.
It fails with error "0xc000001d".

So for now I have downgraded and am using 0.1.32 with Zluda.

Is this Zluda's issue or Ollama's?
Can anyone help me get newer versions of Ollama working?

OS

Windows

GPU

AMD

CPU

Intel

Ollama version

0.1.32

GiteaMirror added the bug, windows labels 2026-04-12 13:07:03 -05:00
Author
Owner

@4thanks commented on GitHub (May 16, 2024):

https://github.com/ollama/ollama/issues/4355
Try adding `HSA_OVERRIDE_GFX_VERSION=10.3.0` to your environment variables, and start with `zluda ollama.exe serve`, not `zluda ollama app.exe serve`.
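
The suggestion above can be sketched as a small script. This is an illustrative sketch only: the override value comes from the comment above, and the `zluda ollama.exe serve` launch line is shown as a comment because it assumes a Windows install with ZLUDA on the PATH.

```shell
# Tell ROCm to treat the RX 6600 (gfx1032) as gfx1030, which has
# prebuilt kernels. (On Windows cmd, use: set HSA_OVERRIDE_GFX_VERSION=10.3.0)
export HSA_OVERRIDE_GFX_VERSION=10.3.0
echo "HSA_OVERRIDE_GFX_VERSION=$HSA_OVERRIDE_GFX_VERSION"

# Then launch the server binary itself (not the tray app) through ZLUDA,
# e.g. on Windows:
#   zluda ollama.exe serve
```

Note the distinction the commenter draws: the override must reach the `ollama.exe serve` process, not the `ollama app.exe` tray wrapper.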

Author
Owner

@usmandilmeer commented on GitHub (May 16, 2024):

> #4355 try add `HSA_OVERRIDE_GFX_VERSION=10.3.0` to environment variables, and start with `zluda ollama.exe serve`, not `zluda ollama app.exe serve`

Should I use this with Ollama's latest version 0.1.38?

Author
Owner

@usmandilmeer commented on GitHub (May 22, 2024):

> #4355 try add `HSA_OVERRIDE_GFX_VERSION=10.3.0` to environment variables, and start with `zluda ollama.exe serve`, not `zluda ollama app.exe serve`

I have added these variables:

![image](https://github.com/ollama/ollama/assets/51738693/d12013e6-b3bc-4fb5-a7b6-ca6436e3da16)

When I run `zluda ollama.exe serve`, the Ollama server starts, but it gives this error when I run `ollama run llama3`:

![image](https://github.com/ollama/ollama/assets/51738693/4cf2a09c-467c-4dbb-8743-962f73ecd3e8)

It only gives this error on v0.1.33 through v0.1.38.

It runs perfectly with Ollama v0.1.32 and the AMD RX 6600 on Windows 10.

Author
Owner

@cangkuai commented on GitHub (May 27, 2024):

I used the exact same configuration method, but I got the following error:

```
rocBLAS error: Cannot read C:\Program Files\AMD\ROCm\5.7\bin\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1032

rocBLAS error: Could not initialize Tensile host:
regex_error(error_backref): The expression contained an invalid back reference.
time=2024-05-27T14:46:28.772+08:00 level=ERROR source=sched.go:344 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409 error:Could not initialize Tensile host:\r\nregex_error(error_backref): The expression contained an invalid back reference."
[GIN] 2024/05/27 - 14:46:28 | 500 |     761.835ms |       127.0.0.1 | POST     "/v1/chat/completions"
```
Author
Owner

@takhattori commented on GitHub (Jun 16, 2024):

> it run perfectly with ollama v0.1.32 with AMD RX 6600 on windows 10

Exactly the same situation here. I also want to know how to solve this issue with the RX 6600.

Current setup:

  • Ryzen 5700X
  • Radeon RX 6600
  • Windows 11 23H2
  • ComfyUI, latest as of 14 June: works well with Ollama 0.1.32
  • Ollama 0.1.32 (0.1.33 and later do not work): works well in the console too

Both ComfyUI and Ollama use cublas.dll (from ZLUDA 3.8), renamed to cublas64_11.dll; with that, GPU usage is efficient.

  • Issue: Ollama 0.1.33 onward doesn't work; it just prints the error message in the console.
    The `ollama create` function doesn't work either: even the `FROM` line isn't recognized in Ollama 0.1.32. Has anyone managed to create a model from a Modelfile and a .gguf?

Reference: github-starred/ollama#2788