[GH-ISSUE #7694] When I run Ollama using AMD 6750GRE 12G I get an error - gfx1031 unsupported by official ROCm on windows #30673

Open
opened 2026-04-22 10:34:02 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @S-yf on GitHub (Nov 15, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7694

What is the issue?

After downloading and installing, this setup requires an additionally downloaded, precompiled rocblas. The downloaded rocblas.dll overwrites the rocblas.dll that comes with the HIP SDK (download it yourself if you don't have it), and the library folder that comes with the SDK (C:\Program Files\AMD\ROCm\6.1\bin\rocblas\library) is replaced the same way; after that it can run normally on the specified graphics card.
The second step is to replace the rocblas.dll file and the library folder with the same names in the Ollama program directory (C:\Users\96133\AppData\Local\Programs\Ollama\lib\ollama); a rough sketch of both steps is shown below.
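In cmd terms, the two replacement steps amount to something like the following sketch. The source paths assume you run it from the folder containing the downloaded rocblas.dll and library files; the destination paths are the defaults from this report, and the exact subfolder holding the library files may differ between HIP SDK and Ollama versions, so treat this as illustrative rather than exact.

```
:: Step 1: back up, then overwrite the HIP SDK copies (paths assumed from this report)
copy "C:\Program Files\AMD\ROCm\6.1\bin\rocblas.dll" "C:\Program Files\AMD\ROCm\6.1\bin\rocblas.dll.bak"
copy /Y rocblas.dll "C:\Program Files\AMD\ROCm\6.1\bin\rocblas.dll"
xcopy /E /Y library "C:\Program Files\AMD\ROCm\6.1\bin\rocblas\library\"

:: Step 2: overwrite the copies bundled with Ollama (exact subfolder may vary by version)
copy /Y rocblas.dll "%LOCALAPPDATA%\Programs\Ollama\lib\ollama\rocblas.dll"
xcopy /E /Y library "%LOCALAPPDATA%\Programs\Ollama\lib\ollama\rocblas\library\"
```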
With that done I can get Ollama to run on the graphics card, but as soon as I use it, I get an error:
Microsoft Windows [Version 10.0.19045.4894]
(c) Microsoft Corporation. All rights reserved.
C:\Users\96133>ollama run qwen2.5-coder:14b

>>> 2
Error: POST predict: Post "http://127.0.0.1:53690/completion": read tcp 127.0.0.1:53698->127.0.0.1:53690: wsarecv: An existing connection was forcibly closed by the remote host.
C:\Users\96133>
The same happens when I change the model:
C:\Users\96133>ollama run qwen2.5-coder
pulling manifest
pulling 60e05f210007... 100% ▕████████████████████████████████████████████████████████▏ 4.7 GB
pulling 66b9ea09bd5b... 100% ▕████████████████████████████████████████████████████████▏ 68 B
pulling e94a8ecb9327... 100% ▕████████████████████████████████████████████████████████▏ 1.6 KB
pulling 832dd9e00a68... 100% ▕████████████████████████████████████████████████████████▏ 11 KB
pulling d9bb33f27869... 100% ▕████████████████████████████████████████████████████████▏ 487 B
verifying sha256 digest
writing manifest
success
>>> 3
Error: POST predict: Post "http://127.0.0.1:52408/completion": read tcp 127.0.0.1:52411->127.0.0.1:52408: wsarecv: An existing connection was forcibly closed by the remote host.
Attached are the logs from the C:\Users\96133\AppData\Local\Ollama folder.
I don't know why this is happening; when I run without replacing the files, it works fine on the CPU. The ROCm build I use is rocm.gfx1031.for.hip.sdk.6.1.2.7z from https://github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU/
[app-1.log](https://github.com/user-attachments/files/17781911/app-1.log)
[server.log](https://github.com/user-attachments/files/17781914/server.log)
Please help me figure out what to do.

OS

Windows

GPU

AMD

CPU

AMD

Ollama version

0.4.1

GiteaMirror added the bug and windows labels 2026-04-22 10:34:02 -05:00
Author
Owner

@S-yf commented on GitHub (Nov 15, 2024):

If I visit http://localhost:11434/, it says Ollama is running.
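Visiting the root URL only confirms that the server process is listening. A slightly more specific check, using the standard Ollama HTTP API (not something suggested in this thread), is to ask the server which build is actually running:

```
:: /api/version reports the running server build; curl ships with Windows 10+
curl http://localhost:11434/api/version
```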

Author
Owner

@likelovewant commented on GitHub (Nov 17, 2024):

Try setting the environment variable HSA_OVERRIDE_GFX_VERSION=10.3.1 (for gfx1031) if you are using the official Ollama builds from https://github.com/ollama/ollama/releases, as gfx1031 is not officially supported. You also need the latest OllamaSetup.exe to get the latest client. See the sketch below for one way to set the variable.
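For reference, a minimal sketch of setting that variable on Windows using built-in cmd tools; this is standard Windows behavior, and nothing Ollama-specific is assumed beyond restarting it so the new environment is picked up:

```
:: Persist the override for the current user (takes effect only in NEW
:: processes, so fully quit the Ollama tray app and restart it afterwards)
setx HSA_OVERRIDE_GFX_VERSION 10.3.1

:: Or set it only for the current session and start the server from there
set HSA_OVERRIDE_GFX_VERSION=10.3.1
ollama serve
```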

Author
Owner

@S-yf commented on GitHub (Nov 18, 2024):

> Try setting the environment variable HSA_OVERRIDE_GFX_VERSION=10.3.1 (for gfx1031) if you are using the official Ollama builds from https://github.com/ollama/ollama/releases, as gfx1031 is not officially supported. You also need the latest OllamaSetup.exe to get the latest client.

OK, I will try adding the environment variable tonight to see if it solves the problem. I downloaded the latest version of Ollama from the official website.

Author
Owner

@dhiltgen commented on GitHub (Nov 19, 2024):

Unfortunately, the Windows ROCm library does not implement the override variable.

The error in the attached logs shows

```
ggml_cuda_compute_forward: RMS_NORM failed
CUDA error: invalid device function
  current device: 0, in function ggml_cuda_compute_forward at ggml-cuda.cu:2403
  err
ggml-cuda.cu:132: CUDA error
```

This matches an error others have seen when trying to build from source (https://github.com/ollama/ollama/issues/6857#issuecomment-2435035968); you might have luck following the comment threads on that issue.
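An "invalid device function" error generally means the loaded GPU kernels were not compiled for the card's actual architecture. As a sanity check, the AMD HIP SDK ships a small hipInfo.exe tool that prints the device properties the runtime sees; the path below assumes the default HIP SDK 6.1 install location, and the tool's presence there is an assumption, not something confirmed in this thread:

```
:: Prints device properties, including the architecture string
:: (gcnArchName; expected to read gfx1031 for an RX 6750 GRE)
"C:\Program Files\AMD\ROCm\6.1\bin\hipInfo.exe"
```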

Author
Owner

@S-yf commented on GitHub (Nov 19, 2024):

> Try setting the environment variable HSA_OVERRIDE_GFX_VERSION=10.3.1 (for gfx1031) if you are using the official Ollama builds from https://github.com/ollama/ollama/releases, as gfx1031 is not officially supported. You also need the latest OllamaSetup.exe to get the latest client.

When I add the variable, the model only runs on the CPU and will not use the GPU.
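For what it's worth, one quick way to confirm whether a loaded model landed on the GPU or fell back to the CPU is the ps subcommand, which lists loaded models with a PROCESSOR column (e.g. "100% GPU" or "100% CPU"); this is a standard Ollama CLI feature of this era, not something from this thread:

```
:: Load a model with a one-shot prompt, then check where it is resident
ollama run qwen2.5-coder "hello"
ollama ps
```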

Author
Owner

@likelovewant commented on GitHub (Nov 19, 2024):

> > Try setting the environment variable HSA_OVERRIDE_GFX_VERSION=10.3.1 (for gfx1031) if you are using the official Ollama builds from https://github.com/ollama/ollama/releases, as gfx1031 is not officially supported. You also need the latest OllamaSetup.exe to get the latest client.
>
> When I add the variable, the model only runs on the CPU and will not use the GPU.

As dhiltgen mentioned above, "the Windows ROCm library does not implement the override variable" in the recent releases.
You can now either follow the guide (https://github.com/likelovewant/ollama-for-amd/wiki) as an alternative solution, or build from source as suggested.

Reference: github-starred/ollama#30673