[GH-ISSUE #15715] Ollama fails to use local AMD HIP/ROCm installation on Windows #72080

Open
opened 2026-05-05 03:25:44 -05:00 by GiteaMirror · 1 comment

Originally created by @3129025464 on GitHub (Apr 20, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15715

I have installed the official AMD HIP/ROCm drivers from AMD on my Windows system, with the libraries located at C:\Program Files\AMD\hip\rocm. However, Ollama doesn't seem to recognize or utilize these libraries directly.
I understand that Ollama requires specific ROCm library files to be placed in its own directory structure (C:\Users\[username]\AppData\Local\Programs\Ollama\lib\ollama\) to enable AMD GPU acceleration, but I'm wondering:
- Why can't Ollama directly use the official AMD ROCm installation that's already present on the system?
- Why does it require copying specific ROCm libraries to Ollama's own directory instead of using the system-wide installation?
When I try to copy the AMD official ROCm libraries from C:\Program Files\AMD\hip\rocm to the corresponding Ollama directory as suggested by various guides, the GPU still isn't being utilized and models run on CPU only.
The expected behavior would be either:

- Ollama detecting and using the existing AMD ROCm installation automatically, or
- Clear documentation on how to properly configure Ollama to work with the system-wide AMD HIP/ROCm installation
Currently, users need to manually download and replace ROCm libraries in Ollama's directory structure, which seems redundant when the official AMD installation already exists. This process also appears to fail in many cases, as evidenced by the GPU not being detected even after manual library replacement.
Could you please clarify the recommended approach for using AMD GPUs with Ollama on Windows, especially regarding the interaction between Ollama's bundled ROCm libraries and the official AMD ROCm/HIP installation?
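For anyone hitting the same "models run on CPU only" symptom, a quick way to confirm whether a loaded model actually landed on the GPU is the `ollama ps` command, whose PROCESSOR column reports the GPU/CPU split; the server log also records which GPU libraries were discovered at startup. A minimal diagnostic sketch (the log path is the default Windows location and may vary per install):

```shell
# Show loaded models; the PROCESSOR column reads e.g. "100% GPU" or "100% CPU".
ollama ps

# The server log records GPU discovery at startup:
#   %LOCALAPPDATA%\Ollama\server.log
# Look for lines mentioning ROCm/HIP library paths or "no compatible GPUs".
```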

GiteaMirror added the feature request label 2026-05-05 03:25:45 -05:00

@e-strelock commented on GitHub (Apr 22, 2026):

It looks like, as a temporary workaround, you can set the environment variable:

```
ROCBLAS_TENSILE_LIBPATH=C:\Program Files\AMD\ROCm\7.1\bin\rocblas\library
```

(obviously, pointing to the correct path for your rocblas library). In my case, this works for most models, except for the qwen35+/qwen35moe+ family — see bug https://github.com/ollama/ollama/issues/14423.
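To make the workaround concrete, a hedged PowerShell sketch (the ROCm 7.1 path matches the comment above; adjust it to your installed version and drive):

```shell
# PowerShell: point rocBLAS at the system ROCm Tensile kernel library,
# then start the Ollama server in the same session so it inherits the variable.
$env:ROCBLAS_TENSILE_LIBPATH = "C:\Program Files\AMD\ROCm\7.1\bin\rocblas\library"
ollama serve

# To persist the variable across sessions instead (affects new processes only):
#   setx ROCBLAS_TENSILE_LIBPATH "C:\Program Files\AMD\ROCm\7.1\bin\rocblas\library"
```

Note that `$env:` changes apply only to the current shell and its children, so the Ollama server must be (re)started from that same session for the variable to take effect.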


Reference: github-starred/ollama#72080