[GH-ISSUE #4070] Ollama run model error #28286

Closed
opened 2026-04-22 06:16:08 -05:00 by GiteaMirror · 6 comments

Originally created by @pandaymx on GitHub (May 1, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4070

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

question

I changed the environment variable because my GPU isn't supported. My GPU is an AMD RX 6750 GRE.
![PixPin_2024-05-01_11-25-24](https://github.com/ollama/ollama/assets/82139672/19627989-f37b-44e7-aeb6-47c02db8b0f3)
![PixPin_2024-05-01_11-26-58](https://github.com/ollama/ollama/assets/82139672/b8191847-1e43-4d15-b9f9-5fb4e3646883)
I tried removing the environment variable, but it had no effect.

OS

Windows

GPU

AMD

CPU

AMD

Ollama version

0.1.32

GiteaMirror added the gpu, amd, bug labels 2026-04-22 06:16:08 -05:00

@pandaymx commented on GitHub (May 1, 2024):

llama runner process no longer running: 3221226505 error:Cannot read C:\Users\panda\AppData\Local\Programs\Ollama\rocm/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1031
This looks like a Linux-style path; should it not be used on Windows?
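A side note on the path question: Windows file APIs accept forward slashes as well as backslashes, so the mixed separators in the message are cosmetic; the actual failure is that no TensileLibrary file exists for the gfx1031 architecture. A quick illustrative check (PowerShell; the paths are examples, not a prescription):

```powershell
# Windows resolves both separator styles to the same file.
Test-Path "C:\Windows\System32\notepad.exe"   # True
Test-Path "C:\Windows/System32/notepad.exe"   # True (forward slashes are accepted)

# So the error is about the missing gfx1031 library, not the separator style:
Test-Path "$env:LOCALAPPDATA\Programs\Ollama\rocm/rocblas/library/TensileLibrary.dat"
```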


@dhiltgen commented on GitHub (May 1, 2024):

Can you share the server logs?


@pandaymx commented on GitHub (May 1, 2024):

> Can you share the server logs?

```
rocBLAS error: Cannot read C:\Users\panda\AppData\Local\Programs\Ollama\rocm/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1031
List of available TensileLibrary Files :
time=2024-05-02T00:53:17.960+08:00 level=ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: 3221226505 error:Cannot read C:\Users\panda\AppData\Local\Programs\Ollama\rocm\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1031"
[GIN] 2024/05/02 - 00:53:17 | 500 | 3.7567541s | 127.0.0.1 | POST "/api/chat"
```


@dhiltgen commented on GitHub (May 1, 2024):

I would have expected a bit more information during startup. Here's what I see on my Radeon test system...

```
time=2024-05-01T09:59:59.889-07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cuda_v11.3 rocm_v5.7 cpu cpu_avx cpu_avx2]"
time=2024-05-01T09:59:59.889-07:00 level=INFO source=gpu.go:96 msg="Detecting GPUs"
time=2024-05-01T09:59:59.926-07:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-01T09:59:59.944-07:00 level=INFO source=amd_windows.go:39 msg="AMD Driver: 50732000"
time=2024-05-01T09:59:59.946-07:00 level=INFO source=amd_windows.go:68 msg="detected hip devices" count=2
time=2024-05-01T09:59:59.946-07:00 level=INFO source=amd_windows.go:88 msg="hip device" id=0 name="AMD Radeon(TM) Graphics" gfx=gfx1036
time=2024-05-01T09:59:59.946-07:00 level=INFO source=amd_windows.go:99 msg="iGPU detected skipping" id=0
time=2024-05-01T09:59:59.946-07:00 level=INFO source=amd_windows.go:88 msg="hip device" id=1 name="AMD Radeon RX 7900 XTX" gfx=gfx1100
time=2024-05-01T09:59:59.946-07:00 level=INFO source=amd_windows.go:109 msg="amdgpu is supported" gpu=1 gpu_type=gfx1100
time=2024-05-01T10:00:00.130-07:00 level=INFO source=amd_windows.go:127 msg="amdgpu memory" gpu=1 total="24560.0 MiB"
time=2024-05-01T10:00:00.130-07:00 level=INFO source=amd_windows.go:128 msg="amdgpu memory" gpu=1 available="24432.0 MiB"
```

It may also be helpful to set `$env:OLLAMA_DEBUG="1"` to get more verbose logging on GPU discovery. The system is supposed to detect what ROCm supports, and bypass GPUs that aren't supported, but this "no such file" error implies that's not working properly.
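The debug flag above can be set in the same PowerShell session that launches the server; a minimal sketch, assuming the default Windows install where `ollama` is on the PATH and logs go to the standard location:

```powershell
# Quit the running tray app first, then start the server with verbose logging.
$env:OLLAMA_DEBUG = "1"
ollama serve
# GPU-discovery details then appear in the console and in
# $env:LOCALAPPDATA\Ollama\server.log
```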


@pandaymx commented on GitHub (May 2, 2024):

@dhiltgen Sorry, I deleted the entire contents of server.log, then restarted and ran the model, so all of its logs are in this file: [question link](https://github.com/pandaymx/issue/blob/main/ollama%20server%20log)


@dhiltgen commented on GitHub (May 2, 2024):

Unfortunately you've hit #3107. You can't use the override on Windows.
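For context, the override in question is presumably the ROCm gfx-version override, which the Windows build of Ollama did not honor at this version (that is what #3107 tracks). On Linux, owners of gfx1031 cards sometimes work around the missing TensileLibrary by masquerading as the supported gfx1030 target; this is a community workaround under those assumptions, not an officially supported configuration:

```shell
# Linux only: tell ROCm to treat a gfx1031 GPU as gfx1030 (v10.3.0),
# an architecture for which Ollama ships rocBLAS Tensile libraries.
HSA_OVERRIDE_GFX_VERSION=10.3.0 ollama serve
```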

Reference: github-starred/ollama#28286