[GH-ISSUE #4962] Ollama for Windows does not recognize amd 7600 gpu #28894

Closed
opened 2026-04-22 07:26:49 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @jeffreysinclair on GitHub (Jun 10, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4962

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I noticed that Ollama is not using the AMD GPU; it prints the following messages in the log:

time=2024-06-10T11:52:39.440-03:00 level=INFO source=amd_windows.go:90 msg="unsupported Radeon iGPU detected skipping" id=1 name="AMD Radeon(TM) Graphics" gfx=gfx1036
time=2024-06-10T11:52:39.496-03:00 level=INFO source=types.go:71 msg="inference compute" id=0 library=rocm compute=gfx1102 driver=0.0 name="AMD Radeon RX 7600" total="8.0

OS

Windows

GPU

AMD

CPU

AMD

Ollama version

Released v0.1.42

GiteaMirror added the needs more info, bug, windows labels 2026-04-22 07:26:49 -05:00
Author
Owner

@dhiltgen commented on GitHub (Jun 13, 2024):

Those log messages look normal, and we did correctly identify the discrete GPU.

What happens when you load a model, ideally one that fits in 8 GB? What do you see in `ollama ps`?
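The check suggested here can be scripted. A minimal sketch: parse the `ollama ps` output and report which device each loaded model is running on. The sample output below is the one posted later in this thread; on a live system you would pipe `ollama ps` into the same `awk` filter.

```shell
#!/bin/sh
# Sample `ollama ps` output (taken from this thread); replace with a live
# `ollama ps` invocation on your own machine.
ps_output='NAME            ID              SIZE    PROCESSOR   UNTIL
phi3:latest     64c1188f2485    3.8 GB  100% GPU    3 minutes from now'

# For each model row, find the "<pct>% <device>" pair in the PROCESSOR
# column and print it next to the model name.
printf '%s\n' "$ps_output" | awk 'NR > 1 {
    for (i = 1; i <= NF; i++)
        if ($i ~ /%$/) { pct = $i; dev = $(i + 1) }
    print $1, "->", pct, dev
}'
# prints: phi3:latest -> 100% GPU
```

"100% GPU" in the PROCESSOR column means the model is fully offloaded to the discrete GPU; a split like "50%/50% CPU/GPU" would indicate partial offload.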

Author
Owner

@jeffreysinclair commented on GitHub (Jun 15, 2024):

ollama ps
NAME            ID              SIZE    PROCESSOR   UNTIL
phi3:latest     64c1188f2485    3.8 GB  100% GPU    3 minutes from now

Author
Owner

@jeffreysinclair commented on GitHub (Jun 15, 2024):

Sorry, I thought the messages were errors.

Author
Owner

@dhiltgen commented on GitHub (Jun 15, 2024):

It looks like it is running on your GPU, so I'll go ahead and close this.


Reference: github-starred/ollama#28894