[GH-ISSUE #9823] OLLAMA can't correctly recognize AMD 6750 GRE 10G display memory #6431

Open
opened 2026-04-12 17:59:29 -05:00 by GiteaMirror · 0 comments

Originally created by @MinutyKnight on GitHub (Mar 17, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9823

OLLAMA mistakenly identified my VRAM as 12 GiB, but my actual VRAM capacity is 10 GiB. Despite this, OLLAMA can still run the deepseek-r1:1.5b model using the CPU. As follows:
level=INFO source=types.go:130 msg="inference compute" id=0 library=rocm variant="" compute=gfx1036 driver=6.3 name="AMD Radeon(TM) Graphics" total="12.2 GiB" available="12.0 GiB"

Additionally, OLLAMA printed some "key not found" warnings, which I don't quite understand. As follows:
level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.key_length default=128
[server.log](https://github.com/user-attachments/files/19291448/server.log)
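As a side note, the `total`/`available` values come straight from the key=value pairs in the "inference compute" log line quoted above. A minimal sketch of pulling those fields out for inspection — the parsing regex is my own; only the log line itself is taken from this report:

```python
import re

# The "inference compute" line exactly as it appears in the log excerpt above.
LOG_LINE = (
    'level=INFO source=types.go:130 msg="inference compute" id=0 '
    'library=rocm variant="" compute=gfx1036 driver=6.3 '
    'name="AMD Radeon(TM) Graphics" total="12.2 GiB" available="12.0 GiB"'
)

def parse_kv(line):
    """Extract key=value pairs; values may be bare tokens or double-quoted."""
    pairs = re.findall(r'(\w+)=("([^"]*)"|\S+)', line)
    # For quoted values keep the inner text, otherwise keep the raw token.
    return {k: (inner if raw.startswith('"') else raw) for k, raw, inner in pairs}

fields = parse_kv(LOG_LINE)
print(fields["compute"])    # gfx1036
print(fields["total"])      # 12.2 GiB
print(fields["available"])  # 12.0 GiB
```

This makes the mismatch easy to see: the reported `total` (12.2 GiB) disagrees with the card's nominal 10 GiB, which could then be cross-checked against what ROCm tooling reports on the same machine.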

Reference: github-starred/ollama#6431