[GH-ISSUE #15879] Running qwen3.6:27b-bf16 on an AMD Ryzen AI Max leads to gibberish #72177

Open
opened 2026-05-05 03:35:45 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @zzador on GitHub (Apr 29, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15879

What is the issue?

I have a GMKTec EVO X2 (AMD Ryzen AI Max) with 128 GB of unified memory. I dedicated 96 GB to VRAM and tried to run "qwen3.6:27b-bf16", but Ollama just produces gibberish (e.g. X82g&"62834DA...). All other quantized models run without problems.

Ollama version was 0.21.2

Relevant log output


OS

Windows

GPU

AMD

CPU

AMD

Ollama version

0.21.2

GiteaMirror added the bug label 2026-05-05 03:35:45 -05:00
