[GH-ISSUE #13190] Vulkan enabled, but only looping #55233

Open
opened 2026-04-29 08:34:18 -05:00 by GiteaMirror · 8 comments

Originally created by @Profex86 on GitHub (Nov 21, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13190

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

The image basically says it all. The Ollama version is the current one.

[Image](https://github.com/user-attachments/assets/33b4be46-8faf-439d-9a07-380500830f64)

Relevant log output


OS

Windows

GPU

AMD

CPU

AMD

Ollama version

0.13.0

GiteaMirror added the vulkan and bug labels 2026-04-29 08:34:18 -05:00

@rick-github commented on GitHub (Nov 21, 2025):

There is more in the [log](https://docs.ollama.com/troubleshooting).

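A minimal sketch for pulling the tail of the Ollama server log on Windows, assuming the default `%LOCALAPPDATA%\Ollama\server.log` location described in the linked troubleshooting page (adjust the path if your install differs):

```python
# Print the most recent lines of the Ollama server log on Windows.
# Assumes the default log location; backend/Vulkan errors usually
# show up near the end of this file.
import os
from pathlib import Path

log_path = Path(os.environ["LOCALAPPDATA"]) / "Ollama" / "server.log"

with log_path.open(encoding="utf-8", errors="replace") as f:
    lines = f.readlines()

print("".join(lines[-50:]))  # last 50 lines
```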

@Profex86 commented on GitHub (Nov 21, 2025):

[ollama_backend.log](https://github.com/user-attachments/files/23674209/ollama_backend.log)

[Vulkaninfo.log](https://github.com/user-attachments/files/23674295/Vulkaninfo.log)

[Image](https://github.com/user-attachments/assets/4327bd0a-b171-4f64-b000-2fb78e3f345a)

@rick-github commented on GitHub (Nov 21, 2025):

https://github.com/ollama/ollama/issues/12600#issuecomment-3556205865


@Profex86 commented on GitHub (Nov 21, 2025):

> [#12600 (comment)](https://github.com/ollama/ollama/issues/12600#issuecomment-3556205865)

On Nobara (Linux) the exact same Ollama version runs perfectly; I can fully use the total 128 GB of VRAM without any issues.
But that is not my problem: on Windows my models answer with symbols or loop the same words.
On Windows it's not about VRAM at all: no model runs correctly, whether it's 1-bit or 120-bit. Something is fundamentally broken, almost as if my prompt isn't being correctly converted into tokens.


@Profex86 commented on GitHub (Nov 21, 2025):

Temporary one-time fix (Windows, AMD MI50): I was able to get Ollama running by replacing the Vulkan backend files with the ones shipped in LM Studio.

I copied ggml-vulkan.dll and vulkan-1.dll from LM Studio's install folder into Ollama's program directory.

After that, models answered correctly. But after restarting Ollama, the same thing happens again and the answers are broken.

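A minimal sketch of that copy step. Both install directories below are assumptions for a default setup; verify them for your machine and keep backups of the original files:

```python
# Copy the Vulkan backend DLLs shipped with LM Studio into Ollama's
# program directory, backing up Ollama's originals first.
# The two directories are ASSUMED default install paths -- adjust as needed.
import shutil
from pathlib import Path

lmstudio_dir = Path(r"C:\Program Files\LM Studio")                       # assumed path
ollama_dir = Path.home() / "AppData" / "Local" / "Programs" / "Ollama"   # assumed path

for name in ("ggml-vulkan.dll", "vulkan-1.dll"):
    src = next(lmstudio_dir.rglob(name), None)  # locate the DLL inside LM Studio
    if src is None:
        raise FileNotFoundError(f"{name} not found under {lmstudio_dir}")
    dst = ollama_dir / name
    if dst.exists():
        shutil.copy2(dst, ollama_dir / (name + ".bak"))  # back up Ollama's original DLL
    shutil.copy2(src, dst)
    print(f"replaced {dst} with {src}")
```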

@dhiltgen commented on GitHub (Nov 21, 2025):

PR #12992 contains 40 updates to Vulkan from upstream, so it's possible the drift vs. LM Studio might be resolved once that's merged.


@dhiltgen commented on GitHub (Dec 5, 2025):

Please give the 0.13.2 RC a try and see if you get better results
https://github.com/ollama/ollama/releases

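One way to confirm which build is actually serving requests before re-testing is to query the local API's version endpoint; a small sketch, assuming the default localhost:11434 address:

```python
# Query the running Ollama server for its version via the /api/version
# endpoint (default host/port assumed).
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/version") as resp:
    info = json.load(resp)

print(info.get("version"))  # expect 0.13.2 once the RC is installed
```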

@Profex86 commented on GitHub (Dec 29, 2025):

It worked perfectly right away! I'm currently working out how to get the best possible performance on Windows, and I'll share my findings here soon. Best regards

Reference: github-starred/ollama#55233