[GH-ISSUE #14070] Qwen3 Next had an important bug fix in llama.cpp #34951

Closed
opened 2026-04-22 19:01:26 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @inforithmics on GitHub (Feb 4, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14070

What is the issue?

In the pull request below it was mentioned that the Ollama code needs to be adapted.

https://github.com/ggml-org/llama.cpp/pull/19324

Relevant log output


OS

Windows

GPU

AMD

CPU

AMD

Ollama version

0.15.5-rc.2

GiteaMirror added the bug label 2026-04-22 19:01:26 -05:00
Author
Owner

@jmorganca commented on GitHub (Feb 4, 2026):

Thanks for the issue! This should be fixed on main and in 0.15.5-rc3

<!-- gh-comment-id:3850258914 -->

Reference: github-starred/ollama#34951