[GH-ISSUE #8848] Why are models other than Llama so poorly quantized? #52246

Closed
opened 2026-04-28 22:38:43 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @SoloStark on GitHub (Feb 5, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8848

What is the issue?

Why are models other than Llama so poorly quantized? They mix up languages regardless of which model it is (Command R7, Elm, ...). They only run well on the first prompt, and the same happens in Open WebUI; only the models from big companies work well.

OS

Linux

GPU

No response

CPU

AMD

Ollama version

0.5.6

GiteaMirror added the bug label 2026-04-28 22:38:43 -05:00
Reference: github-starred/ollama#52246