[GH-ISSUE #1470] Will it be possible to run the new mixtral model quantized? #26552

Closed · opened 2026-04-22 02:54:02 -05:00 by GiteaMirror · 2 comments

Originally created by @ManuXD32 on GitHub (Dec 11, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1470

Originally assigned to: @jmorganca on GitHub.

Hello, I know llama.cpp doesn't support it yet, but this PR does: https://github.com/ggerganov/llama.cpp/pull/4406
Is there a way to use it in Ollama?
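
For reference, the usual way to try a not-yet-bundled quantized model in Ollama is to import a local GGUF file through a Modelfile. A minimal sketch, assuming a Mixtral GGUF quantized with the patched llama.cpp sits in the current directory (the filename and the `mixtral-local` model name are hypothetical; this still requires Ollama's bundled llama.cpp to understand the Mixtral architecture):

```sh
# Write a one-line Modelfile pointing at a local GGUF
# (hypothetical filename; use whatever the patched llama.cpp produced).
cat > Modelfile <<'EOF'
FROM ./mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf
EOF

# Register the local weights with Ollama, then run them.
ollama create mixtral-local -f Modelfile
ollama run mixtral-local
```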

@r7l commented on GitHub (Dec 13, 2023):

The PR has been merged ([release b1629](https://github.com/ggerganov/llama.cpp/releases/tag/b1629)). This is a huge release. Ollama probably needs to update its vendored llama.cpp to get Mixtral support.
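
Once Ollama ships the updated llama.cpp, quantized Mixtral builds should be runnable straight from the model library. A sketch, assuming a `q4_0` tag gets published (the exact tag name is an assumption; check the available tags on ollama.com):

```sh
# Pull and run a quantized Mixtral build from the Ollama library
# (tag name is an assumption, not a confirmed published tag).
ollama run mixtral:8x7b-instruct-v0.1-q4_0
```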


@ManuXD32 commented on GitHub (Dec 14, 2023):

Okay, thanks!
