[GH-ISSUE #1477] Mixtral 8X7B #26558

Closed
opened 2026-04-22 02:54:27 -05:00 by GiteaMirror · 1 comment

Originally created by @pdavis68 on GitHub (Dec 12, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1477

I have read that Mixtral 8X7B requires a PR to llama.cpp (https://github.com/ggerganov/llama.cpp/pull/4406), according to TheBloke's model card (https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF).

Are there any plans yet to incorporate these changes, and is there a timeline? Mixtral 8X7B looks very impressive (it appears to outperform LLaMA 2 70B on most benchmarks), and I'd love to get it into Ollama!

Here's Mistral's page on it: https://mistral.ai/news/mixtral-of-experts/
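
For anyone wanting to try it once upstream support lands, the standard Ollama GGUF import flow should apply. A minimal sketch, assuming a llama.cpp build with PR #4406 merged; the quantization filename is illustrative, based on TheBloke's repo layout:

```shell
# Sketch only: this works once Ollama's bundled llama.cpp supports
# Mixtral's MoE architecture. The .gguf filename below is illustrative.
echo 'FROM ./mixtral-8x7b-v0.1.Q4_K_M.gguf' > Modelfile
ollama create mixtral -f Modelfile   # register the local GGUF with Ollama
ollama run mixtral                   # chat with the imported model
```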


@tmc commented on GitHub (Dec 12, 2023):

Until upstream (llama.cpp) support lands, this isn't possible.

Reference: github-starred/ollama#26558