[GH-ISSUE #15802] minimax-m2.1 #56582

Open
opened 2026-04-29 11:03:38 -05:00 by GiteaMirror · 0 comments

Originally created by @Notbici on GitHub (Apr 24, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15802

I'm finding that more and more of these flagship models are published only as cloud models, but I'd like the option to run them on the machines I already have.

How can I pull Ollama cloud models to run locally? Are they gatekept, or do they use a different architecture from what Ollama already runs on?

Model I wanted to try: `minimax-m2.1`
Machine: 4x RTX 6000 Pro Blackwell
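
For reference, this is roughly what I've been trying. The `:cloud` tag here is my guess based on how the library page lists the model, so I may have the exact tags wrong:

```shell
# What I'd like to work: pull the weights for local inference
ollama pull minimax-m2.1

# What the library page appears to offer instead: a cloud-only variant,
# something like (tag name is a guess):
ollama run minimax-m2.1:cloud
```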

Thanks!


Reference: github-starred/ollama#56582