[GH-ISSUE #13671] Please consider supporting MiroThinker-v1.5 #8981

Closed
opened 2026-04-12 21:48:52 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @asmith26 on GitHub (Jan 10, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13671

- https://github.com/MiroMindAI/MiroThinker
- https://huggingface.co/collections/miromind-ai/mirothinker-v15

> MiroThinker is an open-source search agent model, built for tool-augmented reasoning and real-world information seeking, aiming to match the deep research experience of OpenAI Deep Research and Gemini Deep Research.

Thanks!
GiteaMirror added the model label 2026-04-12 21:48:52 -05:00

@rick-github commented on GitHub (Jan 11, 2026):

This is a fine-tuned qwen3 model and so can be [imported](https://docs.ollama.com/import#importing-a-model).

https://huggingface.co/models?search=Mirothinker%20v1.5%20gguf

```console
$ ollama run hf.co/mradermacher/MiroThinker-v1.5-30B-GGUF:Q4_K_M hello
pulling manifest
pulling 6d1c5507a2ca: 100% ▕███████████████████████████████████▏  18 GB
pulling 2d54db2b9bb2: 100% ▕███████████████████████████████████▏ 1.5 KB
pulling 4a6ce91d86a8: 100% ▕███████████████████████████████████▏   99 B
pulling 0db680db62ea: 100% ▕███████████████████████████████████▏  561 B
verifying sha256 digest
writing manifest
success
Thinking...
The user said "hello" which is a simple greeting. I should respond warmly and appropriately. Since the user hasn't asked any specific question yet, I could:
1. Acknowledge their greeting
2. Offer assistance
3. Keep it friendly and open-ended

I should keep my response simple, friendly, and inviting to encourage further conversation.
...done thinking.

Hello! 👋 How are you today? I'm here to help if you have any questions or need assistance with anything.
```
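Once the GGUF model has been pulled as above, it can also be driven programmatically through Ollama's local REST API. A minimal sketch, assuming the default server on `localhost:11434` and the model tag from the command above (only the standard library is used):

```python
import json
import urllib.request

# Model tag from the `ollama run` command above.
MODEL = "hf.co/mradermacher/MiroThinker-v1.5-30B-GGUF:Q4_K_M"
OLLAMA_URL = "http://localhost:11434/api/chat"  # default local Ollama endpoint

def build_chat_request(prompt: str) -> dict:
    """Build a non-streaming /api/chat request body for the imported model."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def chat(prompt: str) -> str:
    """Send the prompt to a locally running Ollama server and return the reply."""
    body = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

if __name__ == "__main__":
    print(chat("hello"))
```

With `stream: false` the server returns a single JSON object whose reply text sits under `message.content`; omitting it yields a stream of newline-delimited JSON chunks instead.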

@asmith26 commented on GitHub (Jan 11, 2026):

Many thanks for your help @rick-github, that's really helpful to know.

Would a cloud solution also be an option? I have limited hardware.


@rick-github commented on GitHub (Jan 11, 2026):

Ollama Cloud has a free tier and currently offers the models listed below. If you specifically want to run MiroThinker in the cloud, you will have to rent a cloud GPU as Ollama Cloud doesn't offer the ability to run arbitrary models at the moment.

| name | context length | quant | capabilities |
| -- | -- | -- | -- |
| [cogito-2.1:671b](https://ollama.com/library/cogito-2.1:671b) | 163840 | FP8 | completion,thinking,tools |
| [deepseek-v3.1:671b](https://ollama.com/library/deepseek-v3.1:671b) | 163840 | FP8 | completion,thinking,tools |
| [deepseek-v3.2](https://ollama.com/library/deepseek-v3.2) | 163840 | FP8 | completion,thinking,tools |
| [devstral-2:123b](https://ollama.com/library/devstral-2:123b) | 262144 | FP8 | completion,tools |
| [devstral-small-2:24b](https://ollama.com/library/devstral-small-2:24b) | 262144 | FP8 | completion,tools,vision |
| [gemini-3-flash-preview](https://ollama.com/library/gemini-3-flash-preview) | 1048576 | | completion,thinking,tools |
| [gemini-3-pro-preview](https://ollama.com/library/gemini-3-pro-preview) | 1048576 | | completion,thinking,tools,vision |
| [gemma3:12b](https://ollama.com/library/gemma3:12b) | 131072 | BF16 | completion,vision |
| [gemma3:27b](https://ollama.com/library/gemma3:27b) | 131072 | BF16 | completion,vision |
| [gemma3:4b](https://ollama.com/library/gemma3:4b) | 131072 | BF16 | completion,vision |
| [glm-4.6](https://ollama.com/library/glm-4.6) | 202752 | FP8 | completion,thinking,tools |
| [glm-4.7](https://ollama.com/library/glm-4.7) | 202752 | FP8 | completion,thinking,tools |
| [gpt-oss:120b](https://ollama.com/library/gpt-oss:120b) | 131072 | MXFP4 | completion,thinking,tools |
| [gpt-oss:20b](https://ollama.com/library/gpt-oss:20b) | 131072 | MXFP4 | completion,thinking,tools |
| [kimi-k2:1t](https://ollama.com/library/kimi-k2:1t) | 262144 | FP8 | completion,tools |
| [kimi-k2-thinking](https://ollama.com/library/kimi-k2-thinking) | 262144 | INT4 | completion,thinking,tools |
| [minimax-m2.1](https://ollama.com/library/minimax-m2.1) | 204800 | FP8 | completion,thinking,tools |
| [minimax-m2](https://ollama.com/library/minimax-m2) | 204800 | | completion,tools |
| [ministral-3:14b](https://ollama.com/library/ministral-3:14b) | 262144 | FP8 | completion,tools,vision |
| [ministral-3:3b](https://ollama.com/library/ministral-3:3b) | 262144 | FP8 | completion,tools,vision |
| [ministral-3:8b](https://ollama.com/library/ministral-3:8b) | 262144 | FP8 | completion,tools,vision |
| [mistral-large-3:675b](https://ollama.com/library/mistral-large-3:675b) | 262144 | FP8 | completion,tools,vision |
| [nemotron-3-nano:30b](https://ollama.com/library/nemotron-3-nano:30b) | 1048576 | FP8 | completion,thinking,tools |
| [qwen3-coder:480b](https://ollama.com/library/qwen3-coder:480b) | 262144 | FP8 | completion,tools |
| [qwen3-next:80b](https://ollama.com/library/qwen3-next:80b) | 262144 | FP8 | completion,thinking,tools |
| [qwen3-vl:235b](https://ollama.com/library/qwen3-vl:235b) | 262144 | BF16 | completion,thinking,tools,vision |
| [qwen3-vl:235b-instruct](https://ollama.com/library/qwen3-vl:235b-instruct) | 262144 | FP8 | completion,tools,vision |
| [rnj-1:8b](https://ollama.com/library/rnj-1:8b) | 32768 | FP16 | completion,tools |
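A quick way to answer "which cloud models could stand in for a thinking-and-tools agent like MiroThinker" is to filter the table above by capability. A minimal sketch, with a few rows transcribed from the table (the subset and the `thinking`/`tools` filter are illustrative, not an official API):

```python
# A few rows of the Ollama Cloud model table above: name -> capability set.
CLOUD_MODELS = {
    "cogito-2.1:671b": {"completion", "thinking", "tools"},
    "devstral-2:123b": {"completion", "tools"},
    "gemma3:27b": {"completion", "vision"},
    "gpt-oss:120b": {"completion", "thinking", "tools"},
    "kimi-k2-thinking": {"completion", "thinking", "tools"},
    "qwen3-vl:235b": {"completion", "thinking", "tools", "vision"},
}

def models_with(*caps: str) -> list[str]:
    """Return model names offering every requested capability."""
    wanted = set(caps)
    return sorted(name for name, have in CLOUD_MODELS.items() if wanted <= have)

# Candidates for tool-augmented reasoning: models with both thinking and tools.
print(models_with("thinking", "tools"))
```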
Reference: github-starred/ollama#8981