[GH-ISSUE #14065] Please support the newly released openbmb/minicpm-o 4.5:q4_K_M model #55701

Closed
opened 2026-04-29 09:36:23 -05:00 by GiteaMirror · 9 comments

Originally created by @lmzdjj1 on GitHub (Feb 4, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14065

Downloaded it today. It shows that the model is not supported.

GiteaMirror added the feature request label 2026-04-29 09:36:23 -05:00

@maternion commented on GitHub (Feb 4, 2026):

This is an omni model, which requires support for STT and TTS, I suppose. Ollama is focusing on a new experimental engine that uses MLX, and if they add audio support it will be done through that, so you will have to wait for a while.


@denda188 commented on GitHub (Feb 5, 2026):

+1, please provide support as soon as possible.


@rick-github commented on GitHub (Feb 5, 2026):

This model is not supported in official ollama but there's a fork:

https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/ollama/minicpm-o4_5_ollama.md
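For reference, the workflow in that guide boils down to building the patched fork and then creating a model from the GGUF weights with a Modelfile. The sketch below is only an outline under those assumptions; the fork URL, build command, and file names are placeholders, so follow the guide itself for the exact steps:

```sh
# Rough outline only -- the repository URL, build step, and GGUF file name below
# are placeholders; the linked cookbook page has the authoritative instructions.
git clone https://github.com/OpenSQZ/ollama.git   # assumed fork location
cd ollama
go build .                                        # standard CPU-only ollama build

# Point a minimal Modelfile at the downloaded quantized weights
# (the guide may also require a separate vision/audio projector file).
cat > Modelfile <<'EOF'
FROM ./MiniCPM-o-4_5-Q4_K_M.gguf
EOF

./ollama create minicpm-o4.5 -f Modelfile
./ollama run minicpm-o4.5
```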


@Hello-World-Traveler commented on GitHub (Feb 16, 2026):

`500 Internal Server Error: llama runner process has terminated: GGML_ASSERT(false && "unsupported minicpmv version") failed`

`ollama pull openbmb/minicpm-o4.5:latest`

Just pulled this model and it doesn't run. I expected it to offload to CPU, but after using 7 GB of VRAM it errors out.

Using the ollama Docker image.
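The same failure can be reproduced outside the CLI by calling the HTTP API directly. A minimal sketch, assuming the default port and the tag pulled above (the 500 comes back because the runner aborts, presumably while loading the unsupported minicpmv projector version):

```sh
# Assumes ollama is listening on localhost:11434 (the default) and that the
# tag from the comment above was pulled; the request returns the 500 error
# shown above rather than a completion.
curl http://localhost:11434/api/generate -d '{
  "model": "openbmb/minicpm-o4.5:latest",
  "prompt": "hello",
  "stream": false
}'
```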


@rick-github commented on GitHub (Feb 16, 2026):

This model is not supported in official ollama but there's a fork.


@Hello-World-Traveler commented on GitHub (Feb 16, 2026):

I think there needs to be a message/banner to alert the user that it's not yet supported by ollama.


@rick-github commented on GitHub (Feb 16, 2026):

Only models in the main library are guaranteed to be supported by ollama. Models uploaded by users may or may not be supported.


@Hello-World-Traveler commented on GitHub (Feb 16, 2026):

That's confusing. I thought it was from the main library.

Moving forward, do you know how we can get this to work when it's already downloaded and installed? The link you posted is about importing the files and creating the model card.


@rick-github commented on GitHub (Feb 17, 2026):

The link contains instructions on building a modified ollama that supports the architecture of this model.

Reference: github-starred/ollama#55701