[GH-ISSUE #14744] Title: Error running openbmb/minicpm-o4.5 — GGML_ASSERT unsupported minicpmv version #9532

Closed
opened 2026-04-12 22:27:20 -05:00 by GiteaMirror · 2 comments

Originally created by @Hetolos on GitHub (Mar 9, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14744

Environment

  • Ollama version: 0.17.7
  • OS: Windows 11 (Windows/10.0.22631)
  • GPU: NVIDIA GeForce RTX 4070 Laptop GPU, driver 13.1, VRAM 8.0 GiB
  • Models path: default (note: earlier logs report the models path C:\Users\admin\.ollama\models as not accessible, but the model was pulled successfully)

What I did

  • ollama run openbmb/minicpm-o4.5

What happened

  • Error shown:
    500 Internal Server Error: llama runner process has terminated: GGML_ASSERT(false && "unsupported minicpmv version") failed

Relevant server logs (excerpt):

  • Server discovery and GPU match (from the server log): inference compute ... Name=CUDA0 description="NVIDIA GeForce RTX 4070 Laptop GPU" libdirs=ollama,cuda_v13 driver=13.1 pci_id=0000:01:00.0 type=discrete total="8.0 GiB" available="5.5 GiB"
  • UI error: 500 Internal Server Error: llama runner process has terminated: GGML_ASSERT(false && "unsupported minicpmv version") failed (a sketch below shows how to confirm the model's declared minicpmv version)
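
The assert comes from the runner's multimodal projector loader, which dispatches on a version number stored in the projector's GGUF metadata (typically the clip.minicpmv_version key) and aborts when it sees a value the build does not recognize. To confirm which version the pulled model declares, you can dump the GGUF metadata. A minimal PowerShell sketch, assuming the gguf Python package (from llama.cpp's gguf-py) is installed; the blob filename is a placeholder, not the real hash:

    # Install the GGUF inspection tool that ships with llama.cpp's gguf-py
    pip install gguf

    # Dump metadata from the projector blob and filter for the version key;
    # PROJECTOR_HASH is a placeholder for the actual blob hash on disk
    gguf-dump C:\Users\admin\.ollama\models\blobs\sha256-PROJECTOR_HASH | findstr minicpmv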

What I tried

  • Confirmed the GPU is visible to Ollama; also tried upgrading Ollama.

Request

  • Please advise whether the current Ollama release supports the minicpmv version used by openbmb/minicpm-o4.5, or whether there is a recommended Ollama build or flag to enable that support.
    Thanks.

@rick-github commented on GitHub (Mar 9, 2026):

minicpm-o4.5 is not supported in official ollama yet. The model card links to a document that explains how to build a version of ollama that supports the model.
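
For reference, building a fork of ollama from source typically follows upstream ollama's developer docs. A minimal sketch, assuming Go is installed; the repository URL is a placeholder (use the one linked from the model card), and GPU-accelerated builds need the additional cmake steps described in ollama's development documentation:

    # Placeholder URL: substitute the fork linked from the model card
    git clone https://github.com/example/ollama-minicpm.git
    cd ollama-minicpm

    # Build the CLI/server binary (ollama.exe on Windows); some versions also
    # require building the GGML backends first with cmake for GPU support
    go build .

    # Run the custom build, then use ollama run against it as usual
    .\ollama.exe serve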


@Hetolos commented on GitHub (Mar 10, 2026):

> minicpm-o4.5 is not supported in official ollama yet. The model card links to a document that explains how to build a version of ollama that supports the model.

Okay, thanks.

Reference: github-starred/ollama#9532