[PR #10637] chore: update mllama to use ollama engine #13305

Closed
opened 2026-04-13 00:23:21 -05:00 by GiteaMirror · 0 comments
Owner

Original Pull Request: https://github.com/ollama/ollama/pull/10637

State: closed
Merged: Yes


this change drops the mllama patch to llama.cpp and routes those model requests to the ollama engine.

this also fixes an issue in the implementation when the processed image contains fewer than 4 tiles. since the model is trained with 4 tiles (i.e., its tile positional embeddings are precomputed for 4 tiles), the input must be padded to 4 tiles. previously this padding was implicit, but the new implementation requires explicit padding.
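A minimal sketch of the kind of explicit padding described above. `padTiles` and `maxTiles` are hypothetical names for illustration, not the actual functions in this PR; the assumption is that each tile is a flat `[]float32` and that short inputs are padded with zero tiles so the shape matches the model's precomputed tile positional embeddings.

```go
package main

import "fmt"

// maxTiles is the number of tiles the model was trained with; its
// precomputed tile positional embeddings assume exactly this many.
const maxTiles = 4

// padTiles (hypothetical helper) appends all-zero tiles of the same
// length until the slice holds maxTiles tiles, and truncates any extras.
func padTiles(tiles [][]float32) [][]float32 {
	if len(tiles) >= maxTiles {
		return tiles[:maxTiles]
	}
	tileLen := 0
	if len(tiles) > 0 {
		tileLen = len(tiles[0])
	}
	for len(tiles) < maxTiles {
		tiles = append(tiles, make([]float32, tileLen))
	}
	return tiles
}

func main() {
	// e.g. a small image whose preprocessing produced only 2 tiles
	tiles := [][]float32{{1, 2, 3}, {4, 5, 6}}
	padded := padTiles(tiles)
	fmt.Println(len(padded)) // 4
	fmt.Println(padded[2])   // [0 0 0]
}
```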

GiteaMirror added the pull-request label 2026-04-13 00:23:21 -05:00
Reference: github-starred/ollama#13305