[GH-ISSUE #10519] Gemma3 Vision conversion from gguf #53434

Closed
opened 2026-04-29 03:09:21 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @mmathew23 on GitHub (May 1, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10519

What is the issue?

In general, when people fine-tune vision models they can use the llama.cpp script to convert to gguf files, create a Modelfile, and then create an ollama model. This is an extremely common workflow, especially for custom quants. At the moment there is no way to do this programmatically for gemma3 vision models, because they use a different format that requires a single gguf. @pdevine mentioned stitching together the ggufs to follow the ollama format. I'm able to do this, but `ollama create` fails; a fix is suggested in #10162. A common question we get at Unsloth is how to run these vision fine-tunes with ollama, and we'd love to be able to help with that. Accepting #10162 would unblock us. Alternatively, could ollama provide some guidance or thoughts on the plan for supporting vision models going forward?
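
For reference, a minimal sketch of the workflow described above, assuming a local Hugging Face checkpoint and llama.cpp's `convert_hf_to_gguf.py`; the directory, file, and model names (`./gemma3-finetune`, `gemma3-finetune.gguf`, `my-gemma3`) are placeholders, not part of the original report. This is the path that works for text-only models today:

```shell
# 1. Convert the fine-tuned Hugging Face checkpoint to a GGUF file
#    (convert_hf_to_gguf.py ships with llama.cpp; paths are placeholders).
python convert_hf_to_gguf.py ./gemma3-finetune --outfile gemma3-finetune.gguf --outtype f16

# 2. Write a Modelfile that points at the GGUF
cat > Modelfile <<'EOF'
FROM ./gemma3-finetune.gguf
EOF

# 3. Create the ollama model from the Modelfile
ollama create my-gemma3 -f Modelfile
```

The sticking point described above is that for gemma3 vision this path yields multiple GGUFs (the vision/projector tensors land in a separate file), while ollama's gemma3 format expects a single gguf, and stitching them together by hand still fails at `ollama create`.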

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the question, bug labels 2026-04-29 03:09:30 -05:00

@rick-github commented on GitHub (May 21, 2025):

Fixed by #10722

Reference: github-starred/ollama#53434