[GH-ISSUE #9727] Please support vision models #52870

Open
opened 2026-04-29 01:14:18 -05:00 by GiteaMirror · 22 comments

Originally created by @Jigit-ship-it on GitHub (Mar 13, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9727

I would like to see these models in Ollama:

CohereForAI/aya-vision-8b
CohereForAI/aya-vision-32b
microsoft/Phi-4-multimodal-instruct
Qwen2.5-VL-7B

Also, Ollama 0.6.0 doesn't support these models' architectures when I try to import them from safetensors.
Please support them.

Thanks!
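
For context, the safetensors import path that fails is the Modelfile flow where `FROM` points at a local directory of Hugging Face weights. A minimal sketch (the checkout path is a placeholder); with an unsupported architecture, the `ollama create` step is where the error appears:

```
# Write a Modelfile pointing at a local safetensors checkout (placeholder path)
echo "FROM ./Qwen2.5-VL-7B-Instruct" > Modelfile

# Convert and import; this is the step that errors out when Ollama
# has no converter for the model's architecture
ollama create qwen2.5-vl:7b -f Modelfile
```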

GiteaMirror added the model label 2026-04-29 01:14:18 -05:00

@YarvixPA commented on GitHub (Mar 13, 2025):

Also... 6 days ago, Jina AI published an embedding model for text and images on Hugging Face. Implementing it in Ollama would be useful for image embedding.

https://huggingface.co/jinaai/jina-clip-v2

Please @ollama @jmorganca consider implementing more vision models as soon as possible and providing multiple quantization levels. Gemma 3 only offers Q4 or FP16... In my case I'm interested in Q6_K_L; perhaps other users are interested in other levels. I tried using Bartowski's quantized version, but it doesn't have the vision capability (or at least it didn't work for me).
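
For comparison, text embeddings already work through Ollama's embed endpoint; an image-capable model like jina-clip-v2 would presumably be served through the same API once supported. A minimal sketch with an existing text embedding model (image input would still need new plumbing):

```
# Text embedding via the existing endpoint (nomic-embed-text is a model
# already in the Ollama library; jina-clip-v2 is not available yet)
curl http://localhost:11434/api/embed -d '{
  "model": "nomic-embed-text",
  "input": "a photo of a cat"
}'
```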


@rjmalagon commented on GitHub (Mar 16, 2025):

@YarvixPA Hi, I am building Gemma 3 at different quantization levels (mainly Q8 and BF16) with vision. While I have the tensor files at hand, which size do you need at Q6_K_L? I will upload it to my repo for you.


@rjmalagon commented on GitHub (Mar 16, 2025):

I sympathize with the Ollama team. I have read comments around the Ollama and llama.cpp source code about the complex, hard job of integrating the various clip/vision components, because they work very differently from one another.

For example, did you know Gemma 3 only admits one image? There is good news about this: the Ollama team is working on how to tackle this correctly, in a way that makes it easy for the end user.

The devs talk about it in the code (and its comments), and we need someone to write a blog post or a friendly timeline about these advanced features in progress.

> Also... 6 days ago, Jina AI published an embedding model for text and images on Hugging Face. Implementing it in Ollama would be useful for image embedding.
>
> https://huggingface.co/jinaai/jina-clip-v2
>
> Please @ollama @jmorganca consider implementing more vision models as soon as possible and providing multiple quantization levels. Gemma 3 only offers Q4 or FP16... In my case I'm interested in Q6_K_L; perhaps other users are interested in other levels. I tried using Bartowski's quantized version, but it doesn't have the vision capability (or at least it didn't work for me).
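
For reference, the single-image limitation above is about Ollama's standard multimodal input path, where images travel base64-encoded inside a chat message. A minimal sketch (model tag and image file are placeholders):

```
# Send one base64-encoded image along with a chat message
IMG=$(base64 -w0 photo.jpg)   # photo.jpg is a placeholder
curl http://localhost:11434/api/chat -d '{
  "model": "gemma3:12b",
  "messages": [
    { "role": "user", "content": "Describe this image.", "images": ["'"$IMG"'"] }
  ]
}'
```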


@YarvixPA commented on GitHub (Mar 16, 2025):

Q6_K_L would be good. Is it possible to do several levels, just so that more users have options to choose from?


@YarvixPA commented on GitHub (Mar 16, 2025):

Yes. I have read that implementing the clip/vision components is difficult and takes a long time, and I understand 100% that it is not something done overnight... every day more models come out that need support, and it is hard because there are so many.

But despite that, I think we are already entering local multimodal execution... and reasoning as well.

I'm not saying multimodal is new, but we are seeing more and more multimodal models.

By the way, it may be possible to do something with Aya Vision: it uses SigLIP 2 from Google, and maybe SigLIP has already been implemented (I will look for it and comment if I find something).


@rjmalagon commented on GitHub (Mar 16, 2025):

> Q6_K_L would be good. Is it possible to do several levels, just so that more users have options to choose from?

I am resource-constrained: there are 4 Gemma 3 sizes and around 10 quantization levels per size.
I am glad to help, but I have to prioritize uploading what is needed; if someone needs another level in the future, maybe I can upload that too.


@YarvixPA commented on GitHub (Mar 16, 2025):

12B Q6_K_L

If I can help with the process, tell me how to quantize it so I can upload it with my own bandwidth too and give people more options.

Just let me know.


@rjmalagon commented on GitHub (Mar 16, 2025):

Thanks. I use llama-quantize, which only supports Q6_K. I am uploading to rjmalagon/gemma-3:12b-it-q6_K; it will be online in a few minutes. You can read more about my Gemma 3 builds here: https://ollama.com/rjmalagon/gemma-3

Later today I will post how I convert and quantize this model, because multimodal models need additional steps.


@rjmalagon commented on GitHub (Mar 16, 2025):

It's ready at https://ollama.com/rjmalagon/gemma-3:12b-it-q6_K ; it has F32 on the projector side.

I suspect the chat template is not complete (I just copied it from the Gemma 3 on ollama.com).
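
One way to check the template, if it helps: dump the official build's Modelfile and diff it against the custom one (a quick sketch; `Modelfile-gemma3-q` is the custom file used in the steps below):

```
# Dump the official gemma3 Modelfile (template, parameters) for comparison
ollama show --modelfile gemma3:12b > Modelfile.official
diff Modelfile.official Modelfile-gemma3-q
```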


@rjmalagon commented on GitHub (Mar 16, 2025):

Steps to reproduce:

Download the model with `git clone git@hf.co:google/gemma-3-12b-it` (you need an HF account with a registered SSH key because it is a "gated" model; first request access at https://huggingface.co/google/gemma-3-12b-it ).

From llama.cpp at https://github.com/ggml-org/llama.cpp :

You will need `llama.cpp/convert_hf_to_gguf.py` to convert the text part (needs the Python requirements):

```
convert_hf_to_gguf.py --outtype bf16 --outfile ./gemma-3.gguf --model-name gemma-3 ./
```

You will need `llama.cpp/examples/llava/gemma3_convert_encoder_to_gguf.py` to convert the vision part (needs the Python requirements):

```
gemma3_convert_encoder_to_gguf.py --outtype f32 --outfile gemma-3-mmproj.gguf ./
```

And to quantize, you will need `llama.cpp/bin/llama-quantize` (C++, needs compilation):

```
llama-quantize gemma-3.gguf gemma-3-q.gguf Q6_K
```

You need both GGUF files in your Modelfile:

```
FROM gemma-3-q.gguf
FROM gemma-3-mmproj.gguf
TEMPLATE """{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if or (eq .Role "user") (eq .Role "system") }}<start_of_turn>user
{{ .Content }}<end_of_turn>
{{ if $last }}<start_of_turn>model
{{ end }}
{{- else if eq .Role "assistant" }}<start_of_turn>model
{{ .Content }}{{ if not $last }}<end_of_turn>
{{ end }}
{{- end }}
{{- end }}"""
PARAMETER stop <end_of_turn>
```

Then load it into Ollama just like any other model:

```
ollama create -f Modelfile-gemma3-q gemma-3:12b-it-q6_K
```
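
A quick way to sanity-check that the vision path survived (the image path is a placeholder; for multimodal models the CLI picks up image file paths included in the prompt):

```
# Ask the freshly created model about a local image
ollama run gemma-3:12b-it-q6_K "What is in this image? ./test.jpg"
```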


@lemassykoi commented on GitHub (Mar 17, 2025):

I discovered yesterday that Gemma 3's Ollama webpage indicates that it is vision compatible, despite the missing 'vision' tag:

![Image](https://github.com/user-attachments/assets/5e389234-5ec0-4997-8465-fda692702a38)

I tried it, and it's working from OpenWebUI:

![Image](https://github.com/user-attachments/assets/2a46b405-e53f-42f3-9a79-a24d8aaefd17)


@YarvixPA commented on GitHub (Mar 17, 2025):

Yes, but there is no Q6_K_L or Q6_K there.


@rjmalagon commented on GitHub (Mar 22, 2025):

> Yes, but there is no Q6_K_L or Q6_K there.

Sorry, I changed the model creation process to properly integrate the vision part.
You can pull it into Ollama from rjmalagon/gemma-3:12b-it-q6_K.

About correct model creation: for Gemma 3 it is easier to just point the Modelfile at the path of the Gemma 3 tensor files (from Hugging Face); Ollama will properly convert them into the needed multimodal model.

`ollama create -f Modelfile-gemma3 -q q6_K gemma-3:12b-it-q6_K`

where the Modelfile contains:

```
FROM path_to_folder/gemma-3-12b-it
TEMPLATE """{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if or (eq .Role "user") (eq .Role "system") }}<start_of_turn>user
{{ .Content }}<end_of_turn>
{{ if $last }}<start_of_turn>model
{{ end }}
{{- else if eq .Role "assistant" }}<start_of_turn>model
{{ .Content }}{{ if not $last }}<end_of_turn>
{{ end }}
{{- end }}
{{- end }}"""
PARAMETER stop <end_of_turn>
```
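
To use the uploaded variant directly instead of rebuilding it, a pull is enough:

```
# Pull the prebuilt Q6_K variant from the namespace mentioned above
ollama pull rjmalagon/gemma-3:12b-it-q6_K
```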

@YarvixPA commented on GitHub (May 22, 2025):

@rjmalagon hi, do you think you could message me on Discord and explain/help me quantize a model? I would like to quantize Qwen2.5-VL 7B Instruct to Q6_K, but with function tool calling included as well.

Also, though this would come later, I would like Aya Vision 8B, but I would try that afterwards; first I want to learn how to do Qwen2.5-VL 7B.


@rjmalagon commented on GitHub (May 24, 2025):

Hi @YarvixPA, Discord doesn't appeal to me (a matter of taste). The tricky part will be the function tool calling, because that depends mostly on the template, which unfortunately in Ollama is more involved than my experience can help you with (though I suspect you can "transplant" it from the Qwen 2.5 template).

You can make your life easier by starting from hf.co/unsloth/Qwen2.5-VL-7B-Instruct-GGUF:Q6_K:

`ollama pull hf.co/unsloth/Qwen2.5-VL-7B-Instruct-GGUF:Q6_K`

Export the Modelfile:

`ollama show --modelfile hf.co/unsloth/Qwen2.5-VL-7B-Instruct-GGUF:Q6_K > modelfile`

Modify that Modelfile with the tool-calling part (I still owe you that, but you can take hints from https://ollama.com/library/qwen2.5:7b/blobs/eb4402837c78 ).

And re-import it into Ollama:

`ollama create -f modelfile mi_custom/Qwen2.5-VL-7B-Instruct-GGUF:Q6_K`

With that you will have your variant.

But if you want to quantize it by hand, I can also give you precise instructions.
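
Once the template carries a tool-calling block, it can be exercised through the chat API's `tools` parameter. A minimal sketch (the model tag matches the re-import above; the weather function is purely illustrative):

```
# Probe tool calling on the re-imported model (the tool itself is illustrative)
curl http://localhost:11434/api/chat -d '{
  "model": "mi_custom/Qwen2.5-VL-7B-Instruct-GGUF:Q6_K",
  "messages": [{ "role": "user", "content": "What is the weather in Panama City?" }],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the current weather for a city",
      "parameters": {
        "type": "object",
        "properties": { "city": { "type": "string" } },
        "required": ["city"]
      }
    }
  }]
}'
```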


@amsaravi commented on GitHub (May 25, 2025):

```
/content# ollama create -f My_Model -q q6_K "gemma-3:27b-it-q6_K"
converting model
......
Error: unsupported quantization type Q6_K - supported types are F32, F16, Q4_K_S, Q4_K_M, Q8_0
```


@amsaravi commented on GitHub (May 25, 2025):

Can you produce a Colab notebook to do the process?


@rjmalagon commented on GitHub (May 25, 2025):

You will need the convert script from the llama.cpp project for that.
And it is a little harder, because you need the GGUF files (text + vision) and have to quantize the text part.

> /content# ollama create -f My_Model -q q6_K "gemma-3:27b-it-q6_K"
> converting model
> ......
> Error: unsupported quantization type Q6_K - supported types are F32, F16, Q4_K_S, Q4_K_M, Q8_0


@rjmalagon commented on GitHub (May 26, 2025):

@amsaravi @YarvixPA There is also something to keep in mind.
The Ollama development team is following a focused approach and is gradually deprecating support for quantization types outside of F32, F16, Q4_K_S, Q4_K_M, and Q8_0. (There is more to it than that; please read these pull requests:)

https://github.com/ollama/ollama/pull/10842
https://github.com/ollama/ollama/pull/10647
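
Until then, the practical workaround with a stock Ollama build is to stay inside that supported list, e.g.:

```
# Quantize at import time with one of the still-supported types
ollama create -f Modelfile-gemma3 -q q4_K_M gemma-3:27b-it-q4_K_M
```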


@amsaravi commented on GitHub (May 26, 2025):

Anyway, based on your previous posts, I can't find `llama.cpp/examples/llava/gemma3_convert_encoder_to_gguf.py` in their repo. I need a Q6 quant either way. Help, please.


@amsaravi commented on GitHub (May 26, 2025):

I found the docs at: https://github.com/ggml-org/llama.cpp/blob/master/docs/multimodal/gemma3.md
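
Those docs cover llama.cpp's own multimodal runner, which can also exercise the text + mmproj GGUF pair directly. A rough sketch (the binary name has changed across versions, so check the doc for the one matching your build):

```
# Run the text GGUF plus vision projector in llama.cpp directly
# (newer builds ship llama-mtmd-cli; older ones used llama-gemma3-cli)
llama-mtmd-cli -m gemma-3-q.gguf --mmproj gemma-3-mmproj.gguf \
  --image ./test.jpg -p "Describe this image."
```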


@amsaravi commented on GitHub (May 28, 2025):

Finally, I created this Colab notebook to do the job, using an older Ollama version, i.e. 0.6.2: https://colab.research.google.com/drive/1d3OIgJIVNeHCfs9ELBqAWAP0yO-YoExf?usp=sharing


Reference: github-starred/ollama#52870