[GH-ISSUE #10565] Vision models from Ollama site don't work. #69013

Closed
opened 2026-05-04 16:47:21 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @logan683 on GitHub (May 5, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10565

What is the issue?

I downloaded the following from Ollama:
nidum-gemma-3-4b-it-uncensored:q8_0
nidum-gemma-3-27b-instruct-uncensored:Q6_K

I access models using Ollama through Open WebUI, so I'm not sure exactly where the problem is.

The only vision model that seems to work is llama3.2-vision, also from the Ollama site.

Has anyone had problems with nidum-gemma-3-4b-it-uncensored:q8_0 or nidum-gemma-3-27b-instruct-uncensored:Q6_K? Vision is enabled for both in Open WebUI. Is there something else I need to do? I also access Ollama through ComfyUI custom nodes and would like to be able to pass images through a vision model but currently can't.

Thank you.

Relevant log output


OS

Windows

GPU

No response

CPU

Intel

Ollama version

0.6.7

GiteaMirror added the bug label 2026-05-04 16:47:21 -05:00

@rick-github commented on GitHub (May 5, 2025):

These are user-created models, so you'll have to take it up with [nidumai](https://ollama.com/nidumai).

Working vision models can be found [here](https://ollama.com/search?c=vision).


@logan683 commented on GitHub (May 5, 2025):

Thank you for the reply. I will check these. I am looking for uncensored, local vision models. The models I am having problems with are from the Ollama site and labeled as vision models specifically. Is this a label Ollama assigns or something a user assigns upon upload?

https://ollama.com/nidumai/nidum-gemma-3-27b-instruct-uncensored

It's confusing when the model is listed as vision capable on the site but isn't actually able to perform this function. At this point, clarity for users from Ollama would help greatly as not everyone may know to contact the model maker.

Thank you very much.


@rick-github commented on GitHub (May 5, 2025):

I don't know the specifics of how the ollama website assigns labels. My guess is that the vision tag is assigned because that's a feature of the gemma3 family of models. However, if you check the [metadata](https://ollama.com/nidumai/nidum-gemma-3-27b-instruct-uncensored:q3_k_m/blobs/a3644015a036) for the model, you'll see that it lacks `gemma3.vision` KV entries and there are no `v.blk` tensors. Compare with the official gemma3 release in the [ollama library](https://ollama.com/library/gemma3:4b-it-q4_K_M/blobs/aeda25e63ebd).
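The check above can be scripted. Below is a minimal, hedged sketch that inspects a GGUF key/value dump and tensor-name list (of the kind shown on the blob pages linked above) for the `*.vision.*` KV entries and `v.blk.*` vision-tower tensors a multimodal Gemma 3 build would carry. The function name and inputs are illustrative, not part of any Ollama API.

```python
def has_vision_metadata(kv_entries: dict, tensor_names: list) -> bool:
    """Heuristic check: a multimodal GGUF carries <arch>.vision.* KV entries
    (e.g. gemma3.vision.block_count) and v.* vision-tower tensors (v.blk.*)."""
    vision_kv = any(".vision." in key for key in kv_entries)
    vision_tensors = any(name.startswith("v.") for name in tensor_names)
    return vision_kv or vision_tensors

# Text-only upload: no vision KV entries, no vision tower tensors
print(has_vision_metadata(
    {"general.architecture": "gemma3", "gemma3.block_count": 62},
    ["blk.0.attn_q.weight"]))          # False

# Official gemma3 release: vision metadata present
print(has_vision_metadata(
    {"general.architecture": "gemma3", "gemma3.vision.block_count": 27},
    ["v.blk.0.attn_q.weight"]))        # True
```

If either check fails, images passed to the model will be silently ignored regardless of how the front end (Open WebUI, ComfyUI) is configured.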


@logan683 commented on GitHub (May 5, 2025):

Thank you for the reply. I understand now. I'm trying to learn about AI and there's a lot to it. Now I know what to look for in a model's metadata next time.

Thank you very much for your time.



Reference: github-starred/ollama#69013