[GH-ISSUE #12851] Qwen3-VL does not support images (0.12.7)? #70575

Open
opened 2026-05-04 22:03:26 -05:00 by GiteaMirror · 15 comments

Originally created by @ansorre on GitHub (Oct 30, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12851

Originally assigned to: @hoyyeva on GitHub.

### What is the issue?

As per app.log, I'm on "Ollama version=0.12.7 OS=Windows/10.0.26100".
I'm using "qwen3-vl:30b", which according to https://ollama.com/library/qwen3-vl accepts "Text, Image" inputs.
Still, I get the error "The model does not support images" when I drop an image into the Ollama chat.
Is this the expected behavior? And if so, why?

### Relevant log output

_No response_

### OS

_No response_

### GPU

_No response_

### CPU

_No response_

### Ollama version

_No response_

GiteaMirror added the app and bug labels 2026-05-04 22:03:27 -05:00

@rick-github commented on GitHub (Oct 30, 2025):

Does it work from the command line?

```console
$ ollama run qwen3-vl:30b describe this image: ./picture.png
Added image './picture.png'
Thinking...
So, let's see. The image shows a small white puppy.
...
...done thinking.

The image features a **small, fluffy white puppy** as the central subject, sitting
...
```

@ansorre commented on GitHub (Oct 30, 2025):

@rick-github Yes, it works perfectly via the command line! Sorry for not trying this before.


@rod-fu commented on GitHub (Oct 31, 2025):

I have a question: I want to use Python to call the Ollama API. Can I do that on 0.12.7?


@ansorre commented on GitHub (Oct 31, 2025):

> I have a question: I want to use Python to call the Ollama API. Can I do that on 0.12.7?

Yes, you definitely can. Ask ChatGPT/Claude/Gemini for help.


@rick-github commented on GitHub (Oct 31, 2025):

https://github.com/ollama/ollama-python

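For readers asking about Python: a minimal sketch using the ollama-python client linked above, assuming `pip install ollama`, a running Ollama server, and a placeholder image path:

```python
# pip install ollama
import ollama

# Send a local image to a vision-capable model. The "images" field of a
# chat message accepts file paths (or raw bytes) for multimodal models.
response = ollama.chat(
    model="qwen3-vl:30b",  # any vision-capable model tag
    messages=[
        {
            "role": "user",
            "content": "Describe this image.",
            "images": ["./picture.png"],  # placeholder path
        }
    ],
)

print(response["message"]["content"])
```

This mirrors the CLI test above, so it should behave the same regardless of whether the app's image check is the culprit.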

@lisngt commented on GitHub (Nov 7, 2025):

![error screenshot](https://github.com/user-attachments/assets/21aeddb3-876f-42d0-a608-41374d24f809)

I have the same error.


@mistermantas commented on GitHub (Nov 9, 2025):

It's a bug. I have the same issue too; the CLI is fine.


@GitOguz commented on GitHub (Nov 15, 2025):

Same issue; I can hardly get it to do anything with an image.


@hoyyeva commented on GitHub (Nov 19, 2025):

Hello everyone! We are currently working on a fix. In the meantime, as a temporary workaround, try running `ollama serve` in a separate terminal and then restarting the app; that should work. We are working on a more robust solution. Sorry for the inconvenience, and we will update here once the fix is ready!


@theodoruszq commented on GitHub (Dec 9, 2025):

> Hello everyone! We are currently working on a fix. In the meantime, as a temporary workaround, try running `ollama serve` in a separate terminal and then restarting the app; that should work. We are working on a more robust solution. Sorry for the inconvenience, and we will update here once the fix is ready!

Hi, could you please check this model?

![model screenshot](https://github.com/user-attachments/assets/fb102159-ecfa-49a5-b404-1008cea5e8d9)

I tried qwen3-vl:4b etc. with the latest Ollama (0.13.2); a pasted image works well there, but not with qwen3-vl:30b or qwen3-vl:8b, which failed.

By the way, could you give some suggestions for disabling thinking for the qwen-vl models? I tried adding the `</think>` special token to my prompt, but it doesn't work.


@teddybear082 commented on GitHub (Dec 13, 2025):

> By the way, could you give some suggestions for disabling thinking for the qwen-vl models? I tried adding the `</think>` special token to my prompt, but it doesn't work.

Use the instruct versions of the models. On the Ollama models web page, click the button for more models and you will see them. Instruct versions won't do the thinking step.

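For completeness, recent Ollama servers and the ollama-python client also expose a `think` option on chat; a hedged sketch, assuming your server and client versions support it (the model tag and image path are placeholders):

```python
import ollama

# Suppress the thinking phase for a reasoning-capable vision model.
# Alternatively, pull an instruct variant as suggested above; check
# https://ollama.com/library/qwen3-vl for the exact tags.
response = ollama.chat(
    model="qwen3-vl:8b",  # placeholder tag
    messages=[
        {
            "role": "user",
            "content": "Describe this image.",
            "images": ["./picture.png"],  # placeholder path
        }
    ],
    think=False,  # only honored on versions that support the option
)

print(response["message"]["content"])
```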

@hyacin75 commented on GitHub (Dec 15, 2025):

> Hello everyone! We are currently working on a fix. In the meantime, as a temporary workaround, try running `ollama serve` in a separate terminal and then restarting the app; that should work. We are working on a more robust solution. Sorry for the inconvenience, and we will update here once the fix is ready!

Not working for me; perhaps I'm doing it wrong? Shut down the GUI app, run `ollama serve` in a terminal, re-launch the GUI app? I'm still seeing the same "not supported" messages.


@hoyyeva commented on GitHub (Dec 15, 2025):

Hi @theodoruszq & @hyacin75, do you have the model downloaded already? We noticed a limitation in one of the Ollama endpoints: it cannot report correct capabilities when the model is not yet downloaded. We are currently working on an improved Ollama API for model search, which should address this issue as well. For now, the workaround is to download the model first by starting a chat without attaching an image. Once the download is complete, you should be able to attach image files.

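A hedged sketch of that download-first workaround via ollama-python, assuming a recent server that reports a capabilities list from the show endpoint:

```python
import ollama

model = "qwen3-vl:30b"

# Pull the model up front so capability checks run against a model
# that is actually on disk.
ollama.pull(model)

# Recent servers include a capabilities list for downloaded models,
# e.g. ["completion", "vision", "thinking"]; older ones may omit it.
info = ollama.show(model)
print(getattr(info, "capabilities", None))
```

If "vision" appears in the list, attaching images in the app should then work.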

@hyacin75 commented on GitHub (Dec 15, 2025):

@hoyyeva It's possible I did not. I gave up trying and just fired it up on the CLI, and IIRC it downloaded the model first thing. I was able to load the images I wanted analyzed, though, and got what I needed thanks to all the great tips and workarounds here! :-) It was just a one-off for me: get a prompt I could feed back into the non-vision model to have it spit out an SD prompt, lol. AI helping me guide AI to control another AI... head spinning. 😵‍💫 🤣


@theodoruszq commented on GitHub (Dec 24, 2025):

> Hi @theodoruszq & @hyacin75, do you have the model downloaded already? We noticed a limitation in one of the Ollama endpoints: it cannot report correct capabilities when the model is not yet downloaded. We are currently working on an improved Ollama API for model search, which should address this issue as well. For now, the workaround is to download the model first by starting a chat without attaching an image. Once the download is complete, you should be able to attach image files.

Actually, I have the model downloaded already, so it may be a bug. I was hoping for some easy Python code to debug with... In the end I switched to GPT-OSS 20B for my offline use case.

Thanks anyway~

Reference: github-starred/ollama#70575