[GH-ISSUE #13459] [BUG] GUI app shows "model does not support images" error for qwen3-vl:8b when adding image attachments, but works after CLI interaction #55393

Open
opened 2026-04-29 09:05:55 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @xuyinuox-ui on GitHub (Dec 13, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13459

Originally assigned to: @hoyyeva on GitHub.

What is the issue?

Environment:

Ollama version: 0.13.3 (latest)
OS: Windows
Hardware configuration:
CPU: AMD R9-8945HX
RAM: 16GB DDR5-5600
GPU: NVIDIA RTX5060 8GB
Issue Description:
When using the Ollama GUI application to deploy and interact with the qwen3-vl:8b model, I encounter an error when trying to add image attachments. The GUI shows a popup error stating that the model does not support images. However, when I open a separate command-line window and run the same model with image input, it works perfectly fine and can describe image content. After this CLI interaction, if I return to the GUI application, the image attachment functionality suddenly works normally for the same model.

Steps to Reproduce:

Open Ollama GUI application
Select or pull the qwen3-vl:8b model
Click the attachment button to add an image
Observe the popup error: "This model does not support images"
Open a separate terminal/command prompt window
Run the same qwen3-vl:8b model via CLI and successfully add/process an image
Return to the Ollama GUI application
Try adding an image attachment again - it now works normally
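The CLI run in step 6 ultimately goes through Ollama's HTTP API, where images travel base64-encoded inside the message. A minimal sketch of the request body such a run would produce, assuming the documented `/api/chat` shape (the model name and prompt are just this report's example; the image bytes are placeholders):

```python
import base64


def build_chat_payload(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Build an Ollama /api/chat request body with one image attachment.

    Images are passed as base64 strings in the message's "images" list.
    """
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": prompt,
                "images": [base64.b64encode(image_bytes).decode("ascii")],
            }
        ],
        "stream": False,
    }


payload = build_chat_payload("qwen3-vl:8b", "Describe this image.", b"fake-image-bytes")
```

POSTing this payload to `http://localhost:11434/api/chat` (the default local endpoint) is what succeeds from the CLI even when the GUI refuses the attachment.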
Expected Behavior:
The GUI application should recognize that qwen3-vl:8b is a vision-language model and allow image attachments from the first attempt, without requiring CLI interaction to "activate" this functionality.

Actual Behavior:
The GUI incorrectly reports that the model doesn't support images initially, but this functionality is restored after CLI interaction with the same model.

Additional Information:

This appears to be a GUI-specific initialization issue where the model capabilities are not properly detected or cached when first loaded in the GUI context
The CLI version correctly identifies and utilizes the multi-modal capabilities immediately
After CLI interaction "wakes up" the model's image processing capabilities, the GUI can then access them normally
This suggests a synchronization or state management issue between the GUI frontend and the underlying model service
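One way a client could make the attachment decision described above, assuming the `/api/show` response exposes a `"capabilities"` list that includes `"vision"` for multimodal models (the field name is an assumption; a not-yet-downloaded model could plausibly return an empty or missing list, which would match the observed behavior):

```python
def supports_images(show_response: dict) -> bool:
    """Return True if the model's reported capabilities include vision.

    `show_response` is the JSON body from Ollama's /api/show endpoint.
    For a model that is not yet downloaded locally, the capability list
    may be empty or absent, which would explain the GUI error here.
    """
    return "vision" in show_response.get("capabilities", [])


# A downloaded vision model reports its capabilities...
print(supports_images({"capabilities": ["completion", "vision"]}))  # True
# ...while a missing or undownloaded model yields no capability info.
print(supports_images({}))  # False
```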

Relevant log output


OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.13.3

GiteaMirror added the bug label 2026-04-29 09:05:55 -05:00
Author
Owner

@rick-github commented on GitHub (Dec 13, 2025):

#13211

<!-- gh-comment-id:3649744609 -->
Author
Owner

@hoyyeva commented on GitHub (Dec 15, 2025):

Hi @xuyinuox-ui, thank you for reporting the issue. To better understand the situation, could you let us know whether the model was already downloaded or not yet downloaded? Also, were you using a new chat or an existing chat?

<!-- gh-comment-id:3657167082 -->
Author
Owner

@hoyyeva commented on GitHub (Dec 15, 2025):

The error message “model does not support images” is based on the list of capabilities we receive from the Ollama endpoint (ollama.show). This is a current limitation of the API. I suspect this is happening because the model was not downloaded yet.

We are currently working on an improved Ollama API for model search, which should address this issue as well. For now, the workaround is to download the model first by starting a chat without attaching an image. Once the download is complete, you should be able to attach image files.

Please let me know if this matches what you experienced. I am happy to investigate further if not.
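The workaround above amounts to "ensure the model is downloaded, then re-check its capabilities." A sketch of that logic, where `pull` and `show` are hypothetical injected callables standing in for the corresponding Ollama client calls (not a specific library's API):

```python
def ensure_vision_ready(model: str, pull, show) -> bool:
    """Apply the workaround: if a model reports no vision capability,
    pull (download) it first, then re-check its capabilities.

    `pull(model)` downloads the model; `show(model)` returns an
    /api/show-style dict. Both are injected so the logic is testable.
    """
    info = show(model)
    if "vision" not in info.get("capabilities", []):
        pull(model)          # the model may simply not be downloaded yet
        info = show(model)   # re-query after the download completes
    return "vision" in info.get("capabilities", [])


# Simulated client: capabilities only appear after the model is pulled.
downloaded = set()

def fake_pull(m):
    downloaded.add(m)

def fake_show(m):
    return {"capabilities": ["completion", "vision"]} if m in downloaded else {}

print(ensure_vision_ready("qwen3-vl:8b", fake_pull, fake_show))  # True
```

This mirrors what the CLI interaction in the report effectively did: running the model from the terminal triggered the download, after which the GUI's capability check started passing.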

<!-- gh-comment-id:3657484647 -->
Reference: github-starred/ollama#55393