[GH-ISSUE #11616] Prompt led to hang: I dont see the picture #7671

Closed
opened 2026-04-12 19:45:55 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @SkybuckFlying on GitHub (Aug 1, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11616

What is the issue?

Prompt led to hang: I dont see the picture

(Before that, I asked it to reconstruct a picture from 4 pictures, basically a UFO light source; not sure if that has anything to do with it. My 16-core processor went to 67%, and after a few seconds I stopped it, it didn't feel right.)

I believe Ollama cannot generate pictures yet, is that true? (Would be cool if it could.)

If so, maybe this caused the model gemma3:12b to go into a loop, trying to solve an unsolvable problem?

Relevant log output


OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

Display the version in the app, please.

GiteaMirror added the bug label 2026-04-12 19:45:55 -05:00
Author
Owner

@rick-github commented on GitHub (Aug 1, 2025):

> I believe Ollama cannot generate pictures yet, is that true?

True.

> If so, maybe this caused the model gemma3:12b to go into a loop, trying to solve an unsolvable problem?

Unlikely. It could be that the image processor is loaded in system RAM, and processing the images caused the increase in CPU usage.
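One way to check whether a model's weights have spilled into system RAM is `ollama ps`, which in recent versions lists loaded models with a CPU/GPU split in its PROCESSOR column (e.g. `100% GPU` vs. `41%/59% CPU/GPU`). A minimal sketch, guarded so it degrades to a message when `ollama` is not on the PATH or the server is not running:

```shell
# Show loaded models and where they run; a CPU percentage in the
# PROCESSOR column means part of the model is in system RAM.
if command -v ollama >/dev/null 2>&1; then
    ollama ps || echo "ollama server not reachable"
else
    echo "ollama not found on PATH"
fi
```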

<!-- gh-comment-id:3141732946 -->
Author
Owner

@Anubhav4813 commented on GitHub (Aug 1, 2025):

Hi @SkybuckFlying!

To help debug this further, could you provide a few additional details?

For the Ollama version: you can get your Ollama version by running:

```bash
ollama --version
```

Regarding the hang/high CPU usage, a few things that might help:

- What model were you using exactly? You mentioned gemma3:12b; could you confirm the exact model name you pulled?
- What was your exact prompt? The specific wording might help reproduce the issue.

Log output: even though the log section is empty, you might find relevant logs at:

- Windows: `%LOCALAPPDATA%\Ollama\logs`
- Or run Ollama with verbose logging: `ollama serve --verbose`
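As an aside, debug logging in Ollama is typically controlled by the `OLLAMA_DEBUG` environment variable rather than a CLI flag; a minimal sketch (the echoed command line is illustrative, not executed here, since `ollama serve` blocks):

```shell
# Debug-level logging is enabled via the OLLAMA_DEBUG environment
# variable; start the server with it set to get verbose logs.
export OLLAMA_DEBUG=1
echo "run: OLLAMA_DEBUG=$OLLAMA_DEBUG ollama serve"
```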
About image generation: As @rick-github confirmed, Ollama currently doesn't generate images - it only processes/analyzes them. However, the high CPU usage during image analysis is normal, especially with multiple images, as the vision processing can be computationally intensive.

Potential workarounds:

- Try with a single image first to see if the issue persists
- Monitor memory usage alongside CPU; vision models can be memory-intensive
- Consider using a smaller vision-capable model if available

If you can reproduce this consistently, the logs and exact steps would be really valuable for the maintainers to investigate!

This is the response from the GitHub Copilot Pro version... see if it can help.

<!-- gh-comment-id:3142033090 -->
Reference: github-starred/ollama#7671