[GH-ISSUE #10082] Gemma3 cannot correctly read the image #6607

Open
opened 2026-04-12 18:16:18 -05:00 by GiteaMirror · 2 comments

Originally created by @mikeshuangyan on GitHub (Apr 2, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10082

What is the issue?

My GPU is a 4080 Super, and I am running the gemma:12b model.
For the Gemma3 model, I attempted to read the text within an image, but when the prompt is long it fails: instead of the image text, the model just echoes fragments of the prompt.

```
>>> Extract all visible text from this image in Japanese **without any changes**.
... - **Do not summarize, paraphrase, or infer missing text.**
... - Retain all spacing, punctuation, and formatting exactly as in the image.
... - If text is unclear or partially visible, extract as much as possible without guessing.
... - **Include all text, even if it seems irrelevant or repeated.**
... "E:\test.png"
Added image 'E:\test.png'
- **do not**

- **Do not**

- **do not**
```

When the prompt is shorter, it can read the text successfully most of the time, but occasionally the model repeats the prompt endlessly in its response.

```
>>> Extract all visible text from this image e:\test.png
Added image 'e:\test.png'
寒気などの影響で、埼玉県内は日中の最高気温が平年を10度前後下回って各地で真冬並みの寒気となりました。

一方、湿った空気が流れ込むため、2日明け方から昼前にかけて大雨となるおそれがあり、気象台は土砂災害や低い土地の浸水などに注意するよう呼びかけています。

total duration:       2.2683157s
load duration:        45.7458ms
prompt eval count:    276 token(s)
prompt eval duration: 811.6163ms
prompt eval rate:     340.06 tokens/s
eval count:           82 token(s)
eval duration:        1.4098345s
eval rate:            58.16 tokens/s
```

(Translation of the extracted text: "Due to the influence of cold air, daytime highs across Saitama Prefecture were around 10 degrees below seasonal norms, bringing midwinter-like cold to many areas. Meanwhile, with moist air flowing in, there is a risk of heavy rain from before dawn until around noon on the 2nd, and the meteorological office is urging caution against landslides and flooding of low-lying land.")
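
For reference, the same repro can be driven through Ollama's REST API, which also makes it easy to pin `num_ctx` per request and rule out context truncation. A minimal sketch, assuming a default local install and a `gemma3:12b` tag (the tag and image path here are illustrative; the original report uses `gemma:12b` on Windows):

```
# Base64-encode the image (GNU coreutils; on macOS use `base64 -i test.png`,
# on Windows use PowerShell's [Convert]::ToBase64String).
IMG_B64=$(base64 -w0 test.png)

# Send the same OCR prompt with an explicit 8k context window.
curl http://localhost:11434/api/generate -d "{
  \"model\": \"gemma3:12b\",
  \"prompt\": \"Extract all visible text from this image.\",
  \"images\": [\"$IMG_B64\"],
  \"stream\": false,
  \"options\": { \"num_ctx\": 8192 }
}"
```

If the long prompt succeeds with the larger window but fails at the default context, that points at prompt truncation rather than a model bug.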

Relevant log output

(none provided)
OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.6.3

GiteaMirror added the bug label 2026-04-12 18:16:18 -05:00

@tneQpx commented on GitHub (Apr 2, 2025):

This seems to be the same issue I have, which I mentioned in a comment here: https://github.com/ollama/ollama/issues/9845

If the prompt is just "Extract all text", it works really well, with some minor errors. Once the prompt starts getting long, it hallucinates most of the time, though for some reason it still sometimes gets it right.


@mmb78 commented on GitHub (Apr 3, 2025):

Check how long (how many tokens) your whole prompt actually is, and check whether increasing the context window, for example to 8k, resolves the issue. Remember that the default context length in Ollama is only 2048; see https://github.com/ollama/ollama/blob/main/docs/modelfile.md. One way to raise it is sketched below.
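
A minimal sketch of that suggestion, assuming the model tag is `gemma3:12b` (adjust to whatever `ollama list` shows on your machine):

```
# Option 1: set the parameter interactively inside an `ollama run` session:
#   /set parameter num_ctx 8192

# Option 2: bake an 8k context into a derived model via a Modelfile.
cat > Modelfile <<'EOF'
FROM gemma3:12b
PARAMETER num_ctx 8192
EOF
ollama create gemma3-8k -f Modelfile

# --verbose prints token counts, so you can see how long the prompt really is.
ollama run gemma3-8k --verbose
```

The `--verbose` stats in the report ("prompt eval count: 276 token(s)") are the number to watch: if the prompt plus image tokens exceed the context window, part of the prompt can be truncated, which would be consistent with the degraded output on long prompts.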

<!-- gh-comment-id:2774778400 --> @mmb78 commented on GitHub (Apr 3, 2025): Check how long (how many tokens) your whole prompt actually is. Check if changing the context window, to for example 8k, would not resolve your issues. Remember default context length is only 2048 in Ollama. https://github.com/ollama/ollama/blob/main/docs/modelfile.md

Reference: github-starred/ollama#6607