[GH-ISSUE #11230] Flash attention and non-Q4 models not working with Qwen 2.5 VL #33157

Open
opened 2026-04-22 15:35:02 -05:00 by GiteaMirror · 9 comments

Originally created by @filips123 on GitHub (Jun 28, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11230

What is the issue?

I'm using Ollama on an HPC cluster with Slurm to run the Qwen 2.5 VL model to perform OCR on some scanned documents that were written with a typewriter.

I am using Ollama 0.9.2, because the latest 0.9.3 does not detect GPU for some reason (#11220).

Running both 32b and 72b versions with the default Q4 quantization works fine. However, it seems that accuracy isn't so great compared to hosted versions of Qwen 2.5 VL on Hugging Face (but I haven't accurately measured this). Because of this, I wanted to try running the Q8 and FP16 versions of the model to see if quantization was affecting the results.

These are logs and nvidia-smi outputs for this part:

[stderr-58671197.log](https://github.com/user-attachments/files/20960271/stderr-58671197.log)
[stdout-58671197.log](https://github.com/user-attachments/files/20960269/stdout-58671197.log)
[nvidia-smi.log](https://github.com/user-attachments/files/20960268/nvidia-smi.log)
[nvidia-smi-q.log](https://github.com/user-attachments/files/20960270/nvidia-smi-q.log)

First, I tried loading the same 32b Q4 model, but with flash attention enabled:

export OLLAMA_FLASH_ATTENTION=1
export OLLAMA_KV_CACHE_TYPE=f16

This drastically increased the inference time. The server was stuck on "loading cache slot" for around 10 minutes, then the client retried the request. These are the relevant lines from the log:

time=2025-06-28T12:45:45.044+02:00 level=INFO source=server.go:630 msg="llama runner started in 4.11 seconds"
time=2025-06-28T12:45:45.143+02:00 level=DEBUG source=sched.go:495 msg="finished setting up" runner.name=registry.ollama.ai/library/qwen2.5vl:32b runner.inference=cuda runner.devices=1 runner.size="25.3 GiB" runner.vram="25.3 GiB" runner.parallel=2 runner.pid=1686729 runner.model=/tmp/58671284/models/blobs/sha256-043a363c6ca35e3b1a29b8a5b0bbd28474820239bbc5ad943c9be18f0dc77b66 runner.num_ctx=8192
time=2025-06-28T12:45:45.294+02:00 level=DEBUG source=server.go:729 msg="completion request" images=1 prompt=1043 format=""
time=2025-06-28T12:45:45.398+02:00 level=DEBUG source=vocabulary.go:52 msg="adding bos token to prompt" id=[0]
time=2025-06-28T12:45:45.596+02:00 level=DEBUG source=vocabulary.go:52 msg="adding bos token to prompt" id=[0]
time=2025-06-28T12:45:45.597+02:00 level=DEBUG source=cache.go:136 msg="loading cache slot" id=0 cache=0 prompt=1467 used=0 remaining=1467
time=2025-06-28T12:55:28.123+02:00 level=DEBUG source=sched.go:503 msg="context for request finished"

And these are the full logs:

[stderr-58671284.log](https://github.com/user-attachments/files/20960338/stderr-58671284.log)
[stdout-58671284.log](https://github.com/user-attachments/files/20960339/stdout-58671284.log)
[nvidia-smi.log](https://github.com/user-attachments/files/20960337/nvidia-smi.log)
[nvidia-smi-q.log](https://github.com/user-attachments/files/20960340/nvidia-smi-q.log)

In some cases, even after retrying the request, the model was still stuck on "loading cache slot", causing the program to be completely frozen.

I also tried to use the 32b Q8 model (qwen2.5vl:32b-q8_0) with flash attention disabled, but this causes repeated "context limit hit - shifting" errors:

time=2025-06-28T13:20:24.464+02:00 level=INFO source=server.go:630 msg="llama runner started in 6.46 seconds"
time=2025-06-28T13:20:24.500+02:00 level=DEBUG source=sched.go:495 msg="finished setting up" runner.name=registry.ollama.ai/library/qwen2.5vl:32b-q8_0 runner.inference=cuda runner.devices=1 runner.size="39.1 GiB" runner.vram="39.1 GiB" runner.parallel=2 runner.pid=1860561 runner.model=/tmp/58671511/models/blobs/sha256-d63254d428dbd460e561cc95dc5d77a0beb27990ce984b0a52620d353230f01c runner.num_ctx=8192
time=2025-06-28T13:20:24.559+02:00 level=DEBUG source=server.go:729 msg="completion request" images=1 prompt=1043 format=""
time=2025-06-28T13:20:24.612+02:00 level=DEBUG source=vocabulary.go:52 msg="adding bos token to prompt" id=[0]
time=2025-06-28T13:20:24.809+02:00 level=DEBUG source=vocabulary.go:52 msg="adding bos token to prompt" id=[0]
time=2025-06-28T13:20:24.810+02:00 level=DEBUG source=cache.go:136 msg="loading cache slot" id=0 cache=0 prompt=1467 used=0 remaining=1467
time=2025-06-28T13:21:40.142+02:00 level=DEBUG source=cache.go:272 msg="context limit hit - shifting" id=0 limit=4096 input=4096 keep=4 discard=2046
time=2025-06-28T13:22:37.397+02:00 level=DEBUG source=cache.go:272 msg="context limit hit - shifting" id=0 limit=4096 input=4096 keep=4 discard=2046
time=2025-06-28T13:23:35.472+02:00 level=DEBUG source=cache.go:272 msg="context limit hit - shifting" id=0 limit=4096 input=4096 keep=4 discard=2046
time=2025-06-28T13:24:32.413+02:00 level=DEBUG source=cache.go:272 msg="context limit hit - shifting" id=0 limit=4096 input=4096 keep=4 discard=2046

This is despite the fact that the even larger 72b model with Q4 quantization (qwen2.5vl:72b-q4_K_M) fits fine on the GPU.

These are the full logs:

[stderr-58671511.log](https://github.com/user-attachments/files/20960404/stderr-58671511.log)
[stdout-58671511.log](https://github.com/user-attachments/files/20960403/stdout-58671511.log)
[nvidia-smi.log](https://github.com/user-attachments/files/20960405/nvidia-smi.log)
[nvidia-smi-q.log](https://github.com/user-attachments/files/20960406/nvidia-smi-q.log)

Relevant log output


OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.9.2

GiteaMirror added the bug label 2026-04-22 15:35:02 -05:00

@rick-github commented on GitHub (Jun 29, 2025):

This might be another function of the environment. Using ollama/ollama:0.9.2 on docker with a 12G 4070, I get these results:

| model | flash attention | eval duration | eval rate (tokens/s) |
|--|--|--|--|
| qwen2.5vl:7b-q4_K_M | 0 | 5.23s | 82.40 |
| qwen2.5vl:7b-q4_K_M | 1/fp16 | 5.12s | 84.20 |
| qwen2.5vl:7b-q8_0 | 0 | 8.11s | 53.40 |
| qwen2.5vl:7b-q8_0 | 1/fp16 | 8.07s | 53.68 |
| qwen2.5vl:7b-fp16 | 0 | 41.00s | 12.07 |
| qwen2.5vl:7b-fp16 | 1/fp16 | 41.38s | 11.96 |
| qwen2.5vl:32b-q4_K_M | 0 | 103.60s | 4.81 |
| qwen2.5vl:32b-q4_K_M | 1/fp16 | 107.72s | 4.60 |
| qwen2.5vl:72b-q4_K_M | 0 | 287.78s | 1.46 |
| qwen2.5vl:72b-q4_K_M | 1/fp16 | 302.30s | 1.39 |

Other than the slowdown because of CPU offloading, there were no cache slot loading delays. The typewritten page used for testing was [this one](https://funwitholdstuff.livejournal.com/1081.html).

Try setting OLLAMA_DEBUG=2. This will generate a lot more logging which may show why the model is stalling.


@filips123 commented on GitHub (Jun 30, 2025):

I've now unset HIP_VISIBLE_DEVICES and ROCR_VISIBLE_DEVICES and updated to Ollama 0.9.3. It does detect the GPU now, but I still have the same problem with flash attention and the Q8 model.

I had to compress the logs because they were too large... The Ollama log is in ollama.log.

  • Logs of running 32b Q4 model without flash attention. This worked fine and finished in less than 5 minutes.

    [logs-093-32b-q4-no-fa.zip](https://github.com/user-attachments/files/20976380/logs-093-32b-q4-no-fa.zip)

  • Logs of running 32b Q4 model with flash attention. It was very slow, the first request took more than 10 minutes, was retried, then finally succeeded. Subsequent requests were faster, but still slower than without FA. I cancelled the job after around 15 min.

    [logs-093-32b-q4-fa.zip](https://github.com/user-attachments/files/20976447/logs-093-32b-q4-fa.zip)

  • Logs of running 32b Q8 model without flash attention. It was also very slow, not a single request was finished. I cancelled the job after around 15 min.

    [logs-093-32b-q8-no-fa.zip](https://github.com/user-attachments/files/20976450/logs-093-32b-q8-no-fa.zip)


@rick-github commented on GitHub (Jun 30, 2025):

The model is losing coherence. In 32b q4 with FA, the first request generates tokens that de-tokenize as:

1271, november 17. Falkenberg.\n\nFriderik s Falkenbergberga proda 
Nem.viteškemu redu v Ljubljani šest hub v Logu in ob Gradadaščici za 55 
mark ogl.\n\nOrig.perg. v Državnem arhivu Slovenije v Ljubljani\n\nReg.: MHVK 
XV (1860), str.97.\n\nPrim.: F.Richter, Gesch.d.Stadt Lai- bach v Klnunovem 
Archivu f.L.d.H.Krain II.-III, 193; F.Zwitter, Star.kranjska mesta,... str.21; 
J.Žontar, Banke in bankirnji... str.22, 32, op.21; M.Kos, Srednjeveška 
Ljubljana, str.41, op. 151, 153.\n\nIn nomine Iesu Christi Christi amen. Mora temporis
transeunte actus temporis universaliter transeunt memoria ab humana, si non
scriptur- rarum testimonio perhemmantur. Quare ego Fridericus de Valchenberch
confiteor presentes presencium per tenorem viuversis presentes 
propri- uidentibus et uisuris, quod sex proprios meos mansos sitos in Awa et et 
circa decur- sum minoris fluminis dicti Laybach iuxta ciuitatem Laybacensem 
dedi et uendidi fratribus et ordini domus Theutonicae pro quinquaginta marcis 
denariis Aquilwgen- sium pura fide cum omnibus iuribus et atti- nentibus 
mansorum dictorum videlicet pratis, pascuis, nemoribus, molis sev molarum locis 
inquirendis peribiter et molis quesitis. Quorum man sorum duo erant 
hereditariae prop- prii domine Chvnevndis vxoris mee que ob dilectionem meam 
libere ac spontanea ius proprietatis dedit fratribus et ordini uide supra si si 
habere scilicet mansis, quos meuslic ciuis vna mecum cum dedit

At this point it starts looping, generating the tokens 49097 and 33395 (which decode to " omnibus") until the client timed out at 10 minutes.

Similarly for 32b q8 without FA, the model generates:

1271, november 17. Falkenberg.\n\nFriderik s Falkenbergberga proda 
Nem.viteškemu redu v Ljubljani šest hub v Logu in ob Gradadaščici za 55 
mark ogl.\n\nOrig.perg. v Državnem arhivu Slovenije v Ljubljani\n\nReg.: MHVK 
XV (1860), str.97.\n\nPrim.: F.Richter, Gesch.d.Stadt Lai- bach v Klonunovem 
Archivu f.L.d.H.Krain II.-III, 193; F.Zwitter, Star.kranjska mesta,... str.21; 
J.Žontar, Banke in bankerirji... str.22, 32, op.21; M.Kos, Srednjeveška 
Ljubljana, str.41, op. 151, 153.\n\nIn nomine Iesu Christi amen. Mora temporis 
transeunte actus temporis universaliter transeunt memoria ab humana, si non 
scripturis rarum testimonio perhemmantur. Quare ego Fridericus de Valchenberch 
confiteor presentes presencium per tenorem viuversis

and then starts looping on the sequence "presentes proprios uidentibus et uisuris, quod sex".

It's not clear why the model is losing coherence. Basically, the model loses track of what it's doing and just starts generating a sequence of tokens that never includes an end-of-sequence (EOS) token. ollama has some built-in heuristics to try to catch this, but they take a long time to trigger; in these cases, the clients timed out first.

You can recover early from this by setting num_predict. This will cause ollama to terminate the inference and return the results when the number of generated tokens exceeds the threshold in num_predict.
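
For illustration, a minimal sketch of applying that cap through Ollama's native /api/chat endpoint (not taken from the issue; the model tag, prompt, image path, and the 2048-token limit are placeholder assumptions):

```python
# Sketch only: cap generation with num_predict on /api/chat.
# Model tag, prompt, image path, and the 2048 limit are placeholders.
import base64
import requests

with open("scan.png", "rb") as f:          # hypothetical scanned page
    image_b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "qwen2.5vl:32b",
        "messages": [{
            "role": "user",
            "content": "Extract the text on the image. Respond only with the extracted text.",
            "images": [image_b64],
        }],
        "options": {"temperature": 0, "num_predict": 2048},  # num_predict caps generated tokens
        "stream": False,
    },
    timeout=1200,
)
data = resp.json()
print(data["message"]["content"])
```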


@filips123 commented on GitHub (Jun 30, 2025):

> You can recover early from this by setting num_predict. This will cause ollama to terminate the inference and return the results when the number of generated tokens exceeds the threshold in num_predict.

That will just cut off the result and won't fix the looping, right?

I have the temperature set to 0, if this is relevant. Other settings are left as default, and I'm using chat.completions.create from the OpenAI Python library. I can share the full code and documents if that would be useful.
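
For context, a hedged sketch of what such a request could look like through Ollama's OpenAI-compatible /v1 endpoint, where max_tokens is the parameter that Ollama maps to num_predict (the model tag, prompt, image path, and token limit below are placeholders, not the author's actual script):

```python
# Sketch only (not the author's script): the same kind of request via the
# OpenAI-compatible endpoint; max_tokens is mapped to num_predict by Ollama.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # key is ignored by Ollama

with open("scan.png", "rb") as f:          # hypothetical scanned page
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="qwen2.5vl:32b",                 # placeholder tag
    temperature=0,
    max_tokens=2048,                       # becomes num_predict on the Ollama side
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Extract the text on the image. Respond only with the extracted text."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```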

Maybe this is a similar issue to #10767? I'll try some other visual models other than Qwen to see if they have the same issue.


@rick-github commented on GitHub (Jun 30, 2025):

> That will just cut off the result and won't fix the looping, right?

Correct. If the inference is terminated because of excess token generation, then it's likely the results aren't useful anyway. You can detect this via the done_reason field in the response; it will be set to length when generation is cut off by num_predict (normally it is stop).
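
A small sketch of that check, assuming the non-streaming /api/chat response shape (through the OpenAI-compatible endpoint the analogous field is finish_reason, which is "length" on truncation):

```python
# Sketch: detect a run that was cut off by num_predict rather than ending
# normally. Assumes `resp` is the parsed JSON of a non-streaming /api/chat call.
def hit_token_limit(resp: dict) -> bool:
    # done_reason is "length" when num_predict stopped generation, "stop" otherwise.
    return resp.get("done_reason") == "length"

# Stubbed example response for illustration:
example = {"message": {"content": "..."}, "done": True, "done_reason": "length"}
if hit_token_limit(example):
    print("generation hit num_predict; output is probably a repetition loop")
```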


@rick-github commented on GitHub (Jun 30, 2025):

Would it be possible for you to share the image that causes the looping?


@filips123 commented on GitHub (Jun 30, 2025):

This is the image: https://github.com/user-attachments/assets/d392c3d9-b8fb-4d14-8974-15d843f937bb

And the full code is available [here](https://github.com/Vida26365/SrednjeveskiArhivi/tree/sling/scripts).

I also tried to change the prompt to just "Extract the text on the image. Respond only with the extracted text.". Now it starts repeating at:

time=2025-06-30T13:39:47.953+02:00 level=TRACE source=bytepairencoding.go:246 msg=decoded string=is from=[285]
time=2025-06-30T13:39:48.145+02:00 level=TRACE source=bytepairencoding.go:246 msg=decoded string=, from=[11]
time=2025-06-30T13:39:48.337+02:00 level=TRACE source=bytepairencoding.go:246 msg=decoded string=" f" from=[282]
time=2025-06-30T13:39:48.609+02:00 level=TRACE source=bytepairencoding.go:246 msg=decoded string=idel from=[26802]
time=2025-06-30T13:39:48.805+02:00 level=TRACE source=bytepairencoding.go:246 msg=decoded string=is from=[285]
time=2025-06-30T13:39:49.063+02:00 level=TRACE source=bytepairencoding.go:246 msg=decoded string=" me" from=[752]
time=2025-06-30T13:39:49.250+02:00 level=TRACE source=bytepairencoding.go:246 msg=decoded string=us from=[355]
time=2025-06-30T13:39:49.435+02:00 level=TRACE source=bytepairencoding.go:246 msg=decoded string=" ci" from=[11825]
time=2025-06-30T13:39:49.625+02:00 level=TRACE source=bytepairencoding.go:246 msg=decoded string=uis from=[9241]
time=2025-06-30T13:39:49.821+02:00 level=TRACE source=bytepairencoding.go:246 msg=decoded string=" una" from=[5093]
time=2025-06-30T13:39:50.001+02:00 level=TRACE source=bytepairencoding.go:246 msg=decoded string=" me" from=[752]
time=2025-06-30T13:39:50.329+02:00 level=TRACE source=bytepairencoding.go:246 msg=decoded string=cum from=[59253]
time=2025-06-30T13:39:50.531+02:00 level=TRACE source=bytepairencoding.go:246 msg=decoded string=" cum" from=[12177]
time=2025-06-30T13:39:50.717+02:00 level=TRACE source=bytepairencoding.go:246 msg=decoded string=" d" from=[294]
time=2025-06-30T13:39:50.903+02:00 level=TRACE source=bytepairencoding.go:246 msg=decoded string=edit from=[3587]
time=2025-06-30T13:39:51.125+02:00 level=TRACE source=bytepairencoding.go:246 msg=decoded string=" omn" from=[49097]
time=2025-06-30T13:39:51.325+02:00 level=TRACE source=bytepairencoding.go:246 msg=decoded string=ibus from=[33395]
time=2025-06-30T13:39:51.605+02:00 level=TRACE source=bytepairencoding.go:246 msg=decoded string=" omn" from=[49097]
time=2025-06-30T13:39:51.783+02:00 level=TRACE source=bytepairencoding.go:246 msg=decoded string=ibus from=[33395]
time=2025-06-30T13:39:52.099+02:00 level=TRACE source=bytepairencoding.go:246 msg=decoded string=" omn" from=[49097]
time=2025-06-30T13:39:52.437+02:00 level=TRACE source=bytepairencoding.go:246 msg=decoded string=ibus from=[33395]

@rick-github commented on GitHub (Jun 30, 2025):

The image reliably causes looping for qwen2.5vl:7b-q4_K_M on two RTX 4070s and an RTX 3080. The 4070s loop on the same text; the 3080 loops with different text. An AMD 8060S also exhibits this behaviour, but not as reliably: it will usually complete the query but occasionally falls into a loop in the early part of the Latin text.


@filips123 commented on GitHub (Jun 30, 2025):

I can also reproduce this locally on RTX 3060 with 7b Q4, even without flash attention enabled.

I also tried slightly resizing the image before loading it, and that seems to have fixed the problem. However, I have a lot of these documents that I need to run OCR on, so guessing which image size causes looping and resizing images accordingly doesn't seem to be a reliable solution.
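
For reference, a sketch of the resize-before-sending workaround described above, using Pillow (the 0.9 scale factor is only an illustration; as noted, there is no known size that reliably avoids the looping):

```python
# Sketch of the resize workaround; the 0.9 factor is arbitrary, not a known fix.
import base64
import io
from PIL import Image

def resized_image_b64(path: str, scale: float = 0.9) -> str:
    img = Image.open(path)
    img = img.resize((int(img.width * scale), int(img.height * scale)), Image.LANCZOS)
    buf = io.BytesIO()
    img.save(buf, format="PNG")                    # re-encode the downscaled page
    return base64.b64encode(buf.getvalue()).decode()

# image_b64 = resized_image_b64("scan.png")       # then send to the model as before
```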

So, why does this happen and is it possible to reliably prevent it?
