[GH-ISSUE #2303] Large number of images causes hanging/error #47840

Closed
opened 2026-04-28 05:28:02 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @jmorganca on GitHub (Feb 1, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2303

Submitting too many images via the `images` API parameter in `/api/generate` or `/api/chat` causes the llama server to crash:

```
[1706759134] slot 0 - loaded image
[1706759135] slot 0 - loaded image
[1706759135] slot 0 - loaded image
[1706759135] slot 0 - loaded image
[1706759135] slot 0 - loaded image
[1706759135] slot 0 is processing [task id: 0]
[1706759135] slot 0 : kv cache rm - [0, end)
[1706759135] slot 0 - encoding image [id: 0]
[1706759135] slot 0 - encoding image [id: 1]
[1706759136] slot 0 - encoding image [id: 2]
[1706759136] slot 0 - encoding image [id: 3]
[1706759136] slot 0 - encoding image [id: 4]
[1706759144] ingest_images : failed to eval image
[1706759144] failed processing images
[1706759144] unexpected error in llama server update_slots - exiting main loop
[1706759144] llama server shutting down
```
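A failing request can be reproduced with a payload along these lines (a sketch only: the model name is an example and the base64 image data is a placeholder, not real image content):

```python
import json

# Hypothetical request body for POST /api/generate. The "images" field
# takes a list of base64-encoded image strings; placeholder data here.
payload = {
    "model": "llava",
    "prompt": "Describe these images.",
    "images": ["<base64-encoded image>"] * 5,  # five images, as in the log above
}

# Serialize as it would be sent over HTTP.
body = json.dumps(payload)
```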

This seems to be because `llama_decode` returns an error due to an insufficient context window when evaluating the image embeddings.
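The failure mode can be sketched as a simple token-budget check. This is an illustration only, under the assumption that each image is encoded to a fixed number of embedding positions (576 is the figure for a LLaVA-1.5-style CLIP encoder) and that the default context is 2048 tokens; it is not ollama's actual accounting.

```python
# Assumed per-image embedding cost (LLaVA-1.5-style CLIP encoder).
EMBEDDINGS_PER_IMAGE = 576

def images_fit(num_images: int, prompt_tokens: int, num_ctx: int = 2048) -> bool:
    """Return True if the prompt plus all image embeddings fit in num_ctx."""
    return prompt_tokens + num_images * EMBEDDINGS_PER_IMAGE <= num_ctx
```

Under these assumptions, five images alone need 5 × 576 = 2880 positions and can never fit in a 2048-token context, which would explain why decoding fails partway through the fifth image in the log above.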

GiteaMirror added the bug label 2026-04-28 05:28:02 -05:00

Reference: github-starred/ollama#47840