Mirror of https://github.com/open-webui/open-webui.git (synced 2026-05-06 19:08:59 -05:00)
[GH-ISSUE #19518] issue: Duplicate images displayed in gemini-3-pro-image-preview chats #57576
Originally created by @davidpede on GitHub (Nov 26, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/19518
Check Existing Issues
Installation Method
Docker
Open WebUI Version
v0.6.40
Ollama Version (if applicable)
No response
Operating System
Ubuntu
Browser (if applicable)
Edge
Confirmation
Expected Behavior
I am direct chatting with model 'gemini-3-pro-image-preview' (provided by OpenRouter). I'm not using the in-built Image Generation feature.
One image should be displayed when returned by the model.
Actual Behavior
Three duplicate images are displayed instead.
I have disabled the in-built Image Generation feature as suggested here: https://github.com/open-webui/open-webui/issues/18998#issuecomment-3574891300
Model 'google/gemini-2.5-flash-image' displays one image correctly.
Steps to Reproduce
Set the following in docker environment:
CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE=10485760
ENABLE_CHAT_RESPONSE_BASE64_IMAGE_URL_CONVERSION=true
Ask 'gemini-3-pro-image-preview' (via OpenRouter) for an image (3 displayed)
Ask 'gemini-2.5-flash-image' (via openrouter) for an image (1 displayed)
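For reference, the two environment variables from the reproduction steps can be passed to the container like this. This is a minimal sketch, not the reporter's exact command: only the two variables and the version (v0.6.40) come from the report; the port mapping, container name, and image tag are placeholder assumptions to adjust for your setup.

```shell
# Hypothetical docker run invocation -- only the two -e variables are
# taken from the report; ports/name/tag are placeholders.
docker run -d \
  -p 3000:8080 \
  -e CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE=10485760 \
  -e ENABLE_CHAT_RESPONSE_BASE64_IMAGE_URL_CONVERSION=true \
  --name open-webui \
  ghcr.io/open-webui/open-webui:v0.6.40
```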
Logs & Screenshots
Additional Information
No response
@Classic298 commented on GitHub (Nov 26, 2025):
@ShirasawaSama
@ShirasawaSama commented on GitHub (Nov 26, 2025):
I apologize for not being able to troubleshoot this issue today due to other matters. I will do my best to address it tomorrow or attempt a fix in the background.
@Classic298 commented on GitHub (Nov 26, 2025):
thanks!
@davidpede commented on GitHub (Nov 26, 2025):
Thanks!
@tjbck commented on GitHub (Nov 26, 2025):
If you're using open router models, they're entirely separate from our built-in image generation/edit pipeline.
@art-vish commented on GitHub (Jan 13, 2026):
I am experiencing an issue where every image generated (or returned) by the model is displayed three times in the chat interface. This happens when using the latest version of Open WebUI with LiteLLM as a proxy for open-router/google/gemini-3-pro-image-preview.
My Setup:
- Image Generation is DISABLED in the WebUI settings.
- Image Edit is DISABLED in the WebUI settings.
ENVs:
LiteLLM part of conf:
@Classic298 commented on GitHub (Jan 14, 2026):
@ShirasawaSama what are potential misconfiguration issues here?
@Joly0 commented on GitHub (Jan 30, 2026):
Can this issue please be re-opened? I am hitting the same error with litellm and duplicate images being shown:
@Classic298 commented on GitHub (Jan 30, 2026):
I cannot reproduce.
If you have a way to 100% reproduce this, even on a fresh clean setup, then open a new issue with reproduction steps
@Joly0 commented on GitHub (Jan 30, 2026):
I mean, I just set up LiteLLM and configured it in Open WebUI and immediately hit this issue, so it seems 100% reproducible on my end.
@VyacheslavTeplyakov commented on GitHub (Feb 4, 2026):
v0.7.2
Still there
@Classic298 commented on GitHub (Feb 4, 2026):
Without clear steps to reproduce, this will be hard to fix.
@VyacheslavTeplyakov commented on GitHub (Feb 5, 2026):
@Classic298 commented on GitHub (Feb 5, 2026):
"Clean docker install v0.7.2 + openrouter + google/gemini-3-pro-image-preview + prompt "rabbit""
OK, I will test this later today. But I already tested this exact same setup before with gemini-2.5-flash-image and could not reproduce.
@VyacheslavTeplyakov commented on GitHub (Feb 5, 2026):
I can provide you with any information or logs you need, just ask.
@Joly0 commented on GitHub (Feb 5, 2026):
Btw, it's correct that gemini-2.5-flash-image doesn't have this problem. With the exact same settings, 2.5 only generates 1 image while 3.0-pro-image-preview generates 2.
I added both to my LiteLLM config with the exact same configuration (except the background model, but both from OpenRouter), only changed the model in OWUI, and I even used the exact same chat with the exact same message (I changed the model and then re-sent my original prompt).
@Classic298 commented on GitHub (Feb 6, 2026):
Fresh setup:
OpenRouter
gemini-3-pro-image-preview
Cannot reproduce.
Guys, if you want me to fix this, I'm going to need reproduction steps.
@Joly0 commented on GitHub (Feb 6, 2026):
I think the problem (at least for me, reliably) is not when I select the model as the chat model, but when I have it configured (that's why I mentioned LiteLLM) in the admin image settings and let the normal chat model (gpt-5.2-chat for me) use image generation with gemini-3.0-pro-preview as a tool (i.e. via the image settings).
@Joly0 commented on GitHub (Feb 6, 2026):
Here in the same chat
My settings are as follows:
But these are my settings and the steps I took to hit the problem.
@Joly0 commented on GitHub (Feb 16, 2026):
Hey @Classic298 was wondering with my last message if you were able to replicate the issue?
@Classic298 commented on GitHub (Feb 16, 2026):
Sorry, I don't have LiteLLM set up locally. I can try in another environment. What do you want me to try exactly, and how are the models configured precisely (in LiteLLM as well)? Thanks.
@Joly0 commented on GitHub (Feb 16, 2026):
I think except for the LiteLLM config my previous message should include everything needed as information.
LiteLLM config for me is this:
With this config, gemini-2.5 works while 3.0 shows the error from my previous image. It's important to use a normal chat model (either in native tool-calling mode or default mode; for me it reproduces every time in native mode, but I'm not sure if that matters) and to use the OWUI built-in image generation/editing feature. If you use the image model as the chat model, this bug doesn't appear; it only occurs when using a normal chat model (gpt-5.2-chat in my case) together with the image generation feature.
@Classic298 commented on GitHub (Feb 28, 2026):
@Joly0 just for reference, I tried this recently and still could not reproduce.