[GH-ISSUE #19518] issue: Duplicate images displayed in gemini-3-pro-image-preview chats #57576

Closed
opened 2026-05-05 21:08:38 -05:00 by GiteaMirror · 23 comments

Originally created by @davidpede on GitHub (Nov 26, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/19518

Check Existing Issues

  • I have searched for any existing and/or related issues.
  • I have searched for any existing and/or related discussions.
  • I have also searched in the CLOSED issues AND CLOSED discussions and found no related items (your issue might already be addressed on the development branch!).
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

v0.6.40

Ollama Version (if applicable)

No response

Operating System

Ubuntu

Browser (if applicable)

Edge

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

I am chatting directly with the model 'gemini-3-pro-image-preview' (provided by OpenRouter). I'm not using the built-in Image Generation feature.

One image should be displayed when returned by the model.

Actual Behavior

Three duplicate images are displayed instead.

I have disabled the built-in Image Generation feature as suggested here: https://github.com/open-webui/open-webui/issues/18998#issuecomment-3574891300

Model 'google/gemini-2.5-flash-image' displays one image correctly.

Steps to Reproduce

Set the following in the Docker environment:

```
CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE=10485760
ENABLE_CHAT_RESPONSE_BASE64_IMAGE_URL_CONVERSION=true
```

Then:

1. Ask 'gemini-3-pro-image-preview' (via OpenRouter) for an image: 3 images displayed.
2. Ask 'gemini-2.5-flash-image' (via OpenRouter) for an image: 1 image displayed.
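
For reference, a minimal Docker launch with these variables set might look like the following (a sketch based on the standard run command from the Open WebUI README; adjust the image tag, port, and volume to your deployment):

```
# Hedged sketch: standard Open WebUI container with the two variables set.
docker run -d -p 3000:8080 \
  -e CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE=10485760 \
  -e ENABLE_CHAT_RESPONSE_BASE64_IMAGE_URL_CONVERSION=true \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```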

Logs & Screenshots

[Screenshot 1](https://github.com/user-attachments/assets/6c7a815d-50be-4699-a498-2425d6677127) [Screenshot 2](https://github.com/user-attachments/assets/9665cda1-8fe0-4dcc-b578-f735476ab272)

Additional Information

No response

GiteaMirror added the bug label 2026-05-05 21:08:38 -05:00

@Classic298 commented on GitHub (Nov 26, 2025):

@ShirasawaSama


@ShirasawaSama commented on GitHub (Nov 26, 2025):

I apologize for not being able to troubleshoot this issue today due to other matters. I will do my best to address it tomorrow or attempt a fix in the background.


@Classic298 commented on GitHub (Nov 26, 2025):

thanks!


@davidpede commented on GitHub (Nov 26, 2025):

Thanks!


@tjbck commented on GitHub (Nov 26, 2025):

If you're using OpenRouter models, they're entirely separate from our built-in image generation/edit pipeline.


@art-vish commented on GitHub (Jan 13, 2026):

I am experiencing an issue where every image generated (or returned) by the model is displayed three times in the chat interface. This happens when using the latest version of Open WebUI with LiteLLM as a proxy for open-router/google/gemini-3-pro-image-preview.

My Setup:

  • Open WebUI Version: Latest (pulled today).
  • Backend: LiteLLM.
  • Model: google/gemini-3-pro-image-preview.
  • Settings:
    • Image Generation is DISABLED in the WebUI settings.
    • Image Edit is DISABLED in the WebUI settings.

ENVs:

```
CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE: 20971520
ENABLE_CHAT_RESPONSE_BASE64_IMAGE_URL_CONVERSION: "true"
REPLACE_IMAGE_URLS_IN_CHAT_RESPONSE: "true"
```
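
In a docker-compose deployment these would sit under the service's `environment` block, for example (a sketch; the service name and image tag are assumptions):

```
# Hedged sketch: compose-style environment block for the variables above.
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE: "20971520"
      ENABLE_CHAT_RESPONSE_BASE64_IMAGE_URL_CONVERSION: "true"
      REPLACE_IMAGE_URLS_IN_CHAT_RESPONSE: "true"
```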

LiteLLM part of conf:

```
      custom_llm_provider: openai
      stream: false
      extra_body:
        modalities: ["image", "text"]
```

[Screenshot](https://github.com/user-attachments/assets/6e073338-ed13-4bb5-bdad-110ed12b5e85)
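
For orientation, this fragment presumably sits under `litellm_params` in the proxy's config.yaml; a complete entry might look like the sketch below (the `model_name` and API-key wiring are illustrative assumptions, not the exact file used here):

```
# Hedged sketch of a full LiteLLM model entry around the fragment above;
# model_name and the environment-variable reference are assumptions.
model_list:
  - model_name: gemini-3-pro-image-preview
    litellm_params:
      model: openrouter/google/gemini-3-pro-image-preview
      api_key: os.environ/OPENROUTER_API_KEY
      custom_llm_provider: openai
      stream: false
      extra_body:
        modalities: ["image", "text"]
```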

@Classic298 commented on GitHub (Jan 14, 2026):

@ShirasawaSama what are potential misconfiguration issues here?


@Joly0 commented on GitHub (Jan 30, 2026):

Can this issue please be re-opened? I am hitting the same error with LiteLLM, and duplicate images are being shown:

[Screenshot 1](https://github.com/user-attachments/assets/c419c557-2e74-4f25-b955-8de439899665) [Screenshot 2](https://github.com/user-attachments/assets/bdaeeaae-6439-466b-9717-7f2390db6c11)

@Classic298 commented on GitHub (Jan 30, 2026):

I cannot reproduce.

If you have a way to reproduce this 100%, even on a fresh, clean setup, then please open a new issue with reproduction steps.


@Joly0 commented on GitHub (Jan 30, 2026):

> I cannot reproduce.
>
> If you have a way to reproduce this 100%, even on a fresh, clean setup, then please open a new issue with reproduction steps.

I mean, I just set up LiteLLM and configured it in Open WebUI and immediately hit this issue, so it seems to me this is 100% reproducible on my end.


@VyacheslavTeplyakov commented on GitHub (Feb 4, 2026):

v0.7.2
Still there

[Screenshot](https://github.com/user-attachments/assets/0ab64b55-fb9d-4d88-8a9b-6a2f1f1e3409)

@Classic298 commented on GitHub (Feb 4, 2026):

Without clear steps to reproduce, this will be hard to fix.


@VyacheslavTeplyakov commented on GitHub (Feb 5, 2026):

> Without clear steps to reproduce, this will be hard to fix.

EZ: clean Docker install of v0.7.2 + OpenRouter + google/gemini-3-pro-image-preview + prompt "rabbit".


@Classic298 commented on GitHub (Feb 5, 2026):

"Clean docker install v0.7.2 + openrouter + google/gemini-3-pro-image-preview + prompt "rabbit""

Ok i will test this later today. But I already tested this exact same setup before with gemini-2.5-flash-image and i could not reproduce.


@VyacheslavTeplyakov commented on GitHub (Feb 5, 2026):

"Clean docker install v0.7.2 + openrouter + google/gemini-3-pro-image-preview + prompt "rabbit""

Ok i will test this later today. But I already tested this exact same setup before with gemini-2.5-flash-image and i could not reproduce.

I can provide you with any information or logs you need, just ask.


@Joly0 commented on GitHub (Feb 5, 2026):

"Clean docker install v0.7.2 + openrouter + google/gemini-3-pro-image-preview + prompt "rabbit""

Ok i will test this later today. But I already tested this exact same setup before with gemini-2.5-flash-image and i could not reproduce.

Btw, its correct that gemini-2.5-flash-image doesnt have this problem. With the exact same settings, 2.5 only generates 1 image while 3.0-pro-image-preview generates 2.
I added both to my litellm config with the exact same config (except the background model, but both from openrouter), only changed the model in owui and i even used the exact same chat with the exact same message (i changed the model and then re-send my original prompt).


@Classic298 commented on GitHub (Feb 6, 2026):

[Screenshot](https://github.com/user-attachments/assets/126a8485-833b-4f24-9141-74a42d208c39)

Fresh setup: OpenRouter, gemini-3-pro-image-preview.

Cannot reproduce.

Guys, if you want me to fix this, I'm going to need reproduction steps.


@Joly0 commented on GitHub (Feb 6, 2026):

> [Screenshot](https://github.com/user-attachments/assets/126a8485-833b-4f24-9141-74a42d208c39)
>
> Fresh setup: OpenRouter, gemini-3-pro-image-preview.
>
> Cannot reproduce.
>
> Guys, if you want me to fix this, I'm going to need reproduction steps.

I think the problem (at least for me, reliably) is not when I select the model as the chat model, but when I have it configured in the admin image settings (that's why I mentioned LiteLLM) and let the normal chat model (gpt-5.2-chat for me) run image generation with gemini-3.0-pro-preview as a tool (i.e. via the image settings).


@Joly0 commented on GitHub (Feb 6, 2026):

Here, in the same chat:

[Screenshot 1](https://github.com/user-attachments/assets/acc08e41-30d3-4550-9e80-665170df2ee8) [Screenshot 2](https://github.com/user-attachments/assets/00d86ac8-f132-41c0-9354-b28a73342445) [Screenshot 3](https://github.com/user-attachments/assets/481590dc-72bb-4c58-be6c-bc451706875c)

My settings are as follows:

[Screenshot](https://github.com/user-attachments/assets/baa01156-47f1-4290-a666-21813cde1be0)

I use "gpt-5.2-chat" as the chat model through OpenRouter, and gemini-3.0-pro-preview also from OpenRouter but routed through LiteLLM (as OpenRouter models are not directly supported in the image settings). Also, what seems to be important is to have the chat model configured with "native function calling" enabled and the "image" toggle turned off in the chat. Though it doesn't seem to make a difference whether I enable or disable that.

But these are my settings and the steps I took to hit the problem.


@Joly0 commented on GitHub (Feb 16, 2026):

Hey @Classic298, following up on my last message: were you able to replicate the issue?


@Classic298 commented on GitHub (Feb 16, 2026):

Sorry, I don't have LiteLLM set up locally; I can try in another environment. What exactly do you want me to try, and how are the models configured precisely (in LiteLLM as well)? Thanks.


@Joly0 commented on GitHub (Feb 16, 2026):

I think, except for the LiteLLM config, my previous message should include everything needed.
My LiteLLM config is this:

[Screenshot](https://github.com/user-attachments/assets/ac80cb87-6556-4261-9947-3824a779fd33)

```
model_list:
  - model_name: gemini-3-pro-image-preview
    litellm_params:
      model: openrouter/google/gemini-3-pro-image-preview
      api_key: os.environ/OPENROUTER_API_KEY
      api_base: os.environ/OPENROUTER_API_BASE
      # drop_params: true
      additional_drop_params: ["response_format"]
  - model_name: gemini-2.5-flash-image
    litellm_params:
      model: openrouter/google/gemini-2.5-flash-image
      api_key: os.environ/OPENROUTER_API_KEY
      api_base: os.environ/OPENROUTER_API_BASE
      # drop_params: true
      additional_drop_params: ["response_format"]
```

With this config, gemini-2.5 works while 3.0 shows the error from my previous image. It's important to use a normal chat model (either in native tool-calling mode or default mode; for me it happens every time in native mode, but I'm not sure that matters) and to use the OWUI built-in image generation/editing feature. If you use the image model as the chat model, this bug doesn't appear; it only happens when using a normal chat model (gpt-5.2-chat in my case) together with the image generation feature.
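
A quick way to isolate where the duplication happens is to call the LiteLLM proxy directly and count the images in the raw response, bypassing Open WebUI entirely. A minimal sketch, assuming a default LiteLLM deployment on port 4000 with a master key (both assumptions), and OpenRouter's convention of returning generated images in the message's `images` array:

```
# Hedged sketch: query the proxy directly, bypassing Open WebUI.
# Port 4000 and $LITELLM_MASTER_KEY are assumptions from a default
# LiteLLM setup; the "images" field follows OpenRouter's response shape.
curl -s http://localhost:4000/v1/chat/completions \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gemini-3-pro-image-preview",
        "messages": [{"role": "user", "content": "rabbit"}],
        "modalities": ["image", "text"]
      }' \
  | python3 -c "import json,sys; m=json.load(sys.stdin)['choices'][0]['message']; print('images returned:', len(m.get('images', [])))"
```

If the proxy returns one image here but Open WebUI renders several, the duplication is happening in Open WebUI's rendering/conversion path rather than upstream.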


@Classic298 commented on GitHub (Feb 28, 2026):

@Joly0 just for reference, I tried this recently and still could not reproduce.

Reference: github-starred/open-webui#57576