[GH-ISSUE #14973] issue: Chat context is lost when multiple models are used simultaneously #56093

Closed
opened 2026-05-05 18:40:02 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @reduardo7 on GitHub (Jun 13, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/14973

Check Existing Issues

  • [x] I have searched the existing issues and discussions.
  • [x] I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

v0.6.14

Ollama Version (if applicable)

No response

Operating System

Kubernetes

Browser (if applicable)

No response

Confirmation

  • [x] I have read and followed all instructions in README.md.
  • [x] I am using the latest version of both Open WebUI and Ollama.
  • [ ] I have included the browser console logs.
  • [ ] I have included the Docker container logs.
  • [ ] I have provided every relevant configuration, setting, and environment variable used in my setup.
  • [ ] I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc.).
  • [x] I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
    • Start with the initial platform/version/OS and dependencies used,
    • Specify exact install/launch/configure commands,
    • List URLs visited, user input (incl. example values/emails/passwords if needed),
    • Describe all options and toggles enabled or changed,
    • Include any files or environmental changes,
    • Identify the expected and actual result at each stage,
    • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

Each model should retain and continue its own conversation context within the same chat thread.

Actual Behavior

When continuing a conversation with multiple models selected in the same chat, the context is not retained properly. Replies don’t seem to consider the original prompt and previous interactions.

Steps to Reproduce

  1. Select 5 random models in a new chat.
  2. Prompt: “Give me any female name.”
    ⟶ Each model responds with a different name.
  3. Prompt: “Can you repeat the name you gave me?”
    ⟶ Expected: Each model replies with the name it originally provided.
    ⟶ Actual: All models responded with exactly the same name: “Laura”.
  4. Continue test with: “Can you repeat the name you gave me for each model?”

Obtained responses:

  • “Sure, the name I gave you for each model is: Laura.”
  • “GPT-3.5: Laura GPT-4: Laura” (These models weren’t even selected)
  • “Laura, Laura, Laura, Laura, Laura, Laura, Laura, Laura.” (Seems like everything got mixed?)
  • “Sure, the name I gave you is: Laura.” (It looks like only one response was used?)
  • “I don’t understand which models you're referring to, I only gave you one name: Laura. Do you want more names?”

Logs & Screenshots

![Image](https://github.com/user-attachments/assets/731805b3-8474-4c2a-8b50-c62e318858b7)
![Image](https://github.com/user-attachments/assets/f686baa6-a710-4743-a737-992a44f7ccfc)
![Image](https://github.com/user-attachments/assets/e3858c11-750f-4052-b98d-4c0861d0b7d4)

Additional Information

No response

GiteaMirror added the bug label 2026-05-05 18:40:02 -05:00
Author
Owner

@Classic298 commented on GitHub (Jun 13, 2025):

Not a bug, but maybe a display bug.

You actually select one of the three responses. See that the middle response with "Laura" has an ever-so-slightly brighter box: you selected this response, and this is the response that will be in the context.
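The mechanism described above can be sketched as follows. This is a hedged illustration only, not Open WebUI's actual code; all function and field names (`build_context`, `turns`, `selected`, etc.) are hypothetical. The point is that each assistant turn keeps only the one selected reply, so every model's follow-up is primed with the same text.

```python
def build_context(turns):
    """Flatten a multi-model chat into a message list, keeping only the
    *selected* model response for each assistant turn (hypothetical sketch)."""
    context = []
    for turn in turns:
        context.append({"role": "user", "content": turn["prompt"]})
        responses = turn["responses"]        # one reply per model
        selected = turn.get("selected", 0)   # index of the highlighted reply
        context.append({"role": "assistant",
                        "content": responses[selected]["content"]})
    return context

# Reproduces the situation from the report: three models answered,
# the middle ("Laura") reply is the selected one.
turns = [{
    "prompt": "Give me any female name.",
    "responses": [
        {"model": "model-a", "content": "Sofia"},
        {"model": "model-b", "content": "Laura"},   # selected reply
        {"model": "model-c", "content": "Emma"},
    ],
    "selected": 1,
}]

ctx = build_context(turns)
# Only "Laura" survives into the shared context, so on the next turn
# every model sees the same prior answer.
```

Under this model, the behavior in the report is expected rather than a context-loss bug; the real problem is that the selection highlight is nearly invisible.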


@tjbck commented on GitHub (Jun 14, 2025):

Intended behaviour.


@reduardo7 commented on GitHub (Jun 14, 2025):

> Not a bug, but maybe a display bug.
>
> You actually select one of the three responses. See that the middle response with "Laura" has an ever-so-slightly brighter box: you selected this response, and this is the response that will be in the context.

Thanks for the quick reply! Honestly, it's imperceptible that one of the responses is selected. I understand this is more of a UI/UX issue; the UI probably needs a bit of improvement here.


@Classic298 commented on GitHub (Jun 14, 2025):

working on a PR to address this....


@reduardo7 commented on GitHub (Jun 14, 2025):

> working on a PR to address this....

Hi @Classic298! This is probably fixed by https://github.com/open-webui/open-webui/pull/14984.


@Classic298 commented on GitHub (Jun 14, 2025):

Haha, you're very quick. I'll submit my PR anyway and let @tjbck decide which to merge, or how to implement it. I think Tim will agree that SOME change will be necessary, because at the moment it's basically impossible to see which message is selected.

My implementation will be ready in a minute, and it will make it very clear which message is selected.


Reference: github-starred/open-webui#56093