issue: Conversation with multiple models incorrectly reuses the context of only one model #5405
Originally created by @Jefferderp on GitHub (May 30, 2025).
Check Existing Issues
Installation Method
Pip Install
Open WebUI Version
v0.6.12
Ollama Version (if applicable)
No response
Operating System
Debian 12
Browser (if applicable)
No response
Confirmation
Expected Behavior
When chatting with multiple models side-by-side, it's expected that each model will maintain its own separate context history.
Actual Behavior
When chatting with multiple models side-by-side, I'm observing that only one model's previous replies are used as the context for all models; sometimes the leftmost model's, and sometimes the rightmost's. See screenshot:
Steps to Reproduce
Logs & Screenshots
N/A
Additional Information
To my knowledge, this behavior did not occur in earlier versions. I can look into my update history if that would be helpful.
If this is intended behavior, I'd be interested to learn why.
@tjbck commented on GitHub (May 31, 2025):
Intended behaviour: you're choosing one of the responses and then continuing the conversation.
@devdev999 commented on GitHub (Jun 1, 2025):
In this case, what is the logic to select the response that is used for continuation?
@RodolfoCastanheira commented on GitHub (Jun 3, 2025):
There is a subtle indicator of the selected answer. You select it by clicking it.
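To make the selection behavior concrete, here is a minimal TypeScript sketch of the idea described above (this is illustrative only, not Open WebUI's actual code; the `Turn`, `selected`, and `buildContext` names are hypothetical): each turn keeps all sibling responses, and only the clicked one is included when the context for the next prompt is built.

```typescript
// Hypothetical model of multi-model chat turns (not Open WebUI's real code):
// each turn stores every model's reply plus the index of the selected reply.
interface Reply {
  model: string;
  content: string;
}

interface Turn {
  prompt: string;
  replies: Reply[]; // one reply per model, shown side-by-side
  selected: number; // index of the reply the user clicked
}

// Build the context for the next prompt: the user prompt and ONLY the
// selected reply from each turn; unselected siblings are dropped.
function buildContext(turns: Turn[]): string[] {
  const context: string[] = [];
  for (const turn of turns) {
    context.push(`user: ${turn.prompt}`);
    const chosen = turn.replies[turn.selected];
    context.push(`assistant (${chosen.model}): ${chosen.content}`);
  }
  return context;
}

// Clicking a different sibling reply just changes `selected`,
// which switches the branch used for all follow-up messages.
const turns: Turn[] = [
  {
    prompt: "Summarize X",
    replies: [
      { model: "llama3", content: "Summary A" },
      { model: "mistral", content: "Summary B" },
    ],
    selected: 1, // the user clicked the mistral reply
  },
];

console.log(buildContext(turns));
// only "Summary B" reaches the context; "Summary A" is never sent
```

This also explains the "disappearing replies" described later in the thread: follow-up messages belong to the branch rooted at the selected reply, so switching the selection switches which follow-ups are visible.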
@Jefferderp commented on GitHub (Jun 3, 2025):
This behavior wasn't evident to me until now. The highlighting effect in dark mode is way too subtle, though I can see it now that I know to look.
I had previously observed that clicking on different answers sometimes made my follow-up replies disappear until I refreshed the page. That happened because I was switching conversation branches, so what I thought was a UI glitch was actually a feature...
I think this should be better illustrated for the sake of all casual users. Until now, I was under the impression that prompting multiple models was for holding parallel conversations, not for picking the best answer and continuing from there. While the current workflow makes more sense, it's not visually intuitive.
@Classic298 commented on GitHub (Jun 3, 2025):
Ah yes, I see it, but it is way too subtle; there is like a 1% color/brightness difference at most.