Mirror of https://github.com/open-webui/open-webui.git (synced 2026-05-07 03:18:23 -05:00)
Chatting with two models at once, second request takes history (first answer) from one model and uses it in request to both models #4739 #1842
Originally created by @vlsav on GitHub (Aug 22, 2024).
Same issue as in #4739
Checked with open-webui v0.3.15
Is it possible to have separate chats with several models, where each model's responses are sent back only to that model, not to the others?
@tjbck commented on GitHub (Aug 22, 2024):
0.3.15 doesn't have the issue; please make sure you've updated to the latest.
@vlsav commented on GitHub (Aug 22, 2024):
@tjbck please have a look at my screenshot, at the second request and the second answer from the two models. It is evident that both models received the answer to the first question from "model 2" in the context of the second request. I am wondering whether this is expected behavior, and whether it is possible to achieve a completely separate request/answer flow for two models, without merging the answers.
I encountered an issue with the screenshot, so I just copied/pasted from the chat window:
You
tell me short one line advice about business
Model1
"Focus on providing value to your customers."
Model2
"Build a business that solves a problem, not just a product that people want."
You
please write your advice in other words
Model1
Focus on creating value by solving a problem, rather than just selling a product.
Model2
"Create a business that answers a need, not just a product that people desire."
@tjbck commented on GitHub (Aug 22, 2024):
This is expected behaviour; subsequent requests will utilise the selected messages in the chat history.
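The behaviour described here can be sketched in a few lines. The following is a hypothetical illustration, not Open WebUI's actual code: the function name `build_history`, the `turns`/`selected` data shapes, and the model names are assumptions made for the example. The point is that only the *selected* assistant reply enters the shared history, so on the next turn both models receive the same prior answer.

```python
def build_history(turns, selected):
    """Flatten chat turns into the message list sent to every model.

    turns    -- list of dicts: {"user": str, "replies": {model_name: str}}
    selected -- dict mapping turn index -> which model's reply was selected
    """
    messages = []
    for i, turn in enumerate(turns):
        messages.append({"role": "user", "content": turn["user"]})
        # Only the selected reply is kept in the shared history; the
        # other model's reply is dropped, as described above.
        chosen = selected.get(i)
        if chosen is not None and turn["replies"]:
            messages.append(
                {"role": "assistant", "content": turn["replies"][chosen]}
            )
    return messages


turns = [
    {
        "user": "tell me short one line advice about business",
        "replies": {
            "model1": "Focus on providing value to your customers.",
            "model2": "Build a business that solves a problem, "
                      "not just a product that people want.",
        },
    },
    {"user": "please write your advice in other words", "replies": {}},
]

# Model 2's reply was selected, so BOTH models get it as context
# on the second turn -- matching the transcript above.
history = build_history(turns, selected={0: "model2"})
```

A fully separate flow per model would instead require one independent `messages` list per model, each containing only that model's own replies.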
@vlsav commented on GitHub (Aug 22, 2024):
@tjbck Thank you for the clarification. I see, so only one response (the selected one, or the merged one if chosen) is used. So the feature is more about getting a better result from several models than about comparing different models' answers. Do you have plans to support separate request/answer flows per model?