Mirror of https://github.com/open-webui/open-webui.git (synced 2026-03-22 14:13:08 -05:00)
Issue #6399: Default model is changed when switching between chats
Originally created by @tigran123 on GitHub (Sep 14, 2025).
Installation Method: Pip Install
Open WebUI Version: 0.6.28
Ollama Version: 0.11.10
Operating System: Ubuntu Linux 22.04.5
Browser: Chrome 138.0.7204.168
Expected Behavior
If a model is set as default, then selecting some chat (with a different model) and then clicking on "New Chat" should use the default model and not the model that happens to be used in that particular chat. Otherwise what is the meaning of "default model"?
Actual Behavior
Switching to a chat sets the model used for subsequent chats created via "New Chat" to the model of that particular chat, not the configured default.
Steps to Reproduce
(Models involved: gpt-oss:120b and gpt-oss:20b.) So the bug is that we have to click on "New Chat" twice for the model to be set correctly.
Logs & Screenshots
Additional Information
No response
@tusharrrr1 commented on GitHub (Sep 14, 2025):
In the component/function that handles New Chat creation (likely `frontend/src/components/chat/NewChatButton.tsx` or wherever chat state is initialized):
- On New Chat, always initialize with the global default model.
- Do not inherit the model from the currently selected chat.
Allow me to make these changes if possible
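The suggested change can be sketched as a small helper (hypothetical names and structure; the actual Open WebUI component code is organized differently):

```typescript
// Hypothetical helper illustrating the proposal: a new chat always
// starts with the global default model, never the previous chat's model.
function initialModelForNewChat(
  defaultModel: string,
  previousChatModel: string | null
): string {
  // Deliberately ignore the model of the chat we navigated away from.
  void previousChatModel;
  return defaultModel;
}
```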
@tjbck commented on GitHub (Sep 15, 2025):
This is intended behaviour: the model selection is inherited from the previous chat. If you click "New Chat" again, it'll fall back to the default model.
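The behaviour described here (first click inherits, second click resets) can be sketched as a small state transition; the names below are hypothetical, and the real Open WebUI state handling is structured differently:

```typescript
// Sketch of the intended "New Chat" toggle behaviour, under the
// assumption that the UI tracks whether the current view is already
// a freshly created, empty chat.
type ChatModelState = {
  selectedModel: string;   // model currently shown in the selector
  isFreshNewChat: boolean; // true right after a "New Chat" click
};

function onNewChatClick(
  state: ChatModelState,
  defaultModel: string
): ChatModelState {
  if (state.isFreshNewChat) {
    // Second consecutive click: fall back to the global default model.
    return { selectedModel: defaultModel, isFreshNewChat: true };
  }
  // First click: inherit the model from the chat we came from.
  return { selectedModel: state.selectedModel, isFreshNewChat: true };
}
```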
@tigran123 commented on GitHub (Sep 15, 2025):
Ok, understood. It is a bit counter-intuitive, but perfectly acceptable. And it gives an extra function of being able to quickly have a new chat with the same model.
Generally, I am very happy with open-webui, btw. Until recently I was boasting that my Sigma AI is the only AI web chat system in existence that handles LaTeX formulae on input as well as output (ChatGPT does so only on output), but now I see that open-webui handles this just as well as Sigma does.
The only thing that is lacking in open-webui (but present in Sigma AI) is the ability to chat with multiple LLMs simultaneously. I gave a presentation about it at Oxford University as part of their AI course (see https://lifelong-learning.ox.ac.uk/tutors/27519) and there was some interest in this idea. Maybe some day you can implement this in open-webui as well (or I may help, if I am not too old yet -- retired now :)
@Classic298 commented on GitHub (Sep 15, 2025):
@tigran123
This is possible in Open WebUI as well
Press the little + button next to the model selector
@rgaricano commented on GitHub (Sep 15, 2025):
Or the multimodel function together with the hotswap agent, for inter-model chats: https://github.com/pkeffect/functions/tree/main/functions/filters/multimodel
@tigran123 commented on GitHub (Sep 15, 2025):
Tried it now, thank you. It is not quite multi-LLM support -- it could be called "simultaneous LLM support". What I meant is the ability for many LLMs (not necessarily different -- they could be clones of the same LLM with different parameters, system prompt, etc.) to actually interact with each other. Specifically, there are two modes:
Parallel Mode -- all LLMs work on the prompt independently, i.e. they do not see each other's answers. This is "unbiased": each LLM provides its own solution to the problem independently. But they still see all of the context, including the other LLMs' replies to previous prompts, of course.
Sequential Mode -- each LLM (in a particular order, which could be configurable) sees the prompt plus all the answers of the previous (active) LLMs. This way one can give tasks like:
gpt-4: calculate Dirichlet integral
llama3: verify gpt-4's calculation and provide an alternative method
For brainstorming it is quite useful, but of course the system prompt for each LLM has to be carefully written, i.e. the LLM has to be told that it is participating in a multi-LLM (plus one human) conversation, should prefix its replies with its name, and should treat the other LLMs' prefixed replies accordingly, etc.
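The "Sequential Mode" described above can be sketched as a simple orchestration loop. This is not Sigma AI's or Open WebUI's actual code; `callModel` is a hypothetical stand-in for whatever backend call a real implementation would use:

```typescript
// Sketch of Sequential Mode: each model, in a configurable order, sees
// the user prompt plus the name-prefixed replies of all earlier models.
type ModelFn = (modelName: string, prompt: string) => string;

function sequentialRound(
  models: string[],   // order is configurable
  userPrompt: string,
  callModel: ModelFn  // stand-in for a real LLM backend call
): { model: string; reply: string }[] {
  const replies: { model: string; reply: string }[] = [];
  for (const model of models) {
    // Build the context: the prompt plus earlier replies, each prefixed
    // with its model's name so later models can attribute them.
    const context =
      userPrompt +
      replies.map((r) => `\n${r.model}: ${r.reply}`).join("");
    replies.push({ model, reply: callModel(model, context) });
  }
  return replies;
}
```

With models `["gpt-4", "llama3"]`, the second model's prompt contains the first model's named reply, matching the "verify gpt-4's calculation" example above. Parallel Mode would instead call every model with the same `userPrompt` and no sibling replies.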
In Sigma AI all this works fine, but right now the system is broken because the model that generates the title (llama3) was decommissioned by Groq, so I just need to make a trivial change to switch to a more recent model. But seeing how wonderful open-webui is (seriously!) I am hesitating over whether it is worth continuing to develop Sigma AI -- maybe I should learn this "Svelte" framework (never heard of it before) and help you guys instead...
@tigran123 commented on GitHub (Sep 15, 2025):
Oh, this is more like what I was talking about, yes. I will look into this later, thank you!