issue: Default model is changed when switching between chats #6399

Closed
opened 2025-11-11 16:54:02 -06:00 by GiteaMirror · 7 comments

Originally created by @tigran123 on GitHub (Sep 14, 2025).

Check Existing Issues

  • I have searched for any existing and/or related issues.
  • I have searched for any existing and/or related discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Pip Install

Open WebUI Version

0.6.28

Ollama Version (if applicable)

0.11.10

Operating System

Ubuntu Linux 22.04.5

Browser (if applicable)

Chrome 138.0.7204.168

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

If a model is set as default, then selecting some chat (with a different model) and then clicking "New Chat" should use the default model, not the model that happens to be used in that particular chat. Otherwise, what is the meaning of "default model"?

Actual Behavior

Switching to a chat sets the model used for subsequent chats created via "New Chat" to the model used in that particular chat.

Steps to Reproduce

  1. Start a chat and select a model, say, gpt-oss:120b
  2. Click on "New Chat" and select some other model, say, gpt-oss:20b
  3. Click on "Set as default" underneath the model selector
  4. Observe the green message "Default model updated"
  5. Now, you can either do something in this chat or do nothing; it does not matter. Either way, the default model should now be set, as the green info message confirms
  6. Switch to some other chat with a different model (gpt-oss:120b) (created at step 1)
  7. Click on "New Chat"
  8. Observe that the model is reset to gpt-oss:120b
  9. Now, this is interesting: click on "New Chat" again and observe that the model is set to the default, i.e. gpt-oss:20b. So the bug is that you have to click "New Chat" twice for the model to be set correctly.

Logs & Screenshots

Screenshot: https://github.com/user-attachments/assets/3f8e6a38-f318-4a1e-95c8-8a7e9eff92db

Additional Information

No response

GiteaMirror added the bug label 2025-11-11 16:54:02 -06:00

@tusharrrr1 commented on GitHub (Sep 14, 2025):

The fix belongs in the component/function that handles New Chat creation (likely frontend/src/components/chat/NewChatButton.tsx, or wherever chat state is initialized).

On New Chat, always initialize with the global default model.

Do not inherit the model from the currently selected chat.

Allow me to make these changes if possible
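The suggested fix above can be sketched as a pure function; the names here (`ChatState`, `defaultModelId`, `newChat`) are illustrative only and do not reflect Open WebUI's actual code:

```typescript
// Sketch of the proposed behaviour: "New Chat" always starts from the
// stored default model, never inheriting the open chat's selection.
// All names are hypothetical, not Open WebUI's real API.

interface ChatState {
  selectedModelId: string;
}

function newChat(defaultModelId: string, _current: ChatState): ChatState {
  // Deliberately ignore the currently open chat's model.
  return { selectedModelId: defaultModelId };
}
```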


@tjbck commented on GitHub (Sep 15, 2025):

This is intended behaviour: the model selection is inherited from the previous chat. If you click "New Chat" again, it falls back to the default model.
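The intended behaviour described here can be summarised in a small sketch (names are illustrative, not Open WebUI's code): the first "New Chat" inherits the model from the chat being viewed, while a second click, with no chat open, falls back to the default.

```typescript
// Which model a fresh chat starts with, per the intended behaviour:
// inherit the open chat's model if there is one, else use the default.
function nextModel(
  defaultModelId: string,
  openChatModelId: string | null // null when no chat is currently open
): string {
  return openChatModelId ?? defaultModelId;
}
```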


@tigran123 commented on GitHub (Sep 15, 2025):

Ok, understood. It is a bit counter-intuitive, but perfectly acceptable. And it gives an extra function of being able to quickly have a new chat with the same model.

Generally, I am very happy with open-webui, btw. Until recently I was boasting that my Sigma AI is the only AI web chat system in existence that handles LaTeX formulae on input as well as output (ChatGPT handles it only on output), but now I see that open-webui handles this just as well as Sigma does.

The only thing that is lacking in open-webui (but present in Sigma AI) is the ability to chat with multiple LLMs simultaneously. I gave a presentation about it at Oxford University as part of their AI course (see https://lifelong-learning.ox.ac.uk/tutors/27519) and there was some interest in this idea. Maybe some day you can implement this in open-webui as well (or I may help, if I am not too old yet -- retired now :)


@Classic298 commented on GitHub (Sep 15, 2025):

@tigran123

> The only thing that is lacking in open-webui (but present in Sigma AI) is the ability to chat to multiple LLMs simultaneously

This is possible in Open WebUI as well

Press the little + button next to the model selector


@rgaricano commented on GitHub (Sep 15, 2025):

or the multimodel function together with hotswap agent, for intermodel chats: https://github.com/pkeffect/functions/tree/main/functions/filters/multimodel


@tigran123 commented on GitHub (Sep 15, 2025):

> @tigran123
> This is possible in Open WebUI as well
>
> Press the little + button next to the model selector

Tried it now, thank you. It is not quite multi-LLM support -- it could be called "simultaneous LLM support". What I meant is the ability for many LLMs (not necessarily different -- they could be clones of the same LLM with different parameters, system prompt, etc.) to actually interact with each other. Specifically, there are two modes:

  1. Parallel Mode -- all LLMs work on the prompt independently, i.e. they do not see each other's answers. This is "unbiased": each LLM provides its own solution to the problem independently. But they still see all of the context, including the other LLMs' replies (to previous prompts), of course.

  2. Sequential Mode -- each LLM (in a particular order, which could be configurable) sees the prompt plus all the answers of the previous (active) LLMs. This way one can give tasks like:

gpt-4: calculate Dirichlet integral
llama3: verify gpt-4's calculation and provide an alternative method

For brainstorming this is quite useful, but of course the system prompt for each LLM has to be carefully written: the LLM has to be told that it is participating in a multi-LLM (plus one human) conversation, should prefix its replies with its name, and should treat the other LLMs' prefixed replies accordingly.
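The two modes described above can be sketched as follows; the `ask` callback stands in for any real LLM API, and all names here are hypothetical:

```typescript
// Illustrative sketch of the two multi-LLM modes (all names hypothetical).
type Ask = (model: string, prompt: string) => string;

// Parallel Mode: every model answers the same prompt independently,
// without seeing the other models' answers to this prompt.
function parallelMode(models: string[], prompt: string, ask: Ask): string[] {
  return models.map((m) => ask(m, prompt));
}

// Sequential Mode: each model (in a configurable order) sees the prompt
// plus all earlier models' answers; replies are prefixed with the model's
// name, as suggested above.
function sequentialMode(models: string[], prompt: string, ask: Ask): string[] {
  const answers: string[] = [];
  for (const m of models) {
    const context = [prompt, ...answers].join("\n");
    answers.push(`${m}: ${ask(m, context)}`);
  }
  return answers;
}
```

A real implementation would of course call the models asynchronously and carry the full chat history, but the data flow is the same.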

In Sigma AI all this works fine, but right now the system is broken because the model that generates the title (llama3) was decommissioned by Groq, so I just need to make a trivial change to switch to a more recent model. But seeing how wonderful open-webui is (seriously!) I am hesitating whether it is worth continuing to develop Sigma AI -- maybe I should learn this "Svelte" framework (never heard of it before) and help you guys instead...


@tigran123 commented on GitHub (Sep 15, 2025):

> or the multimodel function together with hotswap agent, for intermodel chats: https://github.com/pkeffect/functions/tree/main/functions/filters/multimodel

Oh, this is more like what I was talking about, yes. I will look into this later, thank you!

Reference: github-starred/open-webui#6399