Mirror of https://github.com/open-webui/open-webui.git (synced 2026-05-07 11:28:35 -05:00)
issue: Follow-up queries not generated in temporary chats #5544
Originally created by @silentoplayz on GitHub (Jun 15, 2025).
Originally assigned to: @tjbck on GitHub.
Check Existing Issues
Installation Method
Docker
Open WebUI Version
v0.6.14
Ollama Version (if applicable)
v0.9.0
Operating System
Edition: Windows 11 Pro | Version: 24H2 | OS Build: 26100.4351 | Windows Feature Experience Pack: 1000.26100.107.0
Browser (if applicable)
LibreWolf v135.0.1-1 (Firefox)
Confirmation
Expected Behavior
When interacting with a model in a temporary chat, after the model finishes generating its response, a list of suggested follow-up queries (e.g., "Tell me more about X", "What are the implications of Y?") should appear below the response, similar to how they appear in non-temporary chats.
Actual Behavior
In temporary chats, follow-up queries are not generated or displayed after the model completes its response. This feature works correctly in non-temporary chats.
Steps to Reproduce
This issue consistently reproduces in temporary chats across different models and query types.
Logs & Screenshots
Additional Information
@decent-engineer-decent-datascientist commented on GitHub (Jun 15, 2025):
Completely unrelated, but how did you get that extremely cool organization in your models drop down? Specifically the tags per model and the tabs?
@silentoplayz commented on GitHub (Jun 15, 2025):
Model tags are the secret sauce behind that clean, organized dropdown!


Tags can be added to any model directly from its edit page in Open WebUI — it’s a breeze!
YES — you can even tag custom models from the Workspace section! Tags aren’t limited to models alone — they can also be applied to connections!

You can tag any connection added via the Add Connection or Edit Connection modal — including OpenAI API, Ollama API, and Direct Connections. Here’s a real-world example: my GroqCloud connection.

I added the tags Free and GroqCloud to the connection itself, not to each model. Why? Because doing so automatically adds the same tags to every model that connection provides access to — saving me from repetitive tagging. From there, I can still manually add extra tags to specific models if needed — like adding a Code tag to models that handle code generation. And here’s how to tag direct connections too:

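The tag-inheritance behavior described above (connection-level tags flowing down to every model the connection exposes, with per-model tags merged on top) can be sketched in plain Python. This is a hypothetical illustration under assumed data shapes, not Open WebUI's actual implementation; the function name `effective_tags` is an invention for this sketch.

```python
# Hypothetical sketch of connection-to-model tag inheritance.
# Not Open WebUI's real code; names and data shapes are assumptions.

def effective_tags(connection_tags, model_tags):
    """Union of connection-level and model-level tags, order-preserving,
    with connection tags applied first."""
    seen = []
    for tag in list(connection_tags) + list(model_tags):
        if tag not in seen:
            seen.append(tag)
    return seen

# Mirrors the example in the comment: "Free" and "GroqCloud" set once on
# the connection, plus a manual "Code" tag on one specific model.
groq_connection = ["Free", "GroqCloud"]
print(effective_tags(groq_connection, []))        # every model inherits the connection tags
print(effective_tags(groq_connection, ["Code"]))  # manual per-model tags merge on top
```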
Now, the horizontal tag bar at the top of the model dropdown?
That’s auto-generated — it appears whenever you’ve tagged models or connections in your Open WebUI instance.
This bar is super useful:

You can scroll through it to see all tags, and clicking any tag filters the dropdown in real time.
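The click-to-filter behavior of the tag bar amounts to a simple membership filter over the model list. Again a hypothetical Python illustration (the record shape and function are assumptions for this sketch, not the project's real frontend code):

```python
# Hypothetical sketch of tag-based dropdown filtering.
# Record shape and function name are assumptions, not Open WebUI's code.

def filter_models_by_tag(models, tag):
    """Return only the models whose tag list contains the clicked tag."""
    return [m for m in models if tag in m.get("tags", [])]

models = [
    {"name": "Groq / llama-3.3-70b", "tags": ["Free", "GroqCloud"]},
    {"name": "Ollama / qwen2.5-coder", "tags": ["Code"]},
]
print(filter_models_by_tag(models, "Code"))  # only the Code-tagged model remains
```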
P.S: If you're wondering why I decided not to use Prefix ID for connections, it's because doing so would treat each model provided by the connection as a new one, which would require me to manually update ALL model details again. Instead, I manually name each model inside of Open WebUI, but I prefix the model's name with something like Groq / or Ollama /, which helps me quickly tell whether I am using a local or external model from the model dropdown. I'll admit, though, this is counterintuitive compared to how I use tags on connections to reduce manual work. Also, note that if you have added tags to a connection, you will not see those tags applied on each model's edit page, but you will see them on the models in the model dropdown.
@decent-engineer-decent-datascientist commented on GitHub (Jun 16, 2025):
Thank you so much! This is so clean, can't believe I never leveraged this.
@tjbck commented on GitHub (Oct 2, 2025):
Addressed in dev.
@silentoplayz commented on GitHub (Oct 2, 2025):
Awesome! I have tested with local models, external models, and custom models and can confirm that the issue has been addressed.
Local model in temporary chat:

Custom workspace model with local model as base in temporary chat:

External model in temporary chat:

Custom workspace model with external model as base in temporary chat:
