Mirror of https://github.com/open-webui/open-webui.git (synced 2026-03-22 22:21:27 -05:00)
Chat history UI does not show summary of o1 model chat #2208
Originally created by @codehopper-uk on GitHub (Sep 26, 2024).
Bug Report
Installation Method
Helm install
Environment
Open WebUI Version: v0.3.23 according to the Helm release, but About in the UI says v0.3.30 (latest)
Ollama (if applicable): [e.g., v0.2.0, v0.1.32-rc1]
Operating System: image: ghcr.io/open-webui/open-webui
Browser (if applicable): Chrome Version 128.0.6613.138 (Official Build) (x86_64)
Confirmation:
Expected Behavior:
Chat history for o1 model queries should show a summary of the chat, as it does for other models, e.g. 4o.
Actual Behavior:
Chat history for the o1 model with Chat Streaming off results in an unlabeled/textless chat history.
Description
Bug Summary:
Chat history for the o1 model with Chat Streaming off results in an unlabeled/textless chat history. Chat Streaming must be turned off because o1 models otherwise do not work, as per issue 5490.
Reproduction Details
Steps to Reproduce:
Logs and Screenshots
Screenshots/Screen Recordings (if applicable):
Note
If the bug report is incomplete or does not follow the provided instructions, it may not be addressed. Please ensure that you have followed the steps outlined in the README.md and troubleshooting.md documents, and provide all necessary information for us to reproduce and address the issue. Thank you!
@jonnywright commented on GitHub (Oct 4, 2024):
Same issue here but with different environment:
Bug Report
Installation Method
Docker via docker compose v2 on a virtual host.
Environment
Open WebUI Version: v0.3.30
Ollama (if applicable): N/A
Operating System: Ubuntu Server 20.04
Browser (if applicable): Chrome 129.0.6668.71, Firefox 131.0
Confirmation:
@jorikvanveen commented on GitHub (Oct 18, 2024):
I made a hacky patch to fix this while we're waiting for a more permanent solution. It overrides the title generation model with 4o-mini whenever the original model string contains "o1".
I have a repo with this patch applied here: https://github.com/jorikvanveen/open-webui. Do note that I will likely not be maintaining it so don't rely on it for anything important.
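The idea behind such a workaround can be sketched as follows. This is a hedged illustration of the approach described above, not the actual code from the linked fork; `FALLBACK_TITLE_MODEL` and `select_title_model` are hypothetical names.

```python
# Illustrative sketch: override the title-generation model whenever the chat
# model looks like an o1 variant, since o1 spends its completion budget on
# hidden reasoning tokens and may return no visible text for short tasks.

FALLBACK_TITLE_MODEL = "gpt-4o-mini"  # assumed fallback; any cheap model works


def select_title_model(chat_model: str) -> str:
    """Return the model to use for background title generation."""
    if "o1" in chat_model:
        return FALLBACK_TITLE_MODEL
    return chat_model
```

A simple substring check like this is brittle (it would also match unrelated model names containing "o1"), which is part of why the comment calls the patch "hacky".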
@Ryan526 commented on GitHub (Oct 28, 2024):
Is there a cap set for max_completion_tokens on title generation? If there is, simply removing it will make this work. Same issue I had in a different repo.
@Ryan526 commented on GitHub (Oct 28, 2024):
21b8ca3459/backend/open_webui/main.py (L1574) — the easy way would be to just remove this line. It's not required in the payload, and in most cases (besides o1 models) the total tokens used will still be below 50 with that default prompt. Reasoning tokens in the o1 models are blowing through all those tokens, leaving no reply from which to make a title. Until they give us a way to control reasoning tokens for o1 models, there isn't any other way of doing it other than setting a separate model when using o1, or setting max_completion_tokens to something like 5000+.
@Ryan526 commented on GitHub (Oct 28, 2024):
I was gonna do a pull request to remove max_completion_tokens when using o1 models, but I realized that's kind of pointless when you are able to change your task model in the admin panel.
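The conditional-cap idea floated above could be sketched like this. It is an assumption-laden illustration, not Open WebUI's actual payload-building code; `build_title_payload` and its parameters are hypothetical.

```python
# Sketch of the workaround described above: drop the small default cap for
# o1-family models (or raise it substantially), because o1's reasoning tokens
# count against the completion budget and can exhaust it before any title
# text is produced.


def build_title_payload(model: str, prompt: str, default_cap: int = 50) -> dict:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    if "o1" in model:
        # Generous budget so reasoning tokens don't starve the visible reply.
        payload["max_completion_tokens"] = 5000
    else:
        # Non-reasoning models stay under the small default for this prompt.
        payload["max_tokens"] = default_cap
    return payload
```

The comment that follows notes this is largely moot once the task model is configurable in the admin panel, since a non-reasoning model can be chosen for background tasks instead.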
@DmitriyAlergant commented on GitHub (Nov 28, 2024):
Frankly, the main issue here is a poor default for the Task Model ('same model'), especially given how unintuitive this part of the admin UI is and where it is located (under Interface).
Instead of fixing "make o1 work and not fail on title generation", the right question is how to make sure people don't default to "same model" (which can be o1) in the first place. And that applies to all tasks, not only Title Generation.
I would suggest renaming the "Current Model" option for TASK_MODEL_EXTERNAL (which is the default) to "Auto-Select". That won't even require any config database migrations; the default value is actually simply "". We only change how we render it to the user.
When the default option begins showing itself as Auto-Select, we are now authorized to implement a heuristic selection function, something along the lines of...
@tjbck thoughts? A PR would be trivial to raise (maybe I'd do it), but I first wanted your input on the approach to avoid wasting effort.
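A heuristic of the kind proposed above might look like the following. This is a hypothetical sketch, not an existing Open WebUI function; `PREFERRED_TASK_MODELS` and `auto_select_task_model` are invented names for illustration.

```python
# Hypothetical "Auto-Select" heuristic for the background task model:
# prefer a small, cheap model from the available list, and avoid reasoning
# models for background tasks even as a fallback.

PREFERRED_TASK_MODELS = ["gpt-4o-mini", "gpt-3.5-turbo"]  # illustrative order


def auto_select_task_model(available: list[str], current: str) -> str:
    """Pick a task model when the admin has left the default ("") in place."""
    for candidate in PREFERRED_TASK_MODELS:
        if candidate in available:
            return candidate
    # No preferred model available: take any non-reasoning model, else give up
    # and use the current chat model.
    non_reasoning = [m for m in available if "o1" not in m]
    return non_reasoning[0] if non_reasoning else current
```

Rendering the empty default as "Auto-Select" and routing it through a function like this would sidestep the o1 title-generation failure without touching the per-request payload at all.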