bug: Title Generation not working when using OpenAI APIs #3093

Closed
opened 2025-11-11 15:22:21 -06:00 by GiteaMirror · 4 comments
Owner

Originally created by @Simi5599 on GitHub (Dec 25, 2024).

Bug Report

Installation Method

Docker

Environment

  • Open WebUI Version: 0.5.0

Confirmation:

  • I have read and followed all the instructions provided in the README.md.
  • I am on the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided the exact steps to reproduce the bug in the "Steps to Reproduce" section below.

Expected Behavior:

When asking an LLM something, Open WebUI should autogenerate the title for the chat.

Actual Behavior:

The title is not autogenerated and the chat stays titled "New Chat".

Reproduction Details

Steps to Reproduce:

  1. Ask the LLM something
  2. The LLM will answer, but the chat's title won't be generated

Logs and Screenshots

Browser Console Logs:
No errors

Docker Container Logs:
Spotted the following error:

ERROR [open_webui.routers.openai] 400, message='Bad Request', url='https://api.openai.com/v1/chat/completions'
Traceback (most recent call last):
  File "/app/backend/open_webui/routers/openai.py", line 679, in generate_chat_completion
    r.raise_for_status()
  File "/usr/local/lib/python3.11/site-packages/aiohttp/client_reqrep.py", line 1157, in raise_for_status
    raise ClientResponseError(
aiohttp.client_exceptions.ClientResponseError: 400, message='Bad Request', url='https://api.openai.com/v1/chat/completions'
ERROR [open_webui.routers.tasks] Exception occurred
Traceback (most recent call last):
  File "/app/backend/open_webui/routers/openai.py", line 679, in generate_chat_completion
    r.raise_for_status()
  File "/usr/local/lib/python3.11/site-packages/aiohttp/client_reqrep.py", line 1157, in raise_for_status
    raise ClientResponseError(
aiohttp.client_exceptions.ClientResponseError: 400, message='Bad Request', url='https://api.openai.com/v1/chat/completions'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/backend/open_webui/routers/tasks.py", line 187, in generate_title
    return await generate_chat_completion(request, form_data=payload, user=user)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/open_webui/utils/chat.py", line 155, in generate_chat_completion
    return await generate_openai_chat_completion(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/backend/open_webui/routers/openai.py", line 691, in generate_chat_completion
    raise HTTPException(
fastapi.exceptions.HTTPException: 400: Setting 'max_tokens' and 'max_completion_tokens' at the same time is not supported.
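The final exception points at the root cause: the title-generation request sends both `max_tokens` and `max_completion_tokens`, and the OpenAI API rejects requests that include both. A minimal sketch of the kind of payload normalization that avoids this (the function name `normalize_completion_params` is hypothetical, not Open WebUI's actual fix):

```python
def normalize_completion_params(payload: dict) -> dict:
    """Hypothetical sketch: ensure only one token-limit key is sent.

    Newer OpenAI models accept 'max_completion_tokens' and reject requests
    that also carry the legacy 'max_tokens'; sending exactly one of the two
    avoids the 400 'Bad Request' seen in the logs above.
    """
    params = dict(payload)  # don't mutate the caller's payload
    if "max_completion_tokens" in params and "max_tokens" in params:
        # Prefer the newer key and drop the legacy one.
        params.pop("max_tokens")
    return params
```

Any request built for `https://api.openai.com/v1/chat/completions` would pass through such a step before being sent.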

@tjbck commented on GitHub (Dec 25, 2024):

Fixed on dev! Testing wanted here!


@Simi5599 commented on GitHub (Dec 25, 2024):

Wonderful, I was just about to make a PR with the same fix you did!

I will copy your code and test it in my environment in 20 min!


@Simi5599 commented on GitHub (Dec 26, 2024):

Can confirm this is working on my side! (OpenAI APIs)

Screenshot 2024-12-26 010734: https://github.com/user-attachments/assets/d827ab8f-ea35-46d4-bb93-b7ca202c4ea1
Screenshot 2024-12-26 010737: https://github.com/user-attachments/assets/4488d185-f4b0-474f-840a-61418494b106


@msurma commented on GitHub (Dec 26, 2024):

Hit the same issue. Found the cause in chat request logic:

https://github.com/open-webui/open-webui/blame/23bf71022e7d6401b8b84f2f1aa16e148eb120d6/src/lib/components/chat/Chat.svelte#L1508

This condition controls the title/tags generation tasks, but it fails when a user-level system prompt is configured in Settings: new chats then start with two messages ([system, user]) instead of one, so generation never triggers.

Reference: github-starred/open-webui#3093