[GH-ISSUE #17405] issue: [Bug] Multiple Cascading Failures in Automatic Title Generation Prevent Any Customization #33803

Closed
opened 2026-04-25 07:41:16 -05:00 by GiteaMirror · 1 comment

Originally created by @ttbj2023 on GitHub (Sep 12, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/17405

Check Existing Issues

  • I have searched for any existing and/or related issues.
  • I have searched for any existing and/or related discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

0.6.28

Ollama Version (if applicable)

No response

Operating System

Ubuntu 24.04.3 LTS

Browser (if applicable)

Firefox

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

  1. The custom title prompt entered in the UI settings should be saved correctly and used to generate titles, overriding the default prompt.
  2. Newly created public "virtual models" should be selectable in the "Task Model" dropdown menu.
  3. When a "Task Model" with its own system prompt is selected, that system prompt should replace the hardcoded default title prompt, not be sent alongside it.
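As a hypothetical sketch (not Open WebUI's actual code) of what expectation 3 describes: when the selected task model carries its own system prompt, that prompt should replace the default title template rather than be sent alongside it. The default template text below is taken from the log snippet in this report; the function name is illustrative only.

```python
# Hypothetical sketch of the expected behavior, not Open WebUI's real code.
# The default template line is copied from the log snippet in this report.
DEFAULT_TITLE_PROMPT = (
    "### Task:\nGenerate a concise, 3-5 word title with an emoji "
    "summarizing the chat history."
)

def build_title_messages(chat_history, model_system_prompt=None):
    # Expected behavior: the task model's own system prompt, when present,
    # overrides the default template -- the two are never sent together.
    prompt = model_system_prompt or DEFAULT_TITLE_PROMPT
    return [
        {"role": "system", "content": prompt},
        {"role": "user", "content": "### Chat History:\n" + chat_history},
    ]
```

With a custom prompt supplied, the payload would contain exactly one task instruction instead of the two conflicting ones shown in the logs below.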

Actual Behavior

The automatic title generation feature is currently impossible to customize due to a series of cascading bugs. The final working solution requires modifying the source code directly.

The failures occur in three stages:

  1. Custom Prompt Field Fails to Save: The primary input field under Settings > Interface for a custom title prompt does not persist its value. The backend logic seems to completely ignore this field upon saving, making the UI setting non-functional.

  2. Virtual Model Workaround Fails: The community-suggested workaround of creating a "virtual model" with a dedicated system prompt is blocked by a UI bug. Even when a new model is created and set to "Public," it does not appear in the "Task Model" dropdown list, sometimes showing a "This model is not public" error.

  3. Direct System Prompt Causes "Double Prompt Injection": Applying a system prompt directly to a base model (e.g., gemini-2.5-flash) and selecting it as the "Task Model" reveals a critical logic flaw. The final payload sent to the LLM contains both the custom system prompt and the hardcoded default title prompt, creating a conflicting and unpredictable instruction.

Steps to Reproduce

Failure 1: The "Custom Title Prompt" field does not save.

  1. Navigate to Admin Panel > Settings > Interface.
  2. Leave the Title Generation > For generating the title automatically field empty.
  3. Start a new chat. A title is generated successfully using the hardcoded default prompt.
  4. Go back to the settings and enter any text into the custom prompt field.
  5. Click "Save".
  6. Start a new chat. Result: No title is generated.

Failure 2: "Virtual Model" workaround is not functional.

  1. Navigate to Admin Panel > Models and click "Add a model".
  2. Create a new virtual model (e.g., named title-generator) with a Base Model and a custom System Prompt.
  3. Set Visibility to Public and save.
  4. Navigate to Admin Panel > Settings > Interface.
  5. Try to select the new model title-generator under Task Model.
  6. Result: The new model does not appear in the dropdown list, making this workaround unusable.

Failure 3: Assigning a model with a system prompt causes "Double Prompt Injection".

  1. Edit an existing, selectable model (e.g., gemini-2.5-flash) and add a custom title generation prompt to its System Prompt field.
  2. In Settings > Interface, select this modified model as the Task Model.
  3. Start a new chat.
  4. Check the logs of the request being sent to the LLM.
  5. Result: The payload contains two conflicting prompts (one system role with the custom prompt, and one user role with the default prompt).

Logs & Screenshots

Here are the screenshots showing the "Virtual Model" UI bug where a public model is not selectable:

Screenshots (from the original GitHub issue):
https://github.com/user-attachments/assets/5d6aa704-5612-4bba-95f9-4de4a2957b5a
https://github.com/user-attachments/assets/c0e9dcc8-d962-494f-a1b3-a99fe2512f50

Here is the log snippet showing the critical "Double Prompt Injection" failure (Failure 3). Note the two conflicting tasks in the messages array:

```json
"messages": [
    {
      "role": "system",
      "content": "### Task:\nSummarize the conversation and generate a concise title in Chinese.\n..."
    },
    {
      "role": "user",
      "content": "### Task:\nGenerate a concise, 3-5 word title with an emoji summarizing the chat history.\n...\n### Chat History:\n..."
    }
]
```
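A correctly built request should carry exactly one task instruction. A small sketch (hypothetical helper, not Open WebUI code) that flags the duplication visible in the snippet above by counting messages whose content starts with a `### Task:` header:

```python
# Hypothetical helper (not Open WebUI code): count how many messages in a
# payload carry a "### Task:" instruction header. A value above 1 indicates
# the "double prompt injection" described in this report.
def count_task_prompts(messages):
    return sum(
        1
        for m in messages
        if m.get("content", "").lstrip().startswith("### Task:")
    )

# Payload shaped like the log snippet above.
payload = {
    "messages": [
        {"role": "system",
         "content": "### Task:\nSummarize the conversation and generate a concise title in Chinese.\n..."},
        {"role": "user",
         "content": "### Task:\nGenerate a concise, 3-5 word title with an emoji summarizing the chat history.\n...\n### Chat History:\n..."},
    ]
}

if count_task_prompts(payload["messages"]) > 1:
    print("double prompt injection detected")
```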


Additional Information

The only currently functional solution to achieve a custom title prompt is to directly modify the `DEFAULT_TITLE_GENERATION_PROMPT_TEMPLATE` variable in the source code file `backend/open_webui/config.py` and rebuild the Docker image. This confirms that the issue lies deep within the application's logic for handling prompt customization.
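To illustrate that workaround concretely, here is a hedged shell sketch. It operates on a stand-in `config.py` (the real module is much larger) whose template line is copied from the log snippet in this report; in practice one would clone the open-webui repository, apply the same `sed` edit to `backend/open_webui/config.py`, and run `docker build`.

```shell
# Sketch of the source-patch workaround. A stand-in config file is created
# here so the edit can be demonstrated; against a real checkout, only the
# sed line (and a subsequent `docker build`) would be needed.
mkdir -p backend/open_webui
cat > backend/open_webui/config.py <<'EOF'
# Stand-in for the real module; only the template assignment matters here.
DEFAULT_TITLE_GENERATION_PROMPT_TEMPLATE = """### Task:
Generate a concise, 3-5 word title with an emoji summarizing the chat history.
"""
EOF

# Swap the default instruction for a custom one (the edit this report
# describes making before rebuilding the Docker image).
sed -i 's/Generate a concise, 3-5 word title with an emoji summarizing the chat history./Summarize the conversation and generate a concise title in Chinese./' \
    backend/open_webui/config.py

# Show the patched template line.
grep 'Summarize the conversation' backend/open_webui/config.py
```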
GiteaMirror added the bug label 2026-04-25 07:41:16 -05:00

@tjbck commented on GitHub (Sep 15, 2025):

Only base model(s) are supported as task models.

Reference: github-starred/open-webui#33803