[GH-ISSUE #22187] Issue: Inconsistent Output Format Instructions in DEFAULT_FOLLOW_UP_GENERATION_PROMPT_TEMPLATE for LLM Follow-up Prompts #35182

Closed
opened 2026-04-25 09:25:16 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @listenerri on GitHub (Mar 3, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/22187

Check Existing Issues

  • I have searched for any existing and/or related issues.
  • I have searched for any existing and/or related discussions.
  • I have also searched in the CLOSED issues AND CLOSED discussions and found no related items (your issue might already be addressed on the development branch!).
  • I am using the latest version of Open WebUI.

Installation Method

Pip Install

Open WebUI Version

v0.8.8

Ollama Version (if applicable)

No response

Operating System

Ubuntu 24.04

Browser (if applicable)

No response

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

The LLM should always receive explicit output-format instructions so that it consistently returns follow-up prompts as a JSON object with the key follow_ups.

Actual Behavior

The default prompt template contains conflicting instructions, so the model sometimes returns a top-level JSON array. The backend expects an object, so parsing fails and the follow-up prompts never appear in the UI.

https://github.com/open-webui/open-webui/blob/79f04379801622181ef9c591374a285eac4e1c4d/backend/open_webui/config.py#L1963-L1965
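The mismatch described above can be illustrated with a minimal sketch. This is not the actual Open WebUI parsing code; the extractor function and the example replies are hypothetical, and only the follow_ups key name comes from the issue:

```python
import json

# Two shapes a model might return under the conflicting instructions.
object_reply = '{"follow_ups": ["How do I configure X?", "What does Y mean?"]}'
array_reply = '["How do I configure X?", "What does Y mean?"]'

def extract_follow_ups(raw: str):
    """Hypothetical extractor mirroring the object-only expectation:
    a top-level array yields nothing, so the UI shows no suggestions."""
    data = json.loads(raw)
    if isinstance(data, dict):
        return data.get("follow_ups")
    return None

assert extract_follow_ups(object_reply) == [
    "How do I configure X?",
    "What does Y mean?",
]
assert extract_follow_ups(array_reply) is None
```

Both replies are valid JSON, which is why no backend error surfaces; the array shape simply carries no follow_ups key to read.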

Steps to Reproduce

  1. Inspect the default prompt template for follow-up generation.
  2. Trigger follow-up generation and force the model to return a top-level array.
  3. Observe that the backend fails to parse the response and the UI displays no suggestions.
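Beyond fixing the template wording, the backend could also parse defensively. The following is only a sketch of one possible workaround, not the project's recommended fix; the function name is hypothetical:

```python
import json

def normalize_follow_ups(raw: str) -> list[str]:
    """Tolerant parser sketch: accept both the object shape
    ({"follow_ups": [...]}) and the bare top-level array, so a model
    that follows either conflicting instruction still produces
    usable suggestions."""
    data = json.loads(raw)
    if isinstance(data, dict):
        return [str(item) for item in data.get("follow_ups", [])]
    if isinstance(data, list):
        return [str(item) for item in data]
    return []  # unexpected shape: degrade gracefully to no suggestions

assert normalize_follow_ups('{"follow_ups": ["a", "b"]}') == ["a", "b"]
assert normalize_follow_ups('["a", "b"]') == ["a", "b"]
assert normalize_follow_ups('{}') == []
```

Tightening the template so the model only ever emits the object shape remains the cleaner fix; this normalization just makes the backend robust in the meantime.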

Logs & Screenshots

No backend error is logged; the relevant template snippet is linked above for reference.

Additional Information

See analysis and recommended fix above.

GiteaMirror added the bug label 2026-04-25 09:25:16 -05:00

Reference: github-starred/open-webui#35182