From 8a6af40d9f3ce7f24f6e4142f71dc1ae5ff01963 Mon Sep 17 00:00:00 2001
From: Abdul Moiz
Date: Sat, 7 Mar 2026 01:25:42 +0500
Subject: [PATCH] fix: correct conflicting output format instruction in
 follow-up generation prompt (#22212)

The Guidelines section instructed LLMs to return "a JSON array of
strings" while the Output section showed a JSON object with a
"follow_ups" key. This mismatch caused some models to return a
top-level array, which the frontend parser cannot handle (it looks for
`{ }` delimiters and the `follow_ups` key).

Updated the guideline to consistently request a JSON object matching
the expected format.

Fixes #22187
---
 backend/open_webui/config.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/backend/open_webui/config.py b/backend/open_webui/config.py
index f14c26e067..512bfdf7aa 100644
--- a/backend/open_webui/config.py
+++ b/backend/open_webui/config.py
@@ -1960,7 +1960,7 @@ Suggest 3-5 relevant follow-up questions or prompts that the user might naturall
 - Only suggest follow-ups that make sense given the chat content and do not repeat what was already covered.
 - If the conversation is very short or not specific, suggest more general (but relevant) follow-ups the user might ask.
 - Use the conversation's primary language; default to English if multilingual.
-- Response must be a JSON array of strings, no extra text or formatting.
+- Response must be a JSON object with a "follow_ups" key containing an array of strings, no extra text or formatting.
 ### Output:
 JSON format: { "follow_ups": ["Question 1?", "Question 2?", "Question 3?"] }
 ### Chat History:
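For context, the delimiter-based parsing behavior the commit message attributes to the frontend can be sketched roughly as follows. This is a minimal Python approximation under stated assumptions, not Open WebUI's actual frontend code (which lives in TypeScript); `parse_follow_ups` is a hypothetical helper illustrating why a top-level JSON array yields no results while the object form parses cleanly:

```python
import json


def parse_follow_ups(response: str) -> list[str]:
    """Sketch of the parsing described in the commit message:
    take the text between the first '{' and the last '}',
    parse it as JSON, and read the "follow_ups" key.
    """
    start = response.find("{")
    end = response.rfind("}")
    if start == -1 or end < start:
        # No object delimiters found, e.g. a top-level array response.
        return []
    try:
        data = json.loads(response[start : end + 1])
    except json.JSONDecodeError:
        return []
    follow_ups = data.get("follow_ups", []) if isinstance(data, dict) else []
    return [q for q in follow_ups if isinstance(q, str)]


# The format requested by the corrected guideline parses cleanly:
ok = parse_follow_ups('{ "follow_ups": ["Question 1?", "Question 2?"] }')
# A bare array (what the old conflicting guideline elicited) yields nothing:
bad = parse_follow_ups('["Question 1?", "Question 2?"]')
```

Under this sketch, aligning the guideline with the `{ "follow_ups": [...] }` example removes the case where a model's literal compliance with the old wording produced an unparseable response.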