fix: correct conflicting output format instruction in follow-up generation prompt (#22212)

The Guidelines section instructed LLMs to return "a JSON array of strings"
while the Output section showed a JSON object with a "follow_ups" key.
This mismatch caused some models to return a top-level array, which the
frontend parser cannot handle (it looks for `{ }` delimiters and the
`follow_ups` key). Updated the guideline to consistently request a JSON
object matching the expected format.
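The failure mode described above can be sketched as follows. This is a hypothetical Python illustration of the extraction strategy the commit message attributes to the frontend (find the `{ }` delimiters, parse, read the `follow_ups` key) — not the actual frontend code, which is not shown here:

```python
import json

def extract_follow_ups(raw: str) -> list[str]:
    """Illustrative sketch: locate the outermost { } delimiters,
    parse the enclosed JSON object, and read its "follow_ups" key."""
    start = raw.find("{")
    end = raw.rfind("}")
    if start == -1 or end == -1 or end < start:
        # A top-level JSON array has no { } delimiters, so nothing is found.
        return []
    try:
        obj = json.loads(raw[start:end + 1])
    except json.JSONDecodeError:
        return []
    follow_ups = obj.get("follow_ups")
    return follow_ups if isinstance(follow_ups, list) else []
```

Under this strategy, a model response of `{"follow_ups": ["A?", "B?"]}` yields the two questions, while a top-level array like `["A?", "B?"]` yields nothing — hence the need for the guideline and the Output section to agree on the object format.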

Fixes #22187
Author: Abdul Moiz
Date:   2026-03-07 01:25:42 +05:00 (committed by GitHub)
Parent: 9cf6108527
Commit: 8a6af40d9f

@@ -1960,7 +1960,7 @@ Suggest 3-5 relevant follow-up questions or prompts that the user might naturally
 - Only suggest follow-ups that make sense given the chat content and do not repeat what was already covered.
 - If the conversation is very short or not specific, suggest more general (but relevant) follow-ups the user might ask.
 - Use the conversation's primary language; default to English if multilingual.
-- Response must be a JSON array of strings, no extra text or formatting.
+- Response must be a JSON object with a "follow_ups" key containing an array of strings, no extra text or formatting.
 ### Output:
 JSON format: { "follow_ups": ["Question 1?", "Question 2?", "Question 3?"] }
 ### Chat History: