Mirror of https://github.com/open-webui/open-webui.git (synced 2026-05-07 11:28:35 -05:00)
[GH-ISSUE #23468] feat: support structured follow-up questions/options in assistant output #35521
Originally created by @ShirasawaSama on GitHub (Apr 7, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/23468
Check Existing Issues
Verify Feature Scope
Problem Description
Many skills/workflows need to present users with follow-up questions or a small set of options after a response.
Today, users must manually type the next input, even when the model already “knows” the suggested next choices.
This creates extra friction, especially for guided flows where quick selection is more natural than free-form typing.
Desired Solution
Allow Open WebUI to detect and render structured follow-up content directly from model output.
For example, if the LLM output contains a dedicated tag/block (e.g. XML-like tag or JSON schema) describing follow-up prompts/options, the UI should render them as clickable chips/buttons below the message.
Expected behavior: the follow-up options appear below the assistant message, and selecting one submits it as the next user input. This would make skill-driven conversations much smoother and reduce manual typing for users.
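As a minimal sketch of what such a block could look like (the tag name and JSON payload shape here are illustrative, not an existing Open WebUI convention):

```xml
<!-- Hypothetical output contract: a dedicated tag wrapping a JSON payload -->
<follow_up>
{
  "options": [
    "Show me the error logs",
    "Restart the service",
    "Open a support ticket"
  ]
}
</follow_up>
```

The UI would strip this block from the rendered message and turn each entry in `options` into a clickable chip.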
Alternatives Considered
This is especially useful for skills that naturally provide a small finite set of next actions (e.g. classify intent, choose a mode, pick a parameter range, select a troubleshooting path).
A simple standardized output contract (tag/schema) would enable reliable rendering and better interoperability across skills.
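One way such a contract could be consumed on the frontend is a small extraction helper. The sketch below is an assumption, not existing Open WebUI code: it assumes a hypothetical `<follow_up>` tag wrapping a JSON object with an `options` array, and falls back to rendering the raw message when the block is absent or malformed.

```typescript
// Hypothetical shape of the structured follow-up payload.
interface FollowUp {
  options: string[];
}

// Matches a single <follow_up>…</follow_up> block anywhere in the output.
const FOLLOW_UP_RE = /<follow_up>([\s\S]*?)<\/follow_up>/;

// Splits raw model output into visible message text and follow-up options.
function extractFollowUps(output: string): { text: string; options: string[] } {
  const match = output.match(FOLLOW_UP_RE);
  if (!match) return { text: output, options: [] };
  try {
    const parsed = JSON.parse(match[1]) as FollowUp;
    const options = Array.isArray(parsed.options)
      ? parsed.options.filter((o): o is string => typeof o === "string")
      : [];
    // Strip the block so it is not shown as literal text in the chat bubble.
    return { text: output.replace(FOLLOW_UP_RE, "").trim(), options };
  } catch {
    // Malformed JSON: show the message untouched rather than failing.
    return { text: output, options: [] };
  }
}
```

A renderer could then map `options` to buttons whose click handler submits the option text as the next user message, which keeps the contract decoupled from any particular skill.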
Additional Context
No response