[GH-ISSUE #23468] feat: support structured follow-up questions/options in assistant output #35521

Closed
opened 2026-04-25 09:43:45 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @ShirasawaSama on GitHub (Apr 7, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/23468

Check Existing Issues

  • I have searched for all existing open AND closed issues and discussions for similar requests. I have found none that is comparable to my request.

Verify Feature Scope

  • I have read through and understood the scope definition for feature requests in the Issues section. I believe my feature request meets the definition and belongs in the Issues section instead of the Discussions.

Problem Description

Many skills/workflows need to present users with follow-up questions or a small set of options after a response.
Today, users must manually type the next input, even when the model already “knows” the next suggested choices.
This creates extra friction, especially for guided flows where quick selection is more natural than free-form typing.

Desired Solution

Allow Open WebUI to detect and render structured follow-up content directly from model output.

For example, if the LLM output contains a dedicated tag/block (e.g. an XML-like tag or a JSON object conforming to a schema) describing follow-up prompts/options, the UI should render them as clickable chips/buttons below the message.
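For illustration, a hypothetical tag format might look like the following (the `<follow_up>`/`<question>`/`<option>` names are assumptions for this sketch, not an existing Open WebUI convention):

```xml
<follow_up>
  <question>Which troubleshooting path should we take?</question>
  <option>Check network connectivity</option>
  <option>Inspect server logs</option>
  <option>Restart the service</option>
</follow_up>
```

An equivalent JSON block could carry the same fields; what matters is that one contract is standardized so the UI can parse it reliably.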

[Image attachment: https://github.com/user-attachments/assets/c1b485ba-4116-4427-b21c-98b3300a0a9c]

Expected behavior:

  1. If structured follow-up tags are present, render interactive follow-up options.
  2. Clicking an option should send it as the next user input automatically.
  3. If no tags are present, keep current behavior unchanged.
  4. Keep this format optional and backward-compatible for existing skills/prompts.
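The detect-and-fall-back behavior in the steps above can be sketched as a small client-side parser. This is a minimal TypeScript sketch under the assumption of an XML-like `<follow_up>`/`<option>` tag format; the function and type names are hypothetical, not part of Open WebUI today:

```typescript
// Hypothetical shape for parsed follow-up options (names are assumptions).
interface FollowUp {
  options: string[];
}

// Returns the message text with the follow-up block stripped out,
// plus any parsed options to render as clickable chips.
function extractFollowUps(content: string): { text: string; followUp: FollowUp | null } {
  const blockRe = /<follow_up>([\s\S]*?)<\/follow_up>/;
  const match = content.match(blockRe);
  if (!match) {
    // No tags present: current rendering behavior is unchanged (point 3 above).
    return { text: content, followUp: null };
  }
  const optionRe = /<option>([\s\S]*?)<\/option>/g;
  const options = [...match[1].matchAll(optionRe)].map((m) => m[1].trim());
  return {
    text: content.replace(blockRe, "").trimEnd(),
    followUp: options.length > 0 ? { options } : null,
  };
}
```

Clicking a rendered chip would then simply submit the option string as the next user message, which keeps the format optional and backward-compatible (points 2 and 4).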

This would make skill-driven conversations much smoother and reduce manual typing for users.

Alternatives Considered

This is especially useful for skills that naturally provide a small finite set of next actions (e.g. classify intent, choose a mode, pick a parameter range, select a troubleshooting path).
A simple standardized output contract (tag/schema) would enable reliable rendering and better interoperability across skills.
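As a sketch of what such a contract could look like in its JSON form, a renderer could validate the payload before drawing chips. The field names below are assumptions for illustration, not a proposed final schema:

```typescript
// Hypothetical JSON contract for structured follow-ups (names are assumptions).
interface FollowUpContract {
  type: "follow_up";
  question?: string; // optional prompt shown above the options
  options: string[]; // finite set of next actions, rendered as chips
}

// Type-guard a renderer could use to decide whether to draw chips
// or fall back to plain-text rendering.
function isFollowUpContract(value: unknown): value is FollowUpContract {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    v.type === "follow_up" &&
    Array.isArray(v.options) &&
    v.options.every((o) => typeof o === "string") &&
    (v.question === undefined || typeof v.question === "string")
  );
}
```

Validating against a contract like this would let different skills emit the same structure and get consistent rendering, without affecting skills that never emit it.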

Additional Context

No response


Reference: github-starred/open-webui#35521