[GH-ISSUE #22074] feat: Form for refining prompts #35161

Closed
opened 2026-04-25 09:23:24 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @dojje on GitHub (Mar 1, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/22074

Check Existing Issues

  • I have searched for all existing open AND closed issues and discussions for similar requests. I have found none that is comparable to my request.

Verify Feature Scope

  • I have read through and understood the scope definition for feature requests in the Issues section. I believe my feature request meets the definition and belongs in the Issues section instead of the Discussions.

Problem Description

When writing a prompt, I sometimes forget to include information critical to the response, which forces me to retype the prompt with that information added. Sometimes I ask other chatbots on the web to read through my prompt and refine it by asking follow-up questions about the information I forgot to include.

Desired Solution you'd like

When writing a prompt, you can turn on a mode, let's call it "interactive prompt refinement". An AI model presents follow-up questions as a form; the questions can be Yes/No, multiple choice, or free text. The user answers the form, and the AI model then writes a proper prompt according to prompting guidelines, such as assigning a role to the chatbot.
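To make the idea concrete, here is a minimal sketch of how such a form could be modeled. All names here (`QuestionKind`, `RefinementForm`, etc.) are hypothetical illustrations for this request, not an existing Open WebUI API:

```python
# Hypothetical data model for the refinement form -- illustrative only.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class QuestionKind(Enum):
    YES_NO = "yes_no"            # rendered as two buttons
    MULTIPLE_CHOICE = "choice"   # rendered as radio buttons
    FREE_TEXT = "text"           # rendered as a text box

@dataclass
class Question:
    id: str
    text: str
    kind: QuestionKind
    options: list[str] = field(default_factory=list)  # only used for MULTIPLE_CHOICE
    answer: Optional[str] = None  # filled in by the user

@dataclass
class RefinementForm:
    original_prompt: str
    questions: list[Question]
    refined_prompt: str = ""  # produced once all answers are collected
```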

Alternatives Considered

I've considered always asking an AI chatbot to create follow-up questions, but it's hard to answer them in text and it gets messy when copying and pasting prompts.

Additional Context

Here is an example UI which I drew:

(Mockup image: https://github.com/user-attachments/assets/e8536e69-4965-43b2-988a-419e5d81b9f9)

I made a demo of this feature (drafted with Gemini): a local llama3:8b model refines the prompt, and the refined prompt is then sent to an OpenRouter model.

```python
import requests
import json

# Configuration
OLLAMA_URL = "http://localhost:11434/api/generate"
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
OPENROUTER_API_KEY = "YOUR_OPENROUTER_API_KEY"
SMALL_MODEL = "llama3:8b"  # Local model (Mistral, Llama, etc.)
LARGE_MODEL = "openrouter/free"  # Remote high-end model

SYSTEM_PROMPT_ARCHITECT = """
You are a 'Prompt Architect'. Your goal is to transform vague user requests into high-quality 'Super-Prompts' for a large AI.

### WORKFLOW:
1. Analyze user input. Identify missing context, persona, format, or tone.
2. If the input is sufficient: Set status to 'ready'.
3. If input is vague: Set status to 'refining', generate 2-3 multiple-choice questions, AND create a 'Draft Super-Prompt'.

### OUTPUT FORMAT (Strict JSON):
{
  "status": "ready" | "refining",
  "questions": [
    {
      "id": "string",
      "text": "The question text",
      "options": ["Option A", "Option B", "Option C"]
    }
  ],
  "refined_prompt": "The full engineered prompt including Persona, Instructions, and Context."
}

### REFINED_PROMPT RULES:
- Assign a Senior Expert persona.
- Include structural constraints (e.g., 'Use Markdown', 'Be concise').
- If status is 'refining', build the best possible draft based on current info.
"""

def call_small_model(user_input, context=""):
    full_prompt = f"{SYSTEM_PROMPT_ARCHITECT}\n\nUser Input: {user_input}\nContext from user choices: {context}"
    
    response = requests.post(OLLAMA_URL, json={
        "model": SMALL_MODEL,
        "prompt": full_prompt,
        "stream": False,
        "format": "json"  # ask Ollama to constrain the output to valid JSON
    }, timeout=120)
    response.raise_for_status()  # fail loudly instead of a confusing KeyError below
    return json.loads(response.json()['response'])

def call_large_model(final_prompt):
    headers = {
        "Authorization": f"Bearer {OPENROUTER_API_KEY}",
        "Content-Type": "application/json"
    }
    data = {
        "model": LARGE_MODEL,
        "messages": [{"role": "user", "content": final_prompt}]
    }
    response = requests.post(OPENROUTER_URL, headers=headers, json=data, timeout=120)
    response.raise_for_status()  # surface HTTP errors (bad key, rate limit) early
    return response.json()['choices'][0]['message']['content']

def run_workflow():
    print("--- Welcome to the Super-Prompt Orchestrator ---")
    user_query = input("What do you want to achieve? ")
    
    # Step 1: Initial Analysis
    result = call_small_model(user_query)
    
    # Step 2: Interaction Loop (Refining)
    if result['status'] == 'refining':
        print("\n[AI Architect]: I need a bit more detail to get you the best result.")
        print(f"Draft Prompt: {result['refined_prompt'][:100]}...") # Show a snippet
        
        user_selections = []
        for q in result['questions']:
            print(f"\n{q['text']}")
            for i, opt in enumerate(q['options']):
                print(f"{i+1}. {opt}")
            
            choice = input("Select a number (or type your own answer): ")
            val = q['options'][int(choice) - 1] if choice.isdigit() and 1 <= int(choice) <= len(q['options']) else choice
            user_selections.append(f"{q['id']}: {val}")
        
        # Step 3: Re-generate the prompt with new context
        print("\nBuilding final Super-Prompt...")
        result = call_small_model(user_query, context=", ".join(user_selections))

    # Step 4: Execute with Large Model
    print(f"\n--- EXECUTING SUPER-PROMPT ---\n{result['refined_prompt']}\n")
    print("--- FINAL RESPONSE ---")
    final_output = call_large_model(result['refined_prompt'])
    print(final_output)

if __name__ == "__main__":
    run_workflow()
```
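To run the sketch you need Ollama serving llama3:8b locally (default port 11434), a valid OpenRouter API key, and `pip install requests`. The design keeps the interactive refinement loop on the cheap local model and only sends the final Super-Prompt to the remote model once.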

@Classic298 commented on GitHub (Mar 1, 2026):

You can already do that with event emitters

https://docs.openwebui.com/features/extensibility/plugin/development/events
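For context, the linked event mechanism lets a tool or filter function pause mid-turn and collect user input via `__event_call__`. A rough sketch of that approach (event names and payload fields should be verified against the docs above; the `refine_prompt` function itself is hypothetical):

```python
# Hypothetical sketch of prompt refinement inside an Open WebUI tool function.
# The "input" event type and its payload follow the linked docs; verify the
# exact shape against the current documentation before relying on it.
async def refine_prompt(prompt: str, __event_call__=None) -> str:
    questions = [
        "Who is the intended audience?",
        "What output format do you want (e.g. Markdown, bullet list)?",
    ]
    answers = []
    for q in questions:
        # An "input" event opens a text-input dialog and resolves to the answer.
        answer = await __event_call__({
            "type": "input",
            "data": {"title": "Prompt refinement", "message": q, "placeholder": "..."},
        })
        answers.append(f"{q} {answer}")
    # A real implementation would hand `prompt` plus `answers` to a model and
    # return an engineered prompt; plain concatenation keeps the sketch short.
    return prompt + "\n\nAdditional context:\n" + "\n".join(answers)
```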


Reference: github-starred/open-webui#35161