feat: Streamlined assisted editing in the playground #403

Closed
opened 2025-11-11 14:20:24 -06:00 by GiteaMirror · 1 comment
Owner

Originally created by @robertvazan on GitHub (Mar 3, 2024).

Is your feature request related to a problem? Please describe.
I understand that the playground was designed to be multi-purpose. I am testing its applicability to assisted editing as requested in #987. I draw inspiration from NovelAI and GitHub Copilot (except this is for text only). The idea is that I paste source content (notes, quotes, prior text, similar texts) at the top and then let the LLM generate final text while I read it top-down and make edits. Whenever I edit something, I want the LLM to regenerate the text that follows.

Since local models, and especially the smaller ones, need more guidance, particularly when the content is more challenging, the editing experience should be as streamlined as possible to allow frequent switching between the LLM and manual writing.

Describe the solution you'd like
I am thinking of several changes that could make the playground practical for this use case:

  • The cursor should not move until I accept the proposed text. My draft (which serves as the prompt) should be kept separate from the proposed continuation. A quick-and-dirty solution is to generate into a separate text area. A neater solution is to show the continuation inline as ghost text in a different color, akin to how GitHub Copilot does it.
  • Have shortcuts to accept the next word/sentence/paragraph (delimited by space/symbol/newline). The accepted text becomes part of the draft/prompt and the cursor moves to its end.
  • Have shortcuts to start and stop generation of the continuation.
  • Have an option to enable reactive generation: the generated continuation is updated automatically whenever the draft text changes.
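The accept-next-chunk behavior above could be sketched as follows. This is a hypothetical helper, not code from the Open WebUI codebase; the function name, the `Granularity` type, and the delimiter rules (whitespace for words, `.!?` for sentences, newline for paragraphs) are illustrative assumptions.

```typescript
// Hypothetical helper: split off the next acceptable chunk from the
// proposed continuation. The accepted part would be appended to the
// draft/prompt; the remainder stays as ghost text.
type Granularity = "word" | "sentence" | "paragraph";

function acceptNextChunk(
  continuation: string,
  granularity: Granularity
): { accepted: string; remaining: string } {
  let end: number;
  switch (granularity) {
    case "word": {
      // A word is leading whitespace plus non-whitespace, plus one
      // trailing separator if present.
      const match = continuation.match(/^\s*\S+\s?/);
      end = match ? match[0].length : continuation.length;
      break;
    }
    case "sentence": {
      // A sentence ends at the first ., ! or ? (lazy match), plus one
      // trailing space if present.
      const match = continuation.match(/^[\s\S]*?[.!?]\s?/);
      end = match ? match[0].length : continuation.length;
      break;
    }
    case "paragraph": {
      // A paragraph ends at the first newline.
      const idx = continuation.indexOf("\n");
      end = idx === -1 ? continuation.length : idx + 1;
      break;
    }
  }
  return {
    accepted: continuation.slice(0, end),
    remaining: continuation.slice(end),
  };
}
```

Each accepted chunk would then be appended to the draft and the cursor moved to its end, exactly as the bullet describes.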

Additional context
The idea here is to work with the limitations and advantages of small local models. On one hand, these models require more guidance and thus a more collaborative UI than cloud models. On the other hand, it is okay to waste compute resources on them, for example to support the above-mentioned reactive generation.
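Reactive generation could be sketched as a debounced trigger that cancels any in-flight request when the draft changes again. This is an illustrative assumption, not actual Open WebUI code; `generateContinuation` stands in for whatever completion API the playground uses, and the 300 ms debounce is an arbitrary choice.

```typescript
// Hypothetical sketch: call `onContinuation` with a fresh continuation
// whenever the draft changes, debounced, cancelling stale requests.
function makeReactiveGenerator(
  generateContinuation: (draft: string, signal: AbortSignal) => Promise<string>,
  onContinuation: (text: string) => void,
  debounceMs = 300
): (draft: string) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  let controller: AbortController | undefined;

  return (draft: string) => {
    if (timer !== undefined) clearTimeout(timer); // restart the debounce window
    controller?.abort(); // drop any stale in-flight generation
    timer = setTimeout(() => {
      controller = new AbortController();
      generateContinuation(draft, controller.signal)
        .then(onContinuation)
        .catch(() => {
          // Aborted requests are expected when the draft changes; ignore.
        });
    }, debounceMs);
  };
}
```

On small local models this wastes some generations, which is exactly the trade-off described above: compute is cheap, so regenerating eagerly keeps the ghost text current.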

GiteaMirror added the enhancement, good first issue, help wanted, non-core labels 2025-11-11 14:20:24 -06:00
Author
Owner

@tjbck commented on GitHub (May 6, 2024):

Closing in favour of https://github.com/open-webui/open-webui/issues/2000. Let's continue our discussion there!

Reference: github-starred/open-webui#403