[GH-ISSUE #13790] feat: Small Creature Comforts - wakelock, status indicator and output scroll speed #17034
Originally created by @Mc0023 on GitHub (May 11, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/13790
Check Existing Issues
Problem Description
I had an idea for three quality-of-life improvements to the user experience.
Wakelock: Mobile devices (phones, tablets, laptops) can go to sleep or lock while waiting for a long-running prompt/response to complete, which can cause you to lose part or all of the response if it's a temporary chat.
Status indicator: The send button doesn't show what is happening behind the scenes; you don't know whether Ollama is stalled or stuck loading the model, or whether the model is loaded and running but simply hasn't produced a response yet.
Output scroll speed: It appears that Open WebUI has a maximum text reveal rate. I noticed that even when the response is delivered in a single HTTP response, it is still revealed word by word at a fixed speed.
Desired Solution you'd like
Web browser Wake Lock API: Most current browsers have added the Wake Lock API, which allows JavaScript to request that the device not go to sleep. It's supported in Chrome/Chromium, Firefox, Safari, and others I'm not as familiar with. It would be a nice improvement to have the page request a wake lock while a query is running, so phones, tablets, laptops, etc. do not lock or go to sleep while waiting for a long-running query to complete. A rough sketch follows.
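Here's a minimal sketch of how this could work (the function names are mine, and I'm assuming a single in-flight query at a time):

```ts
// Rough sketch: hold a screen wake lock while a response is streaming.
// navigator.wakeLock is the standard Screen Wake Lock API; feature-detect it,
// since not every browser supports it.
let wakeLock: WakeLockSentinel | null = null;

async function acquireWakeLock(): Promise<void> {
  if (!('wakeLock' in navigator)) return; // unsupported browser: do nothing
  try {
    wakeLock = await navigator.wakeLock.request('screen');
  } catch (err) {
    // The request can be denied (e.g. battery saver, permissions policy).
    console.warn('Wake lock request failed:', err);
  }
}

function releaseWakeLock(): void {
  void wakeLock?.release();
  wakeLock = null;
}
```

One caveat I know of: the browser releases the lock automatically when the tab is hidden, so a `visibilitychange` listener that re-acquires the lock while a query is still running would probably also be needed.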
Status indicator: I know the send button does show when it's waiting on a response, but it would be nice to know additional status while waiting, i.e. whether the model is loading or Ollama has stopped responding, etc. Obviously this depends on the information the inference engine provides, but if possible it would be a nice reassurance. A sketch of one client-side approach is below.
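Since (as far as I know) the streaming APIs don't report an explicit "loading" phase, the client could at least infer a coarse status from the stream itself. This is just an illustrative sketch, not Open WebUI's actual code; the status names and the stall threshold are made up:

```ts
// Derive a coarse status from a streaming fetch: "waiting" until the first
// bytes arrive (model loading/queued), "streaming" while tokens flow, and
// "stalled" if nothing arrives for a while.
type SendStatus = 'idle' | 'waiting' | 'streaming' | 'stalled';

async function streamWithStatus(
  url: string,
  body: unknown,
  onStatus: (s: SendStatus) => void,
  stallMs = 10_000, // hypothetical threshold: no bytes for 10 s => "stalled"
): Promise<void> {
  onStatus('waiting');
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  const reader = res.body!.getReader();

  let stallTimer: ReturnType<typeof setTimeout> | undefined;
  const resetStallTimer = () => {
    clearTimeout(stallTimer);
    stallTimer = setTimeout(() => onStatus('stalled'), stallMs);
  };
  resetStallTimer();

  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    if (value?.length) {
      onStatus('streaming'); // bytes arriving: model is loaded and generating
      resetStallTimer();
    }
  }
  clearTimeout(stallTimer);
  onStatus('idle');
}
```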
Output scroll speed: It would be nice to have an option to speed that up for fast-responding models, since the current rate delays what the user can do next, like starting the read-aloud TTS feature. A sketch of the general idea is below.
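I don't know how Open WebUI actually implements the reveal, but the general technique could be a buffered "typewriter" with a configurable rate. Everything here (the names, the `charsPerFrame` setting) is hypothetical:

```ts
// Illustrative sketch: buffer incoming text and reveal N characters per
// animation frame. A higher charsPerFrame reveals text faster; flushing the
// whole buffer at once would disable the effect entirely.
function createRevealer(render: (visible: string) => void, charsPerFrame = 4) {
  let buffer = '';
  let shown = '';
  let scheduled = false;

  const tick = () => {
    scheduled = false;
    if (buffer.length === 0) return;
    const n = Math.min(charsPerFrame, buffer.length);
    shown += buffer.slice(0, n);
    buffer = buffer.slice(n);
    render(shown);
    if (buffer.length > 0) schedule();
  };

  const schedule = () => {
    if (!scheduled) {
      scheduled = true;
      requestAnimationFrame(tick);
    }
  };

  return {
    push(chunk: string) {
      buffer += chunk;
      schedule();
    },
  };
}
```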
I'm going to take a look and try my hand at implementing these, but I am a novice at best with programming, so others will most likely have better ideas for cleaner implementations.
Alternatives Considered
Additional Context
No response
@Mc0023 commented on GitHub (May 11, 2025):
Edit - corrected formatting.