[GH-ISSUE #13790] feat: Small Creature Comforts - wakelock, status indicator and output scroll speed #32563

Closed
opened 2026-04-25 06:29:52 -05:00 by GiteaMirror · 1 comment

Originally created by @Mc0023 on GitHub (May 11, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/13790

Check Existing Issues

  • I have searched the existing issues and discussions.

Problem Description

I have ideas for three quality-of-life improvements to the user experience.

  1. Wakelock: Mobile devices (phones, tablets, laptops) can go to sleep or lock while waiting for a long-running prompt/response to complete, which can cause you to lose part or all of the response if it’s a temporary chat.

  2. Status indicator: The send button doesn’t show what is happening behind the scenes; you can’t tell whether Ollama is stalled or stuck while loading the model, or whether the model is loaded and running but simply hasn’t produced a response yet.

  3. Output scroll speed: Open WebUI appears to have a maximum text reveal rate. Even when the response is delivered in a single HTTP response, the text is still revealed word by word at a fixed speed.

Desired Solution you'd like

  1. Web browser Wake Lock API: Most current browsers support the Screen Wake Lock API, which lets JavaScript request that the device not go to sleep. It is supported in Chrome/Chromium, Firefox, and Safari, among others. It would be a nice improvement for the page to request a wake lock while a query is running, so phones, tablets, laptops, etc. do not lock or go to sleep while waiting for a long-running query to complete.

  2. Status indicator: I know the send button shows when it’s waiting on a response, but it would be nice to surface additional status about what is happening while waiting, e.g. whether the model is still loading or Ollama has stopped responding. Obviously this depends on the information the inference engine provides, but where possible it would be a nice reassurance.

  3. Output scroll speed: It would be nice to have an option to speed this up for fast-responding models, since it delays what the user can do next, such as starting the read-aloud TTS feature.
    I’m going to take a look and try my hand at implementing these, but I am a novice at best with programming, so others will most likely have better ideas for cleaner implementations.
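For item 1 above, a minimal sketch of how the wake lock could wrap a long-running request. This is an illustration, not Open WebUI code: the helper name `withWakeLock` is hypothetical, and the lock request is injected as a callback so the logic can also run outside a browser (in the app it would be `() => navigator.wakeLock.request("screen")`).

```typescript
// Sketch only: hold a screen wake lock for the duration of a task.
// `requestLock` is injected; in the browser pass
//   () => navigator.wakeLock.request("screen")

interface LockSentinel {
  release(): Promise<void>;
}

async function withWakeLock<T>(
  requestLock: () => Promise<LockSentinel>,
  task: () => Promise<T>,
): Promise<T> {
  let lock: LockSentinel | null = null;
  try {
    // The Wake Lock API may be unsupported or the request denied;
    // degrade gracefully and run the task without a lock.
    lock = await requestLock();
  } catch {
    lock = null;
  }
  try {
    return await task();
  } finally {
    // Always release so the device can sleep once the response is done.
    await lock?.release();
  }
}
```

Note that browsers also release the lock automatically when the tab is hidden, so a full implementation would likely re-acquire it on `visibilitychange` while a query is still running.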
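For item 2, even without extra status data from the inference engine, the frontend could at least distinguish "request sent, no tokens yet" from "streaming", and flag a probable stall after a silence threshold. A hedged sketch (all names hypothetical; the clock is injected to keep it testable):

```typescript
// Sketch: derive a UI status from stream events plus a stall timeout.
type ChatStatus = "idle" | "waiting" | "streaming" | "done";

function makeStatusTracker(stallMs: number, now: () => number = Date.now) {
  let status: ChatStatus = "idle";
  let lastEvent = now();

  return {
    sent() { status = "waiting"; lastEvent = now(); },    // request dispatched
    token() { status = "streaming"; lastEvent = now(); }, // a chunk arrived
    finished() { status = "done"; },
    // Poll from the UI (e.g. on an interval) to pick the indicator to show.
    current(): ChatStatus | "stalled" {
      const active = status === "waiting" || status === "streaming";
      return active && now() - lastEvent > stallMs ? "stalled" : status;
    },
  };
}
```

Anything finer-grained (e.g. "model loading") would still depend on the backend exposing that information, as the issue notes.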
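For item 3, one way the typewriter effect could be made configurable is a reveal loop whose chunk size and tick delay are parameters; a chunk size of `Infinity` effectively disables the effect for fully delivered responses. This is a sketch of the idea, not the actual Open WebUI implementation:

```typescript
// Sketch: reveal `text` in word chunks at a configurable rate.
// wordsPerTick = Infinity shows the whole response immediately.
async function revealText(
  text: string,
  wordsPerTick: number,
  tickMs: number,
  emit: (visible: string) => void,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<void> {
  const words = text.split(/\s+/).filter((w) => w.length > 0);
  for (let i = 0; i < words.length; i += wordsPerTick) {
    // Emit the prefix visible so far, then pause before the next chunk.
    emit(words.slice(0, i + wordsPerTick).join(" "));
    if (i + wordsPerTick < words.length) await sleep(tickMs);
  }
}
```

Exposing `wordsPerTick`/`tickMs` as user settings would let fast responses finish rendering sooner, so follow-up actions like read-aloud TTS are not delayed.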

Alternatives Considered

  1. The status indicator may not be possible due to a lack of status data from inference engines.

Additional Context

No response


@Mc0023 commented on GitHub (May 11, 2025):

Edit - corrected formatting.


Reference: github-starred/open-webui#32563