mirror of
https://github.com/open-webui/open-webui.git
synced 2026-05-06 10:58:17 -05:00
[GH-ISSUE #11228] [Bug] OpenWebUI Hangs on Black Screen When Ollama Server is Down #16152
Originally created by @MillionthOdin16 on GitHub (Mar 5, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/11228
Installation Method: Git Clone
Open WebUI Version: 0.5.19
Ollama Version (if applicable): 0.5.13
Operating System: Debian 12
Browser (if applicable): Chrome

Expected Behavior
OpenWebUI should still load the website for a user if a connection to an Ollama server fails. The user shouldn't be left on a blank black page with no functionality or indication of an error.
Actual Behavior
When the Ollama server is down, OpenWebUI hangs on a black screen when a user visits the landing page.
Steps to Reproduce
Logs & Screenshots
Inspecting the network requests shows that when the Ollama server is down, a request to the models endpoint hangs indefinitely.
Additional Information
When an Ollama connection is configured in the settings and the Ollama server is down, the Open Web UI hangs on a black screen upon opening. If the Ollama server is brought back online and a connection can be made, the user can refresh the page and the Open Web UI loads normally.
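To see why the hanging models request blocks the whole page: an unbounded await on a dead endpoint never returns, while a bounded one fails fast and lets the caller render an error instead. A minimal stdlib sketch of that difference (the sleeping coroutine and the 0.1 s bound are illustrative assumptions, not Open WebUI's actual code):

```python
import asyncio

async def hung_model_list():
    # stand-in for a models request to a powered-off Ollama server
    await asyncio.sleep(3600)  # never answers within any reasonable window

async def fetch_with_timeout(timeout_s: float):
    # a bound turns an indefinite hang into a reportable failure
    try:
        return await asyncio.wait_for(hung_model_list(), timeout=timeout_s)
    except asyncio.TimeoutError:
        return None  # the frontend could load and show "connection failed"

result = asyncio.run(fetch_with_timeout(0.1))
print(result)  # None
```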
@Classic298 commented on GitHub (Mar 5, 2025):
That's quite an old version of Open WebUI. Is this still the case on newer versions as well?
@MillionthOdin16 commented on GitHub (Mar 5, 2025):
I'm sorry. Yes, it's the current version. I meant 0.5.19.
@aaron-r-campbell commented on GitHub (Mar 5, 2025):
I can confirm this issue on my end as well. Running the latest version in docker-compose.
I have multiple Ollama instances, some of which get powered off at night. When I attempt to load the web UI without first starting all the Ollama servers, the login page loads, but the main page hangs for around 5-10 minutes.
I currently get around this by leaving these servers disabled in the admin settings until I need to use them, but it would be preferable if the UI would load first and display some error about the connection rather than hang.
@tjbck commented on GitHub (Mar 5, 2025):
This has always been the intended behavior. Some users have unreliable or slow network connections, and in such cases, it is preferable to wait until all models are fully loaded rather than enforce a strict timeout that could result in some models not appearing at all. Missing models due to a timeout would likely cause more issues and confusion than waiting a bit longer for them to load.
For users who want to customize this behavior, we provide the flexibility to manually adjust the timeout by setting the AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST environment variable. You can find more details on how to configure this here: Environment Configuration.
@MillionthOdin16 commented on GitHub (Mar 5, 2025):
@tjbck Thanks for the detailed explanation about the intended behavior and the option to adjust AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST. I get the rationale behind waiting for all models to load—avoiding partial failures is a valid concern for users with unreliable connections. However, I think this design choice still has significant usability gaps that warrant revisiting, especially given how it scales with multiple API connections.

For users like @aaron-r-campbell and me, the current behavior—hanging on a black screen for 5-10 minutes or indefinitely when an API server is down—isn’t just a delay; it’s a complete blocker with zero feedback. If you have multiple API connections configured, a single failed endpoint makes the entire server unusable. There’s no way to disable that failed connection through the UI—you’re forced to either manually edit the configuration database or craft a custom POST request to remove the endpoint. That’s a steep barrier for most users and a far cry from intuitive.
Without an error message or a partially loaded UI, it’s impossible to tell if Open WebUI is broken, stuck, or just slow, which risks confusing less technical users even more than missing models would. A timeout might hide some models, but the status quo effectively hides the entire application. Could we explore a middle ground? For example:

- Load the UI first and surface a clear error for any endpoint that can’t be reached, rather than blocking the whole page.
- Apply a bounded per-connection timeout so one dead endpoint doesn’t stall the others.
- Allow disabling a failed connection from the admin UI instead of requiring database edits or custom POST requests.

This would maintain the reliability you highlighted while making the experience more resilient and user-friendly, especially for multi-endpoint setups. I’d be happy to discuss further or contribute a PR if the team’s open to it. Reopening this for discussion could help align the behavior with user expectations—thoughts?
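The per-connection handling proposed above, where one dead endpoint cannot stall the rest, can be sketched with the stdlib alone. The endpoint names, delays, and the 0.2 s bound are illustrative assumptions, not Open WebUI internals:

```python
import asyncio

async def list_models(name: str, delay: float):
    # stand-in for one Ollama connection's model-list request
    await asyncio.sleep(delay)
    return [f"{name}/model-a"]

async def gather_models(timeout_s: float = 0.2):
    # per-endpoint timeouts: a powered-off server fails fast and is reported,
    # while healthy endpoints still contribute their models
    endpoints = {
        "fast": list_models("fast", 0.01),
        "dead": list_models("dead", 10.0),  # simulates a hung endpoint
    }
    results = await asyncio.gather(
        *(asyncio.wait_for(coro, timeout_s) for coro in endpoints.values()),
        return_exceptions=True,
    )
    models, failed = [], []
    for name, res in zip(endpoints, results):
        if isinstance(res, BaseException):
            failed.append(name)  # could be surfaced as a UI error banner
        else:
            models.extend(res)
    return models, failed

models, failed = asyncio.run(gather_models())
print(models, failed)  # ['fast/model-a'] ['dead']
```

Unlike a single global bound such as AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST, this reports exactly which endpoint failed, which is what an error banner would need.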
@bloomberg21 commented on GitHub (Mar 7, 2025):
I would totally agree with this.
This has to be the weirdest "feature" I have ever seen in an app; it doesn't make any sense even after reading the devs' justification.
Adding: the Ollama API should NOT be on by default. This is what caused an issue on my upgrade today and introduced me to this "timeout feature" for the first time.
@Pb-207 commented on GitHub (Mar 7, 2025):
Same issue.
@danielrosehill commented on GitHub (Apr 19, 2025):
Same issue. As someone running this on a VPS and with zero intention of using Ollama this is rendering owui almost unusable for me, too.