[GH-ISSUE #11228] [Bug] OpenWebUI Hangs on Black Screen When Ollama Server is Down #31681

Closed
opened 2026-04-25 05:34:55 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @MillionthOdin16 on GitHub (Mar 5, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/11228

Check Existing Issues

  • I have searched the existing issues and discussions.

Installation Method

Git Clone

Open WebUI Version

0.5.19

Ollama Version (if applicable)

0.5.13

Operating System

Debian 12

Browser (if applicable)

Chrome

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have checked the browser console logs.
  • I have checked the Docker container logs.
  • I have listed steps to reproduce the bug in detail.

Expected Behavior

OpenWebUI should still load the website for a user if a connection to an Ollama server fails. The user shouldn't be left on a blank black page with no functionality or indication of an error.

Actual Behavior

When the Ollama server is down, OpenWebUI hangs on a black screen when a user visits the landing page.

Steps to Reproduce

  1. Configure an Ollama connection in the Open Web UI settings.
  2. Ensure the Ollama server is down.
  3. Open the Open Web UI.
  4. Observe that the page hangs on a black screen.
  5. Bring the Ollama server back online.
  6. Refresh the Open Web UI page.
  7. Observe that the page loads normally.

Logs & Screenshots

Inspecting the network requests shows that when the Ollama server is down, a request to the models endpoint hangs indefinitely.

Additional Information

When an Ollama connection is configured in the settings and the Ollama server is down, the Open Web UI hangs on a black screen upon opening. If the Ollama server is brought back online and a connection can be made, the user can refresh the page and the Open Web UI loads normally.

GiteaMirror added the bug label 2026-04-25 05:34:55 -05:00

@Classic298 commented on GitHub (Mar 5, 2025):

That's quite an old version of OpenWebUI; is this still the case on newer versions as well?


@MillionthOdin16 commented on GitHub (Mar 5, 2025):

I'm sorry. Yes, it's the current version. I meant 0.5.19.


@aaron-r-campbell commented on GitHub (Mar 5, 2025):

I can confirm this issue on my end as well. Running the latest version in docker-compose.

I have multiple Ollama instances, some of which get powered off at night. When attempting to load the web UI without first starting all the Ollama servers, the login page loads, but the main page hangs for around 5-10 minutes.

I currently get around this by leaving these servers disabled in the admin settings until I need to use them, but it would be preferable if the UI would load first and display some error about the connection rather than hang.


@tjbck commented on GitHub (Mar 5, 2025):

This has always been the intended behavior. Some users have unreliable or slow network connections, and in such cases, it is preferable to wait until all models are fully loaded rather than enforce a strict timeout that could result in some models not appearing at all. Missing models due to a timeout would likely cause more issues and confusion than waiting a bit longer for them to load.

For users who want to customize this behavior, we provide the flexibility to manually adjust the timeout by setting the AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST environment variable. You can find more details on how to configure this here: Environment Configuration (https://docs.openwebui.com/getting-started/env-configuration/#aiohttp_client_timeout_model_list).
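As a concrete example of the workaround described above, the variable can be exported before launching the server; the 10-second value here is arbitrary, and AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST itself is the documented Open WebUI setting:

```shell
# Cap the model-list request to each Ollama endpoint at 10 seconds
# (example value) so a dead endpoint cannot block page load indefinitely.
export AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST=10
echo "$AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST"
```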


@MillionthOdin16 commented on GitHub (Mar 5, 2025):

@tjbck Thanks for the detailed explanation about the intended behavior and the option to adjust AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST. I get the rationale behind waiting for all models to load—avoiding partial failures is a valid concern for users with unreliable connections. However, I think this design choice still has significant usability gaps that warrant revisiting, especially given how it scales with multiple API connections.

For users like @aaron-r-campbell and me, the current behavior—hanging on a black screen for 5-10 minutes or indefinitely when an API server is down—isn’t just a delay; it’s a complete blocker with zero feedback. If you have multiple API connections configured, a single failed endpoint makes the entire server unusable. There’s no way to disable that failed connection through the UI—you’re forced to either manually edit the configuration database or craft a custom POST request to remove the endpoint. That’s a steep barrier for most users and a far cry from intuitive.

Without an error message or a partially loaded UI, it’s impossible to tell if Open WebUI is broken, stuck, or just slow, which risks confusing less technical users even more than missing models would. A timeout might hide some models, but the status quo effectively hides the entire application. Could we explore a middle ground? For example:

  1. Set a reasonable default timeout (e.g., 30-60 seconds) instead of waiting indefinitely.
  2. Load the UI first and display a clear error (e.g., “Ollama server [name] unavailable—check connection or disable in settings”) when a connection fails.
  3. Add a UI option to disable or remove failed API endpoints without needing manual DB edits or POST requests.
  4. Keep the env variable for advanced users to tweak timeouts as needed.

This would maintain the reliability you highlighted while making the experience more resilient and user-friendly, especially for multi-endpoint setups. I’d be happy to discuss further or contribute a PR if the team’s open to it. Reopening this for discussion could help align the behavior with user expectations—thoughts?
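The "bounded wait, degrade gracefully" behavior proposed in points 1 and 2 could be sketched roughly as follows. This is illustrative only, not Open WebUI's actual code; fetch_models and _query are hypothetical names, and _query stands in for the real HTTP call to an Ollama endpoint:

```python
import asyncio


async def _query(host: str) -> list[str]:
    # Stand-in for the real HTTP request to the Ollama models endpoint.
    # Here it simulates a down server that never answers.
    await asyncio.sleep(60)
    return ["llama3"]


async def fetch_models(host: str, timeout_s: float = 30.0) -> list[str]:
    # Hypothetical helper: try one endpoint, but never block the UI
    # longer than timeout_s. On timeout or connection failure, return
    # an empty list so the caller can render the UI and show an error
    # instead of hanging.
    try:
        return await asyncio.wait_for(_query(host), timeout=timeout_s)
    except (asyncio.TimeoutError, OSError):
        return []


# With a short timeout, the unreachable endpoint yields no models
# rather than an indefinite hang.
models = asyncio.run(fetch_models("http://ollama:11434", timeout_s=0.1))
print(models)  # -> []
```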


@bloomberg21 commented on GitHub (Mar 7, 2025):

> @tjbck Thanks for the detailed explanation about the intended behavior and the option to adjust AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST. I get the rationale behind waiting for all models to load—avoiding partial failures is a valid concern for users with unreliable connections. However, I think this design choice still has significant usability gaps that warrant revisiting, especially given how it scales with multiple API connections.
>
> For users like @aaron-r-campbell and me, the current behavior—hanging on a black screen for 5-10 minutes or indefinitely when an API server is down—isn’t just a delay; it’s a complete blocker with zero feedback. If you have multiple API connections configured, a single failed endpoint makes the entire server unusable. There’s no way to disable that failed connection through the UI—you’re forced to either manually edit the configuration database or craft a custom POST request to remove the endpoint. That’s a steep barrier for most users and a far cry from intuitive.
>
> Without an error message or a partially loaded UI, it’s impossible to tell if Open WebUI is broken, stuck, or just slow, which risks confusing less technical users even more than missing models would. A timeout might hide some models, but the status quo effectively hides the entire application. Could we explore a middle ground? For example:
>
>   1. Set a reasonable default timeout (e.g., 30-60 seconds) instead of waiting indefinitely.
>   2. Load the UI first and display a clear error (e.g., “Ollama server [name] unavailable—check connection or disable in settings”) when a connection fails.
>   3. Add a UI option to disable or remove failed API endpoints without needing manual DB edits or POST requests.
>   4. Keep the env variable for advanced users to tweak timeouts as needed.
>
> This would maintain the reliability you highlighted while making the experience more resilient and user-friendly, especially for multi-endpoint setups. I’d be happy to discuss further or contribute a PR if the team’s open to it. Reopening this for discussion could help align the behavior with user expectations—thoughts?

I would totally agree with this.

This has to be the weirdest "feature" I have ever seen in an app; it makes no sense even after reading the devs' justification.

Adding: the Ollama API should NOT be on by default. This is what caused an issue during my upgrade today and introduced me to this "timeout feature" for the first time.


@Pb-207 commented on GitHub (Mar 7, 2025):

Same issue.


@danielrosehill commented on GitHub (Apr 19, 2025):

Same issue. As someone running this on a VPS and with zero intention of using Ollama this is rendering owui almost unusable for me, too.


Reference: github-starred/open-webui#31681