[GH-ISSUE #2337] bug: hanging connection causing blank screen #51511

Closed
opened 2026-05-05 12:33:22 -05:00 by GiteaMirror · 20 comments

Originally created by @Zambito1 on GitHub (May 17, 2024).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/2337

Bug Report

Description

Bug Summary:
If the Open WebUI backend hangs indefinitely (here, while waiting on an unresponsive Ollama connection), the UI shows a blank screen with only the keybinding help button in the bottom right.

![image](https://github.com/open-webui/open-webui/assets/7004857/e1b08129-39a4-4323-ba36-18ae6aa5c980)

Steps to Reproduce:

I noticed this because I run Open WebUI on my desktop, and Ollama on another machine. If I connected to a VPN on my desktop, LAN connections from my desktop would hang indefinitely. When I tried to boot up Open WebUI, I would just see the screen above.

The easiest way to reproduce this is to run nc -lp 11434 and try to use that as the Ollama server. You will see something like:

```
$ nc -lp 11434
GET /api/tags HTTP/1.1
Host: host.docker.internal:11434
Accept: */*
Accept-Encoding: gzip, deflate
User-Agent: Python/3.11 aiohttp/3.9.5
```

The Open WebUI backend tries to connect to Ollama, but nc never responds to the request or closes the connection. You will see the blank screen above when you open the web interface.
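
(Illustration, not Open WebUI's actual code: the hang is what you get from an HTTP client call with no effective timeout against a peer that accepts the connection but never replies. A minimal aiohttp sketch of the failure mode and the obvious fix — fetch_tags and the 5-second budget are assumptions:)

```python
# Hypothetical repro/fix sketch. A GET against a peer that accepts the TCP
# connection but never replies blocks until the client-side timeout fires;
# with the timeout disabled, it blocks forever.
import asyncio

import aiohttp


async def fetch_tags(base_url: str) -> dict | None:
    # Bound the whole request (connect + read) so a silent peer can't hang us.
    timeout = aiohttp.ClientTimeout(total=5)
    try:
        async with aiohttp.ClientSession(timeout=timeout) as session:
            async with session.get(f"{base_url}/api/tags") as resp:
                return await resp.json()
    except (aiohttp.ClientError, asyncio.TimeoutError) as err:
        print(f"Connection error: {err}")  # mirrors the backend's log style
        return None  # degrade gracefully instead of blocking the caller


# asyncio.run(fetch_tags("http://host.docker.internal:11434"))
```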

I was able to work around my VPN issue with split tunneling: only the traffic I actually want on the VPN is routed over the virtual interface.

Expected Behavior:

The web interface still loads, and ideally the backend times out and surfaces a connection error.

Actual Behavior:

You are hit with a wall of nothing.

Environment

  • Open WebUI Version: v0.1.124

  • Ollama (if applicable): NA

  • Operating System: NA

  • Browser (if applicable): NA

Reproduction Details

Confirmation:

  • [x] I have read and followed all the instructions provided in the README.md.
  • [x] I am on the latest version of both Open WebUI and Ollama.
  • [ ] I have included the browser console logs.
  • [x] I have included the Docker container logs.

Logs and Screenshots

Browser Console Logs:
NA; the connection issue is on the backend, and nothing fails in the browser console logs.

Docker Container Logs:
This is what happens when you load the index page:

```
INFO:     172.17.0.1:58940 - "GET /api/config HTTP/1.1" 200 OK
INFO:     172.17.0.1:58940 - "GET /api/v1/auths/ HTTP/1.1" 200 OK
INFO:     172.17.0.1:58940 - "GET /api/changelog HTTP/1.1" 200 OK
INFO:apps.ollama.main:get_all_models()
```

If you kill nc, the UI will actually show up:

```
ERROR:apps.ollama.main:Connection error: Server disconnected
INFO:apps.ollama.main:get_all_models()
ERROR:apps.ollama.main:Connection error: Cannot connect to host host.docker.internal:11434 ssl:default [Connection refused]
INFO:     172.17.0.1:40308 - "GET /ollama/api/tags HTTP/1.1" 200 OK
INFO:apps.openai.main:get_all_models()
INFO:apps.openai.main:get_all_models()
INFO:     172.17.0.1:40308 - "GET /openai/api/models HTTP/1.1" 200 OK
INFO:     172.17.0.1:40308 - "GET /litellm/api/v1/models HTTP/1.1" 200 OK
INFO:     172.17.0.1:40308 - "GET /api/v1/modelfiles/ HTTP/1.1" 200 OK
INFO:     172.17.0.1:40308 - "GET /api/v1/prompts/ HTTP/1.1" 200 OK
INFO:     172.17.0.1:40308 - "GET /api/v1/documents/ HTTP/1.1" 200 OK
INFO:     172.17.0.1:40308 - "GET /api/v1/chats/tags/all HTTP/1.1" 200 OK
INFO:apps.ollama.main:get_all_models()
ERROR:apps.ollama.main:Connection error: Cannot connect to host host.docker.internal:11434 ssl:default [Connection refused]
INFO:apps.ollama.main:get_all_models()
ERROR:apps.ollama.main:Connection error: Cannot connect to host host.docker.internal:11434 ssl:default [Connection refused]
INFO:     172.17.0.1:40308 - "GET /ollama/api/tags HTTP/1.1" 200 OK
INFO:     172.17.0.1:40308 - "GET /static/favicon.png HTTP/1.1" 304 Not Modified
INFO:apps.ollama.main:get_all_models()
INFO:     172.17.0.1:40308 - "GET /api/v1/chats/ HTTP/1.1" 200 OK
ERROR:apps.ollama.main:Connection error: Cannot connect to host host.docker.internal:11434 ssl:default [Connection refused]
ERROR:apps.ollama.main:Connection error: Cannot connect to host host.docker.internal:11434 ssl:default [Connection refused]
INFO:     172.17.0.1:49174 - "GET /ollama/api/version HTTP/1.1" 500 Internal Server Error
INFO:apps.openai.main:get_all_models()
INFO:apps.openai.main:get_all_models()
INFO:     172.17.0.1:49174 - "GET /openai/api/models HTTP/1.1" 200 OK
INFO:     172.17.0.1:49174 - "GET /litellm/api/v1/models HTTP/1.1" 200 OK
```

Screenshots (if applicable):
See above.

Installation Method

Docker, but probably does not matter.

@tjbck commented on GitHub (May 17, 2024):

PR welcome!

@tjbck commented on GitHub (May 17, 2024):

Updated some code on our dev branch, let us know if that did anything for you.

@Zambito1 commented on GitHub (May 21, 2024):

Hey @tjbck, I got around to testing the dev branch today. The problem still seems to happen. I should have said to use nc -lk 11434 or ncat -lk 11434; those keep listening for new connections after the first one. If you do not use -k, refreshing the page will kill netcat, and the UI will appear.
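
(If nc flags differ across platforms, a small Python listener does the same job: accept connections and never answer. A hypothetical repro helper only, not project code:)

```python
# Repro helper roughly equivalent to `nc -lk 11434`: accept TCP connections,
# never respond, never close -- so every client request hangs indefinitely.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 11434))
srv.listen()
held = []  # keep sockets alive so the connections are never closed
while True:
    conn, addr = srv.accept()
    print(f"accepted {addr}; saying nothing")
    held.append(conn)
```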

@justinh-rahb commented on GitHub (May 22, 2024):

@Hiradpi I'm going to guess this is on Windows. You'll need to set ENABLE_LITELLM=False as an environment variable, since it's not supported when running directly on Windows with the method we're using, and it will soon be deprecated and removed from the project anyway.

@Zambito1 commented on GitHub (May 23, 2024):

For clarity, this is unrelated to the issue originally described. I just tested again on the latest dev branch, and the UI is still blank when the Ollama service is reachable but unresponsive (such as when netcat, rather than Ollama, is listening on the port).

@tjbck commented on GitHub (May 26, 2024):

Just pushed a massive refactor/update to our dev branch, please try again and let us know how it went! Much thanks!

@Zambito1 commented on GitHub (May 26, 2024):

Now the UI will be blank for 5 seconds, and then it will appear after the backend connection to Ollama times out.

![image](https://github.com/open-webui/open-webui/assets/7004857/95a50983-9b2b-4c18-bf01-58cf79856a82)

The UI appears after the models request finishes. It is definitely much better than an indefinite blank screen (enough that I would consider the original issue resolved), but I do think the UI should be able to show before that request completes. Feel free to close this issue, or use it to continue tracking the UI not appearing until the request finishes, at your discretion.
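
(One way to get the page up before that round-trip completes, sketched under assumptions: get_all_models below is a stand-in for the backend call visible in the logs, and the 5-second budget is invented. The idea is to serve the page immediately and refresh the model list in the background:)

```python
# Hypothetical sketch, not the project's code: render immediately, fetch later.
import asyncio


async def get_all_models() -> list[dict]:
    """Stand-in for the backend's Ollama fetch seen in the logs above."""
    await asyncio.sleep(60)  # simulate a peer that never answers
    return []


MODELS: list[dict] = []  # starts empty; the frontend can render right away


async def refresh_models() -> None:
    global MODELS
    try:
        # Bound the upstream call so a hung Ollama can't stall it forever.
        MODELS = await asyncio.wait_for(get_all_models(), timeout=5)
    except asyncio.TimeoutError:
        MODELS = []  # keep the UI usable; surface the error separately


async def handle_index() -> list[dict]:
    asyncio.create_task(refresh_models())  # don't block the response on it
    return MODELS  # likely empty on first load; refreshed shortly after
```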

@tjbck commented on GitHub (May 26, 2024):

@Zambito1 could you try disabling Ollama connection from settings > connections?

@Zambito1 commented on GitHub (May 26, 2024):

Hm, that menu actually has some weird behavior when I try to do that. When I navigate there while netcat (rather than Ollama) is listening, the UI shows both Ollama and OpenAI as disabled.

![image](https://github.com/open-webui/open-webui/assets/7004857/e75a7ae6-6741-469b-97ab-ee44326db5a8)

When the connection attempt to Ollama times out, the UI will change automatically, switching both to be enabled.

![image](https://github.com/open-webui/open-webui/assets/7004857/c479c93e-eefc-4ffc-856c-0d086f62672e)

I did not interact with anything manually between the above screenshots.

If nothing is listening on the port where Open WebUI expects Ollama (neither netcat nor Ollama), both are immediately populated and enabled when I navigate to the menu.

![image](https://github.com/open-webui/open-webui/assets/7004857/5a1f31d3-b85c-4056-bf76-52e9a21626e7)

Note that I did not manually enter the URL between the above two screenshots. I just closed netcat, and then reopened the settings window.

@tjbck commented on GitHub (May 26, 2024):

Just added a fix to include a loading screen, let me know if you encounter the same set of issues!

@Zambito1 commented on GitHub (May 26, 2024):

On commit b7fc37d9 the settings will load for 12 seconds before timing out (two sequential 6-second timeouts).

![image](https://github.com/open-webui/open-webui/assets/7004857/2cdcbad1-55ba-4377-96b8-e3a5680d0247)

![image](https://github.com/open-webui/open-webui/assets/7004857/dc42ea9f-bcbf-45fe-aa46-ccaf3ab7df83)

After the connection times out, the settings will show with the URLs populated and the connections enabled.

![image](https://github.com/open-webui/open-webui/assets/7004857/a8448091-cedc-49b0-8e57-d28018573a75)

If I disable the Ollama connection now, I am able to refresh the page without waiting on a blank screen, unlike in my earlier tests today.
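
(The 12-second figure is consistent with two 6-second probes awaited back to back; probing the endpoints concurrently would cap the worst case at a single timeout. A generic sketch, with the 6-second budget taken from the observation above and the probe bodies invented:)

```python
# Hypothetical sketch: sequential probes pay timeouts additively (~12 s
# worst case); concurrent probes via gather() pay only the longest (~6 s).
import asyncio


async def probe(name: str) -> bool:
    try:
        await asyncio.wait_for(asyncio.sleep(3600), timeout=6)  # hung peer
        return True
    except asyncio.TimeoutError:
        print(f"{name}: timed out")
        return False


async def sequential() -> list[bool]:
    return [await probe("ollama"), await probe("openai")]  # ~12 s total


async def concurrent() -> list[bool]:
    return list(await asyncio.gather(probe("ollama"), probe("openai")))  # ~6 s
```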

@sebdanielsson commented on GitHub (Jun 4, 2024):

Loading times are greatly improved since I started following this issue, but I'm still getting 6 seconds of loading time. Is this intended?

![image](https://github.com/open-webui/open-webui/assets/20663065/8c30b5e3-e478-4c1e-9f2d-dc5435f6f2d8)

@eagerto-learn commented on GitHub (Feb 4, 2025):

I am still experiencing this issue. I use multiple Ollama instances, and when any one of them is unreachable, the whole Open WebUI instance is unusable.

I am getting prompted to login but after successfully logging in I get the same blank screen.

@GeorgiaM-honestly commented on GitHub (Feb 7, 2025):

I am also experiencing this issue, and it is preventing me from using a new Open WebUI install. Setting the environment variable OLLAMA_API_TIMEOUT to 0.1 didn't change anything (docker run [...] -e OLLAMA_API_TIMEOUT=0.1).

@flennic commented on GitHub (Feb 10, 2025):

Encountering the same issue. Ollama is hosted on another machine; when that machine is not reachable, I get a blank screen, and it starts working again as soon as the machine hosting Ollama is reachable via the network. It is my only Ollama instance (so if it is down, none are available at all).

@blyedev commented on GitHub (Feb 16, 2025):

Same issue here, except my Ollama instance is behind an SSH tunnel.

@frugbug commented on GitHub (Feb 19, 2025):

I'm also experiencing this issue, but it only occurs for me when a remote instance is down. With an inaccessible instance on the same host as open-webui, the UI still loads fine.

@intelligo1466 commented on GitHub (Feb 22, 2025):

Same here. Fresh install of the Open WebUI container with Podman on Windows 11. Installation was without error, but when I connect to localhost:3000 the Open WebUI logo flashes for a second, disappears, and then I'm left with a black screen. No response when left- or right-clicking around the screen.

@mcpeixoto commented on GitHub (Mar 14, 2025):

Same here. Did a fresh install and configured Ollama, but after some time (unsure how long) the page broke, and after login I only get a blank page. Using the latest Docker version.

@intelligo1466 commented on GitHub (Mar 22, 2025):

> Same here. Fresh install of the Open WebUI container with Podman on Windows 11. Installation was without error, but when I connect to localhost:3000 the Open WebUI logo flashes for a second, disappears, and then I'm left with a black screen. No response when left- or right-clicking around the screen.

Follow-up: By exercising some patience, I discovered that this apparent disappearance of Open WebUI is only because of the unexpectedly long lag between starting the Open WebUI container and the appearance of Open WebUI on localhost, especially if it's your first time running the container. After a minute, Open WebUI appears and functions as expected.
