"No models": containerized Open WebUI cannot reach non-containerized Ollama on 127.0.0.1:11434 via host.docker.internal #2267

Closed
opened 2025-11-11 15:03:50 -06:00 by GiteaMirror · 2 comments
Owner

Originally created by @huornlmj on GitHub (Oct 3, 2024).

Bug Report

No models are shown in the web UI because the containerized Open WebUI cannot reach the non-containerized Ollama instance on 127.0.0.1:11434 via host.docker.internal.

Installation Method

Ollama installed via the Ollama installation script, non-containerized.
Ollama listening locally on 127.0.0.1:11434

Open WebUI installed using the provided installer command:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Environment

  • Open WebUI Version: Whatever comes from ghcr.io/open-webui/open-webui:main

  • Ollama (if applicable): 0.3.12

  • Operating System: Ubuntu 22.04.5 LTS

  • Browser (if applicable): N/A

Confirmation:

  • I have read and followed all the instructions provided in the README.md.
  • I am on the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided the exact steps to reproduce the bug in the "Steps to Reproduce" section below.

Expected Behavior:

Local models show up in the web UI.

Actual Behavior:

No models show up in the web UI.

Description

Bug Summary:
No models are shown in the web UI because the containerized Open WebUI cannot reach the non-containerized Ollama instance on 127.0.0.1:11434 via host.docker.internal.

Reproduction Details

  1. Install Ollama using the Ollama installer script (non-containerized).
  2. Confirm that it is listening locally and that I can curl it (via http though - NOT https):
$ curl http://127.0.0.1:11434
Ollama is running
  3. Once installed, confirm that models are available:
$ ollama ls
NAME                     ID              SIZE      MODIFIED
deepseek-v2.5:latest     409b2dd8a3c4    132 GB    46 hours ago
llama3:70b               786f3184aec0    39 GB     46 hours ago
starcoder2:15b           21ae152d49e0    9.1 GB    2 days ago
deepseek-coder-v2:16b    63fb193b3a9b    8.9 GB    2 days ago
llama3.1:70b             c0df3564cfe8    39 GB     2 days ago
  4. Install Open WebUI with docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
  5. Visit the web UI and observe that there are no models showing.
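The failure can also be reproduced directly from inside the container rather than through the web UI. A quick sketch, assuming the container name open-webui from the docker run command above (curl may need to be installed in the image first, as with ping and nmap later in this report):

```shell
# Try the same health-check curl from inside the Open WebUI container,
# aimed at the host alias the container is configured to use.
docker exec -it open-webui curl -s --max-time 5 http://host.docker.internal:11434
# This fails here, matching the container logs: nothing is listening
# on the Docker bridge gateway (172.17.0.1) at port 11434.
```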

Logs and Screenshots

Screenshot: https://github.com/user-attachments/assets/f6f87ce8-ebfb-4e3c-bd45-4beae31f090c

Docker Container Logs:

https://github.com/open-webui/open-webui

Running migrations
INFO:     172.17.0.1:52582 - "GET / HTTP/1.1" 304 Not Modified
INFO  [open_webui.apps.openai.main] get_all_models()
INFO  [open_webui.apps.ollama.main] get_all_models()
ERROR [open_webui.apps.ollama.main] Connection error: Cannot connect to host host.docker.internal:11434 ssl:default [Connect call failed ('172.17.0.1', 11434)]
INFO:     172.17.0.1:52582 - "GET /static/splash.png HTTP/1.1" 200 OK
INFO  [open_webui.apps.openai.main] get_all_models()
INFO  [open_webui.apps.ollama.main] get_all_models()
ERROR [open_webui.apps.ollama.main] Connection error: Cannot connect to host host.docker.internal:11434 ssl:default [Connect call failed ('172.17.0.1', 11434)]
INFO:     172.17.0.1:52582 - "GET /api/config HTTP/1.1" 200 OK
INFO  [open_webui.apps.openai.main] get_all_models()
INFO  [open_webui.apps.ollama.main] get_all_models()
ERROR [open_webui.apps.ollama.main] Connection error: Cannot connect to host host.docker.internal:11434 ssl:default [Connect call failed ('172.17.0.1', 11434)]

Additional Information

The issue is that Ollama is listening only on 127.0.0.1:11434.

$ sudo netstat -plant | grep 11434
tcp        0      0 127.0.0.1:11434         0.0.0.0:*               LISTEN      8488/ollama

And host.docker.internal is reachable from inside the Open WebUI container (verified by installing the ping command):

# ping host.docker.internal
PING host.docker.internal (172.17.0.1) 56(84) bytes of data.
64 bytes from host.docker.internal (172.17.0.1): icmp_seq=1 ttl=64 time=0.096 ms
64 bytes from host.docker.internal (172.17.0.1): icmp_seq=2 ttl=64 time=0.043 ms
64 bytes from host.docker.internal (172.17.0.1): icmp_seq=3 ttl=64 time=0.043 ms

But you can see that it resolves to 172.17.0.1. And if I add nmap to the Open WebUI container and check whether port 11434 is open on that 172.17.0.1 interface, I see that it is not - because on the host, port 11434 is actually listening on 127.0.0.1 only.

root@5ed91454727e:/app/backend# nmap host.docker.internal -p 11434
Starting Nmap 7.93 ( https://nmap.org ) at 2024-10-03 15:57 UTC
Nmap scan report for host.docker.internal (172.17.0.1)
Host is up (0.000048s latency).

PORT      STATE  SERVICE
11434/tcp closed unknown
MAC Address: 02:42:BC:DD:FD:03 (Unknown)

Nmap done: 1 IP address (1 host up) scanned in 0.18 seconds
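A related check can be run from the host side to confirm which gateway IP the host-gateway alias maps to. A sketch, assuming Docker's default network name `bridge`:

```shell
# Print the default bridge network's gateway IP - the address that
# --add-host=host.docker.internal:host-gateway binds the name to.
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'
# On this system this is 172.17.0.1, matching the ping/nmap output above.
```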

By the way, I am not prepared to use host networking as a solution, because I intend to put an nginx reverse proxy in front of the Open WebUI port so that I can offload TLS termination to it. With the host network approach, Open WebUI's port 8080 would be exposed on all interfaces, and I would then have to start faffing with the OS iptables/ufw firewall to compensate. That compensation is the last option here.
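For reference, the fix usually suggested in Ollama's documentation for this situation is to make the host's Ollama daemon listen beyond loopback via the OLLAMA_HOST environment variable, set through a systemd override. A sketch (the bridge IP 172.17.0.1 is from this system; adjust to yours):

```shell
# Add an environment override to the ollama systemd service.
sudo systemctl edit ollama.service
# In the editor that opens, add:
#
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0:11434"
#
# 0.0.0.0 exposes the port on all interfaces, so firewall rules may be
# needed. Binding to the bridge gateway instead
# (Environment="OLLAMA_HOST=172.17.0.1:11434") limits the exposure, but
# then 127.0.0.1:11434 no longer answers on the host.
sudo systemctl daemon-reload
sudo systemctl restart ollama
```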

Note

If the bug report is incomplete or does not follow the provided instructions, it may not be addressed. Please ensure that you have followed the steps outlined in the README.md and troubleshooting.md documents, and provide all necessary information for us to reproduce and address the issue. Thank you!


@stumblebot commented on GitHub (Oct 3, 2024):

Similar issue here on Ubuntu 22.04.4 and ollama 0.3.9.


@huornlmj commented on GitHub (Oct 3, 2024):

I have a bodge while this is being looked at: use socat to pipe TCP connections from your Docker bridge gateway (172.17.0.1 on my system) through to 127.0.0.1:11434. As long as the socat tunnel is up, the Open WebUI container sees port 11434 open on host.docker.internal, and socat forwards the traffic over to the host's 127.0.0.1:11434. It all falls apart if the socat process is killed, hence the trailing '&' to keep it running in the background.

$ socat TCP4-LISTEN:11434,bind=172.17.0.1,fork,reuseaddr TCP4:127.0.0.1:11434 &
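If the socat bodge is kept around, it can be made to survive logouts and reboots with a small systemd unit instead of a backgrounded shell job. A sketch (the unit name ollama-socat is illustrative, and the bridge IP should match your system):

```shell
# Illustrative systemd unit wrapping the socat forward above.
sudo tee /etc/systemd/system/ollama-socat.service >/dev/null <<'EOF'
[Unit]
Description=Forward Docker bridge gateway :11434 to Ollama on loopback
After=network-online.target docker.service

[Service]
ExecStart=/usr/bin/socat TCP4-LISTEN:11434,bind=172.17.0.1,fork,reuseaddr TCP4:127.0.0.1:11434
Restart=always

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now ollama-socat.service
```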
Reference: github-starred/open-webui#2267