[GH-ISSUE #209] Keep getting server connection failed when starting ollama-webui #27505

Closed
opened 2026-04-25 02:12:23 -05:00 by GiteaMirror · 45 comments

Originally created by @zono50 on GitHub (Dec 13, 2023).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/209

I run the docker command - docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway --name ollama-webui --restart always ghcr.io/ollama-webui/ollama-webui:main

and it starts, I can go to localhost:3000 and it pulls up, but as soon as it does, I get a dropdown at the top saying "Server connection failed" and then it won't let me do anything.

If I go to localhost:11434 it shows Ollama is running, and I can run ollama in the terminal, but I can't do anything in ollama-webui.

@tjbck commented on GitHub (Dec 13, 2023):

Hi, could you please provide us with both browser console logs and docker container logs? We cannot diagnose your issue without them. Thanks.
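
For reference, a minimal way to collect the requested container logs (container name taken from the docker run command above):

docker logs --tail 200 ollama-webui

Browser console logs can usually be copied from the devtools Console tab (F12 in most browsers).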

@zono50 commented on GitHub (Dec 13, 2023):

Failed to load resource: the server responded with a status of 500 (Internal Server Error)
SyntaxError: Unexpected token '<', "<!doctype "... is not valid JSON

The command I run to start the program is:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway --name ollama-webui --restart always ghcr.io/ollama-webui/ollama-webui:main

Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/urllib3/connection.py", line 203, in _new_conn
sock = connection.create_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/urllib3/util/connection.py", line 85, in create_connection
raise err
File "/usr/local/lib/python3.11/site-packages/urllib3/util/connection.py", line 73, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 790, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 496, in _make_request
conn.request(
File "/usr/local/lib/python3.11/site-packages/urllib3/connection.py", line 395, in request
self.endheaders()
File "/usr/local/lib/python3.11/http/client.py", line 1281, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.11/http/client.py", line 1041, in _send_output
self.send(msg)
File "/usr/local/lib/python3.11/http/client.py", line 979, in send
self.connect()
File "/usr/local/lib/python3.11/site-packages/urllib3/connection.py", line 243, in connect
self.sock = self._new_conn()
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/urllib3/connection.py", line 218, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fe3a0425610>: Failed to establish a new connection: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 844, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='host.docker.internal', port=11434): Max retries exceeded with url: /api/tags (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3a0425610>: Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/flask/app.py", line 1455, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/flask/app.py", line 869, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/flask_cors/extension.py", line 176, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/flask/app.py", line 867, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/flask/app.py", line 852, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/backend/apps/ollama/main.py", line 63, in proxy
target_response = requests.request(
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='host.docker.internal', port=11434): Max retries exceeded with url: /api/tags (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3a0425610>: Failed to establish a new connection: [Errno 111] Connection refused'))
INFO: 172.17.0.1:56854 - "GET /tags HTTP/1.1" 500 Internal Server Error

@tjbck commented on GitHub (Dec 14, 2023):

It seems like the Ollama instance is not reachable from the Docker container. Could you tell us more about the system you're running Ollama/Ollama WebUI on, as well as how you installed Docker on your machine?

@zono50 commented on GitHub (Dec 14, 2023):

I installed Ollama originally, and it worked fine, but then I installed a program called quivr which I think was trying to run on 3000 as well. I ended up removing that program and reinstalling this one. I'm using Manjaro Linux.

@tjbck commented on GitHub (Dec 14, 2023):

Hmm, strange. Could you try to curl Ollama from inside the webui container and see if it has access to it? Also, just to make sure: both Ollama and Ollama WebUI are running on the same machine, correct? Thanks.

@zono50 commented on GitHub (Dec 14, 2023):

Yes, if I go to localhost:11434 it says Ollama is running, and I can run it from my terminal. How do I curl Ollama from inside the webui container? I even reinstalled Ollama, set open rules in my firewall, and got the same issue.

@tjbck commented on GitHub (Dec 14, 2023):

Refer to here: https://stackoverflow.com/questions/30172605/how-do-i-get-into-a-docker-containers-shell
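
In short, a sketch of the suggested check (container name from the docker run command earlier in this thread; host.docker.internal only resolves in the container if it was started with --add-host or on Docker Desktop, and curl may need to be installed in the image first):

docker exec -it ollama-webui /bin/bash
# inside the container:
curl http://host.docker.internal:11434
# expected output if Ollama is reachable: "Ollama is running"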

@ElieFrancis1 commented on GitHub (Dec 14, 2023):

Hello, I think I have the same problem:

  • I have the latest version of ollama running on a server
  • I started ollama-webui using the following command: docker run -d -p 3000:8080 -e OLLAMA_API_BASE_URL=http://myserver.com:11434 --name ollama-webui --restart always ghcr.io/ollama-webui/ollama-webui:main
  • I installed curl in the container and used it to check access to ollama: curl http://myserver.com:11434
  • I got the following message: Ollama is running

In the browser, when I open http://myserver.com:3000, I get the message "Server connection failed".

@Clovis-krz commented on GitHub (Dec 14, 2023):

Same for me, OLLAMA_API_BASE_URL doesn't seem to work when starting the docker container. I ended up overwriting the URL in the settings on the webui, but I have to do that for every computer.

@sartilas commented on GitHub (Dec 14, 2023):

Same for me, and in the UI I have: "Uh-oh! There was an issue connecting to Ollama."
curl Ollama from inside the webui container:
"root@41938ad91712:/app/backend# curl http://localhost:11434/
curl: (7) Failed to connect to localhost port 11434: Connection refused"

But on the host: "Ollama is running"

@zono50 commented on GitHub (Dec 14, 2023):

I can do the docker exec command, but when I get into the container using /bin/bash, it doesn't recognize any programs: no curl, pacman, wget, or ping, nothing.

@zono50 commented on GitHub (Dec 14, 2023):

Got this information from Clovis:

version: '3.6'

services:
  ollama-webui:
    build:
      context: .
      args:
        OLLAMA_API_BASE_URL: '/ollama/api'
      dockerfile: Dockerfile
    image: ollama-webui:latest
    container_name: ollama-webui
    environment:
      - "OLLAMA_API_BASE_URL=http://localhost:11434/api"
      # Uncomment below for WIP: Auth support
      # - "WEBUI_AUTH=TRUE"
      # - "WEBUI_DB_URL=mongodb://root:example@ollama-webui-db:27017/"
      # - "WEBUI_JWT_SECRET_KEY=SECRET_KEY"
    restart: unless-stopped
    network_mode: "host"

volumes:
  ollama: {}

I did notice with this setup that the option to download and use model files disappears when you change the port.

@tjbck commented on GitHub (Dec 14, 2023):

Hi @efrancis59, it looks like you might've made a mistake setting the OLLAMA_API_BASE_URL env var; the URL should include /api at the end (e.g. OLLAMA_API_BASE_URL=http://myserver.com:11434/api).

@tjbck commented on GitHub (Dec 14, 2023):

> Same for me, OLLAMA_API_BASE_URL doesn't seem to work when starting the docker container. I ended up overwriting the URL in the settings on the webui, but I have to do that for every computer.

@Clovis-krz Could you also provide us with which command you used to install, as well as the ollama setup you have? Thanks.

@tjbck commented on GitHub (Dec 14, 2023):

> Same for me, and in the UI I have: "Uh-oh! There was an issue connecting to Ollama." curl Ollama from inside the webui container: "root@41938ad91712:/app/backend# curl http://localhost:11434/ curl: (7) Failed to connect to localhost port 11434: Connection refused"
>
> But on the host: "Ollama is running"

@sartilas, Please refer to here: https://stackoverflow.com/questions/24319662/from-inside-of-a-docker-container-how-do-i-connect-to-the-localhost-of-the-mach

Also, if you could provide us with the setup you have, it would help us tremendously.

@tjbck commented on GitHub (Dec 14, 2023):

Here are some potential solutions/relevant issues I found by googling:

  • https://stackoverflow.com/questions/75237114/max-retries-exceeded-with-url-failed-to-establish-a-new-connection-errno-111
  • https://forums.docker.com/t/genai-stack-connection-error-with-host-docker-internal-port-11434/137993/2
  • https://forums.docker.com/t/connection-refused-on-host-docker-internal/136925/2
  • https://forums.docker.com/t/host-docker-internal-in-production-environment/137507/3
  • https://github.com/PrefectHQ/prefect/issues/4963

@zono50 commented on GitHub (Dec 14, 2023):

I previously updated my version of Ollama, but that still didn't fix the issue, so I stopped the process on 11434 and Ollama, removed ollama-webui, did a git clone, and ran docker compose up -d --build. It fixed my issue, and I am back on port 3000 enjoying the good life.

@iamyb commented on GitHub (Dec 15, 2023):

Same here. The API request to http://localhost:3000/ollama/api/tags failed with Bad Request reported. The API request to http://localhost:3000/api/v1/ is OK.

@iamyb commented on GitHub (Dec 15, 2023):

> Same here. The API request to http://localhost:3000/ollama/api/tags failed with Bad Request reported. The API request to http://localhost:3000/api/v1/ is OK.

Worked it out with the command below:
docker run -d --network=host -e OLLAMA_API_BASE_URL=http://localhost:11434/api --name ollama-webui --restart always ollama-webui

@djmaze commented on GitHub (Dec 15, 2023):

AFAICS, setting the OLLAMA_API_BASE_URL env var on the prebuilt docker image cannot work, because the default URL seems baked into the frontend HTML during the build process.

The way the build currently works here, you have to build your own docker image with --build-arg OLLAMA_API_BASE_URL=http://your-server.com:11434/api and then use that.

This is a design fault and needs to be fixed.
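
For anyone who wants to try the build djmaze describes, a sketch (the compose file quoted above passes the same build arg; the server URL is illustrative):

git clone https://github.com/ollama-webui/ollama-webui.git
cd ollama-webui
docker build --build-arg OLLAMA_API_BASE_URL=http://your-server.com:11434/api -t ollama-webui .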

@tjbck commented on GitHub (Dec 15, 2023):

Hi @djmaze, FYI it's not a design fault and it's working as it should. By registering the OLLAMA_API_BASE_URL env var in the docker container, you essentially create a backend reverse proxy link, redirecting the hardcoded [your webui url]/ollama/api route to [your ollama url]/api. Please refer to here: https://github.com/ollama-webui/ollama-webui#project-components. Thanks.
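
In other words, with the env var set, the backend proxies requests roughly like this (a sketch; hostnames are illustrative):

# the frontend calls the hardcoded route on the WebUI backend:
curl http://localhost:3000/ollama/api/tags
# the backend forwards the request to the configured Ollama instance:
#   http://myserver.com:11434/api/tags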

@djmaze commented on GitHub (Dec 16, 2023):

Oh, well, sorry, then I got that wrong.

As I had the same error message in the frontend, here is a snippet of my docker logs with the original setup. I see the following error:

INFO:     10.0.1.5:37178 - "GET /tags HTTP/1.1" 200 OK
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 1106, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.11/site-packages/starlette/middleware/base.py", line 109, in __call__
    await response(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 270, in __call__
    async with anyio.create_task_group() as task_group:
  File "/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 597, in __aexit__
    raise exceptions[0]
  File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 273, in wrap
    await func()
  File "/usr/local/lib/python3.11/site-packages/starlette/middleware/base.py", line 134, in stream_response
    return await super().stream_response(send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 265, in stream_response
    await send({"type": "http.response.body", "body": chunk, "more_body": True})
  File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 159, in _send
    await send(message)
  File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 560, in send
    raise RuntimeError("Response content longer than Content-Length")
RuntimeError: Response content longer than Content-Length

Not sure why this happens. Maybe there is a problem with HTTPS URLs (my Ollama is available via HTTPS)?

@tjbck commented on GitHub (Dec 16, 2023):

@djmaze Could you provide us with the commands you used to install? Also, is Ollama running on the same server as Ollama WebUI?

@djmaze commented on GitHub (Dec 17, 2023):

It's different docker containers on the same machine. For Ollama WebUI, I used the original docker image and command from the documentation (replaced my domain with example.com in the snippets):

docker run -d -p 3000:8080 -e OLLAMA_API_BASE_URL=https://ollama.example.com/api --name ollama-webui ghcr.io/ollama-webui/ollama-webui:main

I just realized the logs now don't show the error anymore, only this:

INFO:     172.17.0.1:59804 - "GET /tags HTTP/1.1" 400 Bad Request

When calling https://webui.example.com/api/tags manually, the page shows the following error message:

{"detail":"Server Connection Error","message":"404 Client Error: Not Found for url: https://ollama.example.com/api/tags"}

And I don't see any access attempts in the Ollama server log, so the proxy connection seems not to be established at all.

(Getting the url via curl from within the webui container works, btw.)

@tjbck commented on GitHub (Dec 19, 2023):

@djmaze If you can join our Discord server and send me a pm with your ollama url, I'll personally take a look. Thanks.

Please check our TROUBLESHOOTING.md: https://github.com/ollama-webui/ollama-webui/blob/main/TROUBLESHOOTING.md

@m-hoseyny commented on GitHub (Jan 29, 2024):

Hello,
I just joined the Discord server and asked about the problem. However, I didn't get any response. How can I solve this problem?

@themw123 commented on GitHub (Feb 8, 2024):

Same for me. Docker logs show:
INFO: 100.79.239.85:1866 - "GET /ollama/api/version HTTP/1.1" 500 Internal Server Error
And in the browser console I see it's not able to call the version and tags endpoints (500).
I am using the newest versions of ollama and ollama-webui.

my compose:

version: '3.3'
services:
    ollama-webui:
        volumes:
            - './ollama/webui:/app/backend/data'
        network_mode: host
        container_name: ollama-webui
        image: ghcr.io/ollama-webui/ollama-webui:main
        restart: unless-stopped

Edit: Very interesting, with docker run it's now working:
docker run -d --network=host -v absolute_path_to/ollama/webui:/app/backend/data -e OLLAMA_API_BASE_URL=http://127.0.0.1:11434/api --name ollama-webui --restart always ghcr.io/ollama-webui/ollama-webui:main

Edit: Now I am getting 500: Internal Error when clicking on the admin panel.

@welchste commented on GitHub (Mar 8, 2024):

I keep getting this as well. Not using any containers, everything manually installed:

INFO:     127.0.0.1:47524 - "POST /ollama/urls/update HTTP/1.1" 500 Internal Server Error
get_all_models
INFO:     127.0.0.1:47524 - "POST /ollama/urls/update HTTP/1.1" 500 Internal Server Error
get_all_models
INFO:     127.0.0.1:47524 - "POST /ollama/urls/update HTTP/1.1" 500 Internal Server Error
get_all_models
INFO:     127.0.0.1:47524 - "POST /ollama/urls/update HTTP/1.1" 500 Internal Server Error

@targinosilveira commented on GitHub (Mar 9, 2024):

> I keep getting this as well. Not using any containers, everything manually installed:
>
> INFO:     127.0.0.1:47524 - "POST /ollama/urls/update HTTP/1.1" 500 Internal Server Error
> get_all_models
> INFO:     127.0.0.1:47524 - "POST /ollama/urls/update HTTP/1.1" 500 Internal Server Error
> get_all_models
> INFO:     127.0.0.1:47524 - "POST /ollama/urls/update HTTP/1.1" 500 Internal Server Error
> get_all_models
> INFO:     127.0.0.1:47524 - "POST /ollama/urls/update HTTP/1.1" 500 Internal Server Error

I am getting the same problem. Ollama and Open WebUI were installed manually in a VM. When I run Ollama directly or through a CLI client I have no problems, only when I go through Open WebUI.

INFO: 127.0.0.1:47270 - "POST /ollama/api/chat HTTP/1.1" 500 Internal Server Error
http://localhost:11434
INFO: 127.0.0.1:47270 - "POST /ollama/api/generate HTTP/1.1" 500 Internal Server Error

@justinh-rahb commented on GitHub (Mar 9, 2024):

@welchste @targinosilveira

If you've installed directly, check your .env file against the .env.example file and be sure you've updated your variable names. OLLAMA_API_BASE_URL -> OLLAMA_BASE_URL
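
A sketch of the rename (values illustrative; check your own .env against .env.example):

# before (old variable name, pointing at the /api path)
OLLAMA_API_BASE_URL=http://localhost:11434/api
# after (new variable name, base URL without /api)
OLLAMA_BASE_URL=http://localhost:11434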

@welchste commented on GitHub (Mar 9, 2024):

> @welchste @targinosilveira
>
> If you've installed directly, check your .env file against the .env.example file and be sure you've updated your variable names. OLLAMA_API_BASE_URL -> OLLAMA_BASE_URL

Hey, I tried that. Then I actually went the Docker route, with no luck. Then I used the localhost option with Docker and still no luck. I then installed Ollama on a remote server, with no luck.

Finally I tried a different front-end project, and that works. So at least I ruled out any firewall or Ollama-specific issues.

This other project is a placeholder, so I look forward to being able to use Open-WebUI in the future.

@justinh-rahb commented on GitHub (Mar 9, 2024):

> @welchste @targinosilveira
> If you've installed directly, check your .env file against the .env.example file and be sure you've updated your variable names. OLLAMA_API_BASE_URL -> OLLAMA_BASE_URL
>
> Hey, I tried that. Then I actually went the Docker route, with no luck. Then I used the localhost option with Docker and still no luck. I then installed Ollama on a remote server, with no luck.
>
> Finally I tried a different front-end project, and that works. So at least I ruled out any firewall or Ollama-specific issues.
>
> This other project is a placeholder, so I look forward to being able to use Open-WebUI in the future.

Mind me asking, what had you set your OLLAMA_BASE_URL to through these attempts?

@welchste commented on GitHub (Mar 12, 2024):

> @welchste @targinosilveira
> If you've installed directly, check your .env file against the .env.example file and be sure you've updated your variable names. OLLAMA_API_BASE_URL -> OLLAMA_BASE_URL
>
> Hey, I tried that. Then I actually went the Docker route, with no luck. Then I used the localhost option with Docker and still no luck. I then installed Ollama on a remote server, with no luck.
> Finally I tried a different front-end project, and that works. So at least I ruled out any firewall or Ollama-specific issues.
> This other project is a placeholder, so I look forward to being able to use Open-WebUI in the future.
>
> Mind me asking, what had you set your OLLAMA_BASE_URL to through these attempts?

Hey @justinh-rahb, wanted to close this loop. It was user error: in the UI I kept adding the port and the /api URI, but in the base Ollama URL I was not including the port. Tried again today, put OLLAMA_BASE_URL=IP.V.4.Address:11434, and that did the trick.

@ilhooq commented on GitHub (May 2, 2024):

I am facing the same problem. By default, Ollama only listens on the local IP address 127.0.0.1, which means that processes inside the Docker container cannot reach host port 11434.

To resolve this issue, you can follow these steps:

  1. Open the ollama unit file located at /etc/systemd/system/ollama.service.
  2. In the [Service] section of the file, add the following line: Environment="OLLAMA_HOST=0.0.0.0".
  3. Reload systemd: sudo systemctl daemon-reload
  4. Restart the service: sudo systemctl restart ollama.service

This configuration change will allow Ollama to listen on all available network interfaces (0.0.0.0), not just the loopback address.

You can find more detailed information in this issue discussion: https://github.com/ollama/ollama/issues/2603.
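
Concretely, the unit excerpt and reload commands from the steps above would look something like this (a sketch; see the next comment for a safer way to apply the same change):

# /etc/systemd/system/ollama.service (excerpt after the edit)
[Service]
Environment="OLLAMA_HOST=0.0.0.0"

sudo systemctl daemon-reload
sudo systemctl restart ollama.service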

@justinh-rahb commented on GitHub (May 2, 2024):

> I am facing the same problem. By default, Ollama only listens on the local IP address 127.0.0.1, which means that processes inside the Docker container cannot reach host port 11434.
>
> To resolve this issue, you can follow these steps:
>
>   1. Open the ollama unit file located at /etc/systemd/system/ollama.service.
>   2. In the [Service] section of the file, add the following line: Environment="OLLAMA_HOST=0.0.0.0".
>   3. Reload systemd: sudo systemctl daemon-reload
>   4. Restart the service: sudo systemctl restart ollama.service
>
> This configuration change will allow Ollama to listen on all available network interfaces (0.0.0.0), not just the loopback address.
>
> You can find more detailed information in this issue discussion: ollama/ollama#2603.

It is NOT recommended to edit .service files directly; you should use a local override by doing systemctl edit ollama.service instead.

@vanillagreencom commented on GitHub (Jun 5, 2024):

For me and anyone else experiencing the 500 internal server error on POST to Ollama: it was because I had my context window setting too high, since I was using a higher one for OpenAI.

Maybe there should be separate settings like this depending on what model you are using?

@ayang commented on GitHub (Jun 14, 2024):

OLLAMA_HOST=0.0.0.0:11434 ollama serve
This works!
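
A quick way to verify from the WebUI container afterwards (a sketch; assumes the container was started with --add-host=host.docker.internal:host-gateway and has curl installed):

docker exec -it ollama-webui curl http://host.docker.internal:11434
# expect: "Ollama is running"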

@milkevolii commented on GitHub (Jul 2, 2024):

Admin Panel -> Connections -> Changing Ollama API from http://localhost:11434 to http://host.docker.internal:11434 worked for me.

@robin536180 commented on GitHub (Jul 5, 2024):

> Admin Panel -> Connections -> Changing Ollama API from http://localhost:11434 to http://host.docker.internal:11434 worked for me.

That's the final answer, and it works!

@Bodo-von-Greif commented on GitHub (Aug 30, 2024):

> OLLAMA_HOST=0.0.0.0:11434 ollama serve
> this works!

My installation, which worked without problems, broke when I switched to the systemd installation of Ollama.
This hint saved my configuration, and BTW I also included HTTP_PROXY, HTTPS_PROXY, and NO_PROXY.
Thanks a lot!

@justinh-rahb commented on GitHub (Aug 30, 2024):

@Bodo-von-Greif Setting OLLAMA_HOST for systemd is covered by their FAQ:
https://github.com/ollama/ollama/blob/main/docs/faq.md#setting-environment-variables-on-linux

@ipipe-hellis commented on GitHub (Oct 26, 2024):

> Admin Panel -> Connections -> Changing Ollama API from http://localhost:11434 to http://host.docker.internal:11434 worked for me.

This also worked for me.

@prachetas commented on GitHub (Nov 8, 2024):

If you have installed Ollama on Windows:
open a command prompt and run ipconfig,
then take the IPv4 address and add it in Admin Panel -> Connections, e.g. http://192.XXX.XXX.XXX:11434

@ibinasaker commented on GitHub (Nov 24, 2024):

For me the flag "--add-host=host.docker.internal:host-gateway" was the answer. Note that this gives the container a gateway to the host network, which is a bit risky in enterprise/prod environments.

The whole command for me was:

sudo docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

@mjhumphrey4 commented on GitHub (Nov 30, 2024):

I used the following fix:

  • Allow Ollama to listen for connections on all interfaces, not just 127.0.0.1. This is because host.docker.internal:host-gateway will use the public-facing interface on your host. Allowing Ollama to accept connections from all interfaces enables Docker to access Ollama on your host. As a result, if you try accessing curl host.docker.internal:11434/api in the Open WebUI Docker container environment, you will not get a result, but after allowing Ollama to listen for connections on all interfaces, you will receive Ollama is running. Below are some test logs.
<-- Enter the Open WebUI to test access to the Ollama Port 11434 -->

$ sudo docker exec -it open-webui /bin/bash
root@704ff21e6d24:/app/backend# curl host.docker.internal:11434 
curl: (7) Failed to connect to host.docker.internal port 11434 after 0 ms: Couldn't connect to server

<-- Edit Ollama to allow connections to all interfaces -->

$ sudo systemctl edit ollama.service

[Service]
Environment="OLLAMA_HOST=0.0.0.0"

$ sudo systemctl restart ollama.service

$ sudo docker exec -it open-webui /bin/bash

root@704ff21e6d24:/app/backend# curl host.docker.internal:11434/api

root@704ff21e6d24:/app/backend# curl host.docker.internal:11434

Ollama is runningroot@704ff21e6d24:/app/backend# 

Refresh your Open WebUI interface and verify that you can access Ollama. You can then safely deploy Open WebUI using any of the official installation commands, such as: sudo docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Don't forget to allow the port through your firewall config.
