[GH-ISSUE #1] docker network access error #11897

Closed
opened 2026-04-19 18:35:17 -05:00 by GiteaMirror · 14 comments
Owner

Originally created by @slychief on GitHub (Oct 17, 2023).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/1

Originally assigned to: @tjbck on GitHub.

Hi,

when running the docker build command via the run.sh script, I get the following error messages:

![grafik](https://github.com/ollama-webui/ollama-webui/assets/831947/b354a33e-4e2f-4cb1-8c5f-263387e13c67)

![grafik](https://github.com/ollama-webui/ollama-webui/assets/831947/66a02f41-9da1-4963-9bfe-5dcc70c0b73c)

When accessing the web GUI I get a "500 Internal Error".

Any advice on how to fix this?

Best regards,
Alexander


@tjbck commented on GitHub (Oct 18, 2023):

Hi Alexander,

Could you verify that the Ollama server is running at `http://127.0.0.1:11434`?

Thanks.


@slychief commented on GitHub (Oct 18, 2023):

Yes. `curl http://127.0.0.1:11434` answers "Ollama is running".


@tjbck commented on GitHub (Oct 18, 2023):

Hi there,

Are you using Windows by any chance? If so, please try adding `--add-host=host.docker.internal:host-gateway` to the docker command. I've recently updated the run.sh file, so please pull the latest code and test it out. Let me know if it resolves the issue.

Alternatively, if the aforementioned solution doesn't work, you can try running Ollama with the following configuration:

```
OLLAMA_HOST=0.0.0.0 OLLAMA_ORIGINS=* ollama serve
```

Keep me posted on your progress.

Thanks!
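For reference, here is one quick way to confirm that the `--add-host` mapping actually lets a container reach Ollama on the host. This is a sketch, not a command from this thread; it assumes Docker 20.10+ (where `host-gateway` was introduced) and the public `curlimages/curl` image:

```bash
# Sketch: run a throwaway container with the same --add-host mapping and
# curl the Ollama port on the host through it.
docker run --rm --add-host=host.docker.internal:host-gateway \
  curlimages/curl -s http://host.docker.internal:11434
# A healthy server answers with the same "Ollama is running" banner that
# curl shows on the host itself.
```

If this check fails while `curl http://127.0.0.1:11434` succeeds on the host, the problem is the container-to-host network path rather than Ollama itself.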


@slychief commented on GitHub (Oct 18, 2023):

Hi,

the docker `--add-host=...` param seems to have solved the 500 error. I can also confirm that the 500 error reappears when Ollama is not running.

Running Ollama with the OLLAMA_HOST environment variable set as recommended also made everything work.

The only issue I'm still having is that it does not work with the Ollama Docker image. I'm running the container using the following command:

```
docker run -d --gpus device=5 -v ollama:/root/.ollama -p 0.0.0.0:11434:11434 --name ollama ollama/ollama
```

The GUI is running and I can select models, but when I submit a prompt, the GUI shows that it is being processed and nothing else happens. It just hangs indefinitely.

Actually, I don't rely on Docker, but I need to restrict Ollama to a specific GPU; otherwise it just grabs all available GPUs in the server. Do you know how to parameterize Ollama to use a specific GPU?
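On the GPU question: a common approach (an assumption on my part, not something verified with Ollama in this thread) is to mask GPUs from the process with `CUDA_VISIBLE_DEVICES` when running bare-metal, since CUDA applications generally honor it:

```bash
# Sketch: expose only GPU 5 to the ollama serve process (bare-metal).
# Whether Ollama honors CUDA_VISIBLE_DEVICES depends on the Ollama build in use.
CUDA_VISIBLE_DEVICES=5 OLLAMA_HOST=0.0.0.0 OLLAMA_ORIGINS=* ollama serve

# Under Docker, the per-device --gpus syntax needs extra quoting so the
# shell passes the inner quotes through to Docker:
#   docker run --gpus '"device=5"' ... ollama/ollama
```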


@deputynl commented on GitHub (Oct 18, 2023):

I'm also struggling. I run the container with the command below. Ollama is running on 10.1.1.8:11434 and I can use the REST API, but when using the WebUI, I get no answer to any request.

```
docker run -d -p 4444:3000 --add-host=host.docker.internal:10.1.1.8 --name ollama-webui --restart always ollama-webui
```


@tjbck commented on GitHub (Oct 18, 2023):

> The only issue that I'm still having is, that it does not work with the ollama docker image. I'm running the container using the following command:

Hi,

I'm glad to hear that the issue has been resolved and that it's now working for you. Regarding the Ollama Docker image, you could try adding `--net=host` to make the container appear as if it's running on the host itself. The command would look something like this:

```
docker run -d --gpus device=5 -v ollama:/root/.ollama --net=host --name ollama ollama/ollama
```

I haven't personally tested the command, so I can't guarantee it will work. Please let me know if the above command works for you.

Feel free to reach out if you need further assistance.


@tjbck commented on GitHub (Oct 18, 2023):

Hi @deputynl,

> Ollama is running on 10.1.1.8:11434 and I can use the REST API, but when using the webui, I get no answer on any request.

Currently, the Ollama WebUI requires Ollama to be accessible from the same IP address as the Ollama WebUI. To address this issue, I'll be adding a feature that allows you to edit the Ollama address via an environment variable soon.

Thank you for your patience, and I'll keep you posted on the progress of this feature. Feel free to reach out if you have any further questions or concerns.


@slychief commented on GitHub (Oct 18, 2023):

> I'm glad to hear that the issue has been resolved and that it's now working for you. Regarding the Ollama docker image, you could try adding --net=host to make the container appear as if it's running on the host itself. The command would look something like this:
>
> ```
> docker run -d --gpus device=5 -v ollama:/root/.ollama --net=host --name ollama ollama/ollama
> ```
>
> I haven't personally tested the command, so I can't guarantee it will work. Please let me know if the above command works for you.
>
> Feel free to reach out if you need further assistance.

Hi,

no, that hasn't solved it. I have posted the question of how to assign specific GPUs to `ollama serve` in the Ollama issue tracker. That would be a solution.

I will experiment with Docker settings as well. If I find a solution, I will post it to this thread.

Thanks for the great support!


@tjbck commented on GitHub (Oct 18, 2023):

Hi @deputynl,

I have just implemented an environment variable that allows you to connect to the model when Ollama is hosted on a different server. You can use the environment variable `-e OLLAMA_ENDPOINT="http://[insert your Ollama address]"` to establish the connection.

For your specific use case, the following code should work:

```bash
docker build -t ollama-webui .
docker run -d -p 4444:3000 --add-host=host.docker.internal:host-gateway -e OLLAMA_ENDPOINT="http://10.1.1.8:11434" --name ollama-webui --restart always ollama-webui
```

Feel free to test it out, and please let me know if you encounter any issues or if you have any other questions.


@deputynl commented on GitHub (Oct 18, 2023):

Thanks for that very quick update. I have just tried it, but sadly the result is the same. It doesn't output any errors; is there anything I can switch on to provide better feedback? Thanks again!


@tjbck commented on GitHub (Oct 18, 2023):

@deputynl

I'm sorry to hear that the environment variable didn't work as expected. Could you please provide the logs by using the following command:

```bash
docker logs ollama-webui
```

This should provide us with all the logs to help us understand what might be causing the issue.

Additionally, were you able to run Ollama with the following command successfully?

```bash
OLLAMA_HOST=0.0.0.0 OLLAMA_ORIGINS=* ollama serve
```

Please let me know the outcome of these steps so I can assist you further.


@tjbck commented on GitHub (Oct 18, 2023):

@deputynl

I also just noticed that one of the files wasn't pushed to the repository. Could you please try the same command with the latest release of the repository by pulling the latest changes?

Let me know if this resolves the issue or if you need any further assistance.

Thanks.


@deputynl commented on GitHub (Oct 19, 2023):

I just tried with the latest release, but still get the same result. The logs have the following content:

```
Listening on 0.0.0.0:3000
http://10.1.1.8:11434
```

I'm running the Ollama container as provided by that project, so I'm not sure how I could change the command. I did, however, run the commands below from inside the ollama-webui container, which I suspect show that the Ollama REST interface is reachable from the ollama-webui container.

```
# set
ENV='prod'
HOME='/root'
HOSTNAME='90eee2e42f0c'
IFS='
'
NODE_VERSION='20.8.1'
OLLAMA_ENDPOINT='http://10.1.1.8:11434'
OPTIND='1'
PATH='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
PPID='0'
PS1='# '
PS2='> '
PS4='+ '
PWD='/app'
TERM='xterm'
YARN_VERSION='1.22.19'

# curl http://10.1.1.8:11434
Ollama is running
```

<!-- gh-comment-id:1770171176 --> @deputynl commented on GitHub (Oct 19, 2023): I just tried with the latest release, but still get the same result. The logs have the following content: Listening on 0.0.0.0:3000 http://10.1.1.8:11434 I'm running the ollama container as provided by that project, so not sure how I could change the command. I did however run the below commands from inside the ollama-webui container, which I suspect show that the ollama rest interface is reachable from the ollama-webui container. # set ENV='prod' HOME='/root' HOSTNAME='90eee2e42f0c' IFS=' ' NODE_VERSION='20.8.1' OLLAMA_ENDPOINT='http://10.1.1.8:11434' OPTIND='1' PATH='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin' PPID='0' PS1='# ' PS2='> ' PS4='+ ' PWD='/app' TERM='xterm' YARN_VERSION='1.22.19' # curl http://10.1.1.8:11434 Ollama is running
Author
Owner

@tjbck commented on GitHub (Oct 19, 2023):

@deputynl

Thank you for sharing the logs and the details of the commands you ran within the ollama-webui container. Since the Ollama server appears to be reachable from the container, the issue is likely related to CORS.

If you were able to access the main page, it indicates that CORS might indeed be causing the problem. To help us further diagnose the issue, could you please provide a screenshot of your console logs from the browser's developer tools?

Additionally, to enable CORS from the Ollama server, it is necessary to run the following command:

```bash
OLLAMA_HOST=0.0.0.0 OLLAMA_ORIGINS=* ollama serve
```

Kindly execute this command for the Ollama server. After making these changes, please attempt to access the Ollama WebUI again to check if the issue is resolved.

Thanks.
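As an aside, the CORS hypothesis can be checked directly with curl by sending an `Origin` header and looking for an `Access-Control-Allow-Origin` header in the reply. This is a sketch, assuming the addresses used earlier in this thread:

```bash
# Sketch: probe CORS on the Ollama server from any machine that can reach it.
# If OLLAMA_ORIGINS=* took effect, the response headers should include
# Access-Control-Allow-Origin; if that header is absent, the setting did
# not take effect.
curl -s -i -H "Origin: http://10.1.1.8:4444" http://10.1.1.8:11434/ | head -n 20
```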


Reference: github-starred/open-webui#11897