basic docker set up question #490
Originally created by @LaptopDev on GitHub (Mar 17, 2024).
I am accessing the WebUI remotely, running both Ollama and Open WebUI on my remote server. I want models to run on my powerful server and to interact with them through the WebUI.
The instructions do not explicitly say which docker run command to use for this configuration. There are two to choose from: one for connecting to a local Ollama server, which I chose, and another for connecting to a remote Ollama server.
I cannot connect to Ollama, and I cannot add models. On my server, ollama lists 2 available models in the terminal, but they do not show up in Open WebUI. Do I need to use the other docker run command?
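For reference, a sketch of the two commands in question, with flags and image name as given in the Open WebUI README at the time (your-ollama-host is a placeholder, not from the original thread):

```bash
# Variant 1: Ollama on the same machine as the container.
# host.docker.internal lets the container reach services on the Docker host.
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main

# Variant 2: Ollama on a different server, reached via OLLAMA_BASE_URL.
docker run -d -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://your-ollama-host:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```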
@justinh-rahb commented on GitHub (Mar 17, 2024):
The method of communication between the two containers is the same whether they're hosted on a remote server or on your local machine, provided they're deployed together. The only thing that changes is how you reach the WebUI: instead of http://localhost:3000, you connect via the server's IP address or hostname. Given this setup, I'd advise installing with Docker Compose. It simplifies managing the containers as a single unit and makes full use of Docker's capabilities in a server environment.
https://docs.openwebui.com/getting-started/#using-docker-compose
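The compose file on that page pairs the two services roughly like this (a sketch; the service names, volume layout, and OLLAMA_BASE_URL wiring are assumptions based on the linked docs):

```yaml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      # Containers on the same compose network reach each other by service name.
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama
    restart: always

volumes:
  ollama:
  open-webui:
```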
@LaptopDev commented on GitHub (Mar 17, 2024):
It seems you're under the impression that I ran both in containers. But I technically did follow the instructions, since my binary execution of 'ollama serve' is "on my computer".
What I take from your reply is that this means they are not "deployed together" and must be (making the quick start documentation inapplicable for those running Ollama as a binary). Do I have that right?
I am connecting via the server's IP address fine, and I assume I would do the same with a setup involving docker-compose.yml/compose.yml, per the documentation you linked. But it seems there is still some misunderstanding about my setup configuration and what is causing it to fail to connect to Ollama.
@justinh-rahb commented on GitHub (Mar 17, 2024):
Now I understand your deployment scenario better.
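A sketch of the command that fits this scenario (Ollama as a host binary, Open WebUI in Docker on the same server), assuming the host.docker.internal mapping and OLLAMA_BASE_URL variable from the Open WebUI docs:

```bash
# Point the container at the Ollama binary running on the Docker host.
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```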
@LaptopDev commented on GitHub (Mar 17, 2024):
This doesn't seem to work for me.
But as you can see, the process is running on said port. Does the executable name have to match 'ollama'?
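A quick way to check which interface Ollama is actually bound to, assuming ss from iproute2 is available:

```bash
# Show TCP listeners on Ollama's default port 11434.
ss -tlnp | grep 11434
# A local address of 127.0.0.1:11434 means loopback only: containers and
# remote hosts can't reach it. 0.0.0.0:11434 (or *:11434) means all interfaces.
```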
@justinh-rahb commented on GitHub (Mar 17, 2024):
Is your Ollama running on a server, or on your desktop/laptop? I'm getting mixed messages, which makes it hard to assist. That said, your real problem is obvious from the error output above: your Ollama is only listening on the 127.0.0.1 interface. You need it to listen on all interfaces by setting OLLAMA_HOST=0.0.0.0. Read their FAQ:
https://github.com/ollama/ollama/blob/main/docs/faq.md#setting-environment-variables-on-linux
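Per that FAQ, when Ollama runs as a systemd service the variable goes in a unit override; when you launch the binary by hand, set it inline. A sketch of both, following the linked instructions:

```bash
# systemd service: add the variable in an override, then restart.
sudo systemctl edit ollama.service
#   In the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl daemon-reload
sudo systemctl restart ollama

# Running the binary directly instead:
OLLAMA_HOST=0.0.0.0 ollama serve
```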
@LaptopDev commented on GitHub (Mar 17, 2024):
I see. Thank you for your patience! I got it working now.