[GH-ISSUE #6072] Unable to get Ollama and OpenwebUI working at all #3794

Closed
opened 2026-04-12 14:37:39 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @nicholhai on GitHub (Jul 30, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6072

What is the issue?

Hello All,

Does anyone have instructions for getting Ollama and WebUI working on a tower computer with the following specs: Intel Core i7-13700F 2.1GHz, GeForce RTX 4060 Ti 64GB, 64GB DDR5? I tried everything below on Ubuntu Server 24.04, but I can install any OS if necessary.

I have it running perfectly on my Mac Studio, but I can't replicate it on this standalone machine.

Here is what I have done so far

Followed instructions from: https://docs.openwebui.com/getting-started/

I tried all the different methods, both manual and with Docker, with all the options for each. No luck.

I had two issues open:
https://github.com/ollama/ollama/issues/5892
https://github.com/ollama/ollama/issues/5925

OS

No response

GPU

Nvidia, Intel

CPU

Intel

Ollama version

No response

GiteaMirror added the bug label 2026-04-12 14:37:39 -05:00
Author
Owner

@rick-github commented on GitHub (Jul 30, 2024):

It just so happens that I turned up a server running Ubuntu 24.04 LTS this morning; these are the steps I took, more or less clipped from my shell history.

Starting as root, updated all packages:

apt update
apt upgrade

Installed docker:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc]" \
  https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME-$VERSION_CODENAME}") stable | sudo tee /etc/apt/sources.list.d/docker.list 
apt-get update
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done
apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Verified docker:

docker run hello-world

At this point I created a normal user, added it to the docker group and to sudoers so that I don't need to type passwords:

adduser rick
adduser rick docker
echo "rick ALL=(ALL:ALL) NOPASSWD: ALL" > /etc/sudoers.d/rick

Now, as "rick", verify I can use docker:

docker run hello-world

Install nvidia toolkit:

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
  | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
  | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
  | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
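
After `nvidia-ctk runtime configure --runtime=docker`, the daemon config at /etc/docker/daemon.json typically ends up containing roughly the following (a sketch of what the tool writes; the exact contents depend on any pre-existing daemon configuration):

```json
{
  "runtimes": {
    "nvidia": {
      "args": [],
      "path": "nvidia-container-runtime"
    }
  }
}
```

You can confirm the runtime was registered by looking for `nvidia` under Runtimes in the output of `docker info`.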

Install nvidia drivers:

sudo aptitude install -y nvidia-driver-550-server nvidia-utils-550-server

Load the kernel module:

sudo modprobe nvidia

Check the GPU is available:

nvidia-smi

Check GPU is available to containers:

docker run --rm --gpus all nvidia/cuda:11.3.1-base-ubuntu20.04 nvidia-smi

Create a compose file:

cat > docker-compose.yaml <<EOF
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ./ollama:/root/.ollama
    ports:
      - 11434:11434
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  open-webui:
    image: ghcr.io/open-webui/open-webui
    volumes:
      - ./open-webui:/app/backend/data
    depends_on:
      - ollama
    environment:
      - 'OLLAMA_BASE_URL=http://ollama:11434'
      - 'WEBUI_SECRET_KEY='
      - WEBUI_AUTH=${WEBUI_AUTH-False}
    ports:
      - ${OPEN_WEBUI_PORT-3000}:8080
EOF
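
One subtlety worth noting (my own aside, not from the original comment): because the heredoc delimiter above is unquoted, the shell expands `${WEBUI_AUTH-False}` and `${OPEN_WEBUI_PORT-3000}` while writing the file, baking the defaults in. Docker Compose understands the same `${VAR-default}` syntax, so if you would rather have Compose resolve the values at `docker compose up` time, quote the delimiter. A minimal sketch, using a throwaway `snippet.yaml`:

```shell
# Quoting the delimiter ('EOF') suppresses shell expansion, so the literal
# ${OPEN_WEBUI_PORT-3000} reaches the file and Docker Compose applies the
# default itself when the stack is brought up.
cat > snippet.yaml <<'EOF'
ports:
  - ${OPEN_WEBUI_PORT-3000}:8080
EOF
```
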

Bring up the services:

docker compose up -d

Verify that ollama is running:

curl localhost:11434/api/version
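
If the container needs a moment to start, this check can fail on the first try. A small helper that polls the endpoint until it answers (my own sketch, not from the original comment; `wait_for_url` is a hypothetical name):

```shell
# Poll a URL until it responds or the attempt budget is exhausted.
# Usage: wait_for_url <url> [attempts]; returns 0 on success, 1 on timeout.
wait_for_url() {
  url=$1
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS --max-time 2 "$url" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example: wait_for_url http://localhost:11434/api/version 60 && echo "ollama is up"
```
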

Do first inference with ollama:

docker compose exec -it ollama ollama run llama3 hello

Connected to port 3000 on the new host and did the first conversation with open-webui (I wanted to paste a screencap here but for some reason it doesn't work for me):

llama3:latest 
Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?
Author
Owner

@rick-github commented on GitHub (Jul 30, 2024):

You mentioned in the other ticket that your system broke after rebooting. I added `restart: unless-stopped` to the services, ran `docker compose up -d` to apply the new policy, rebooted the server, and everything still works.
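
The change described here, sketched as a Compose fragment (field name per the Compose spec; this could also live in a `docker-compose.override.yaml` next to the original file):

```yaml
# Restart policy added to both services so they come back after a reboot;
# everything else in the original docker-compose.yaml stays unchanged.
services:
  ollama:
    restart: unless-stopped
  open-webui:
    restart: unless-stopped
```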

Author
Owner

@nicholhai commented on GitHub (Jul 30, 2024):

I am going to try this right now with a fresh install of Ubuntu Server 24.04.

Thanks Rick

Author
Owner

@jmorganca commented on GitHub (Sep 4, 2024):

Thanks @rick-github!

@nicholhai I'll close this for now, but let me know if the issue persists.

Reference: github-starred/ollama#3794