[GH-ISSUE #546] Request: docker compose support for Ollama server #251

Closed
opened 2026-04-12 09:46:20 -05:00 by GiteaMirror · 17 comments

Originally created by @jamesbraza on GitHub (Sep 17, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/546

It would be really nice if Ollama supported docker compose for the Ollama server.

This would enable one to run:

  • docker compose up: start the Ollama server
  • docker compose down: stop the Ollama server

docker compose imo has two benefits:

  • A bit easier than having to deal with multiprocessing associated with ./ollama serve
  • Would enable Ollama server to be more OS independent, by outsourcing platform support to Docker

For reference, LocalAI (https://github.com/go-skynet/LocalAI) supports this, and it works flawlessly, without having to deal with brew installs and compilation.

Perhaps https://github.com/sickcodes/Docker-OSX could be used as the base image, since Ollama currently only supports macOS-based installations.
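For illustration, here is a minimal sketch of what such a compose file could look like, assuming the since-published ollama/ollama image and its default model directory /root/.ollama:

docker-compose.yml:

services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama

volumes:
  ollama: {}

With this in place, docker compose up -d starts the server and docker compose down stops it, matching the workflow requested above.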

GiteaMirror added the docker and feature request labels 2026-04-12 09:46:20 -05:00

@philipempl commented on GitHub (Sep 18, 2023):

Hi,
Ollama maintains a Dockerfile (https://github.com/jmorganca/ollama/blob/main/Dockerfile) that I modified a bit by adding an entrypoint and a Docker compose file, which should work OS independently. In my version, I assume that the Dockerfile and the entrypoint.sh file are in the docker/ollama subdirectory. I hope this works for you:

docker-compose.yml:


version: '3'
services:
  ollama:
    hostname: ollama
    container_name: ollama
    build:
      context: ./docker/ollama
      dockerfile: Dockerfile
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/app
    networks:
      - net

volumes:
  ollama_data:
    driver: local

networks:
  net:
    driver: bridge

Dockerfile:

# Stage 1: Build the binary
FROM golang:alpine AS builder

# Install required dependencies
RUN apk add --no-cache git build-base cmake

# Set the working directory within the container
WORKDIR /app

# Clone the source code from the GitHub repository
RUN git clone https://github.com/jmorganca/ollama.git .

# Build the binary with static linking
RUN go generate ./... \
    && go build -ldflags '-linkmode external -extldflags "-static"' -o .

# Stage 2: Create the final image
FROM alpine

ENV OLLAMA_HOST=0.0.0.0

# Install required runtime dependencies
RUN apk add --no-cache libstdc++ curl

# Copy the model definition file into the container
COPY Modelfile /Modelfile

# Copy the custom entry point script into the container
COPY entrypoint.sh /entrypoint.sh

# Make the script executable
RUN chmod +x /entrypoint.sh

# Create a non-root user
ARG USER=ollama
ARG GROUP=ollama
RUN addgroup $GROUP && adduser -D -G $GROUP $USER

# Copy the binary from the builder stage
COPY --from=builder /app/ollama /bin/ollama

USER $USER:$GROUP

ENTRYPOINT ["/entrypoint.sh"]

entrypoint.sh:

#!/bin/sh

# Start the Ollama server in the background
/bin/ollama serve &

# Give the server a moment to come up
sleep 5

# Pull the llama2 model via the compose service hostname (localhost would also work)
curl -X POST http://ollama:11434/api/pull -d '{"name": "llama2"}'

sleep 10

# Keep the container running in the foreground
tail -f /dev/null
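A fixed sleep can race the server on slower hardware. A variant of entrypoint.sh that polls the API until it answers, under the same assumptions as above, might look like:

#!/bin/sh
/bin/ollama serve &

# Wait until the server actually responds instead of sleeping a fixed interval
until curl -sf http://localhost:11434/ > /dev/null; do
  sleep 1
done

# Pull the model, then keep the container alive
curl -X POST http://localhost:11434/api/pull -d '{"name": "llama2"}'
tail -f /dev/null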

Note that you can modify entrypoint.sh as you wish, for example to create your own Modelfiles, as sketched below.
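For example, a hypothetical Modelfile (the file the Dockerfile above already copies to /Modelfile) might be:

FROM llama2
PARAMETER temperature 0.7
SYSTEM You are a concise assistant.

which entrypoint.sh could register in place of (or after) the pull, using a hypothetical model name:

/bin/ollama create my-model -f /Modelfile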

Using this, you can publish your own Ollama images on Docker Hub or JFrog Artifactory.
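Before publishing, the stack can be built and run locally; a sketch from the repository root (the Modelfile referenced by the COPY would live in docker/ollama too):

docker compose up -d --build   # build the image and start the server
docker compose down            # stop and remove the container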


@jamesbraza commented on GitHub (Sep 18, 2023):

Oh wow, thanks! It would be cool if someone listed this image on Docker Hub. Interesting use of a multi-stage build.

It'd be cool if you upstreamed your changes to this repo in a PR because you:

  • Have nice code comments (over the currently-present Dockerfile, which has none)
  • Created the missing docker-compose.yaml

@philipempl commented on GitHub (Sep 18, 2023):

I agree, it would be cool to have an image for each model, e.g. ollama:llama27GB.... If I have the time, I will submit a pull request in the ollama repo and create base images on Docker Hub.


@jamesbraza commented on GitHub (Sep 18, 2023):

Oh I just meant one general Docker image for the Ollama server, and users could run docker container exec to pull 1+ models into the container, or use volume mounting somehow.
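With the official image, a sketch of that exec workflow (assuming the container is named ollama):

# Pull a model into an already-running Ollama container
docker container exec -it ollama ollama pull llama2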


@oskarhane commented on GitHub (Sep 18, 2023):

> I agree, it would be cool to have an image for each model, e.g. ollama:llama27GB.... If I have the time, I will submit a pull request in the ollama repo and create base images on Docker Hub.

@philipempl

That would be incredibly useful. I don't see a use case where anyone would want to use an image without a model already present.


@philipempl commented on GitHub (Sep 19, 2023):

> > I agree, it would be cool to have an image for each model, e.g. ollama:llama27GB.... If I have the time, I will submit a pull request in the ollama repo and create base images on Docker Hub.
>
> @philipempl
>
> That would be incredibly useful. I don't see a use case where anyone would want to use an image without a model already present.

The only thing that might limit Docker image creation is the fact that ollama, as far as I know, detects the hardware on the host machine during creation. But let's give it a try....


@jamesbraza commented on GitHub (Sep 22, 2023):

TIL of https://hub.docker.com/u/ollama's existence, so I guess one can just directly configure docker-compose.yaml for macOS, no need to worry about docker build/buildx.


@eddywashere commented on GitHub (Sep 26, 2023):

I might have missed this, but for the Ollama Docker Hub image, what container-side path should folks mount a volume to in order to persist downloaded models? Is anything like a container restart needed to pick up the changes?


@jamesbraza commented on GitHub (Sep 26, 2023):

@eddywashere yeah, here is the Dockerfile (https://github.com/jmorganca/ollama/blob/main/Dockerfile) underlying that Docker Hub image. You can see the WORKDIR there.

> Is anything like a container restart needed to pick up the changes?

You don't need to restart the ollama serve process when you download a model, so similarly I don't think a container restart is necessary.
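For reference, the command in the official docs mounts a named volume at /root/.ollama, which is where the server keeps downloaded models:

# Persist models across container restarts via a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama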


@jamesbraza commented on GitHub (Oct 3, 2023):

Looks like https://github.com/jmorganca/ollama/pull/440 is working on this


@technovangelist commented on GitHub (Dec 4, 2023):

I think all the issues in this have been addressed. You can find the Ollama docker image at https://hub.docker.com/r/ollama/ollama. I will go ahead and close the issue now. If you think there is anything we left out, reopen it and we can address it. Thanks for being part of this great community.


@jamesbraza commented on GitHub (Dec 4, 2023):

I don't quite feel this was resolved; having a Docker image on Docker Hub is not a docker compose config. To be more explicit, the resolution I was hoping for is an Ollama-official Docker compose config:

git clone git@github.com:jmorganca/ollama.git
cd ollama
# The below starts Ollama server
docker compose up

# Interact with the server

# Then when you're ready
docker compose down

@technovangelist commented on GitHub (Dec 9, 2023):

Ok, thanks for clarifying. I'll go ahead and reopen.


@jinnabaalu commented on GitHub (Jan 29, 2024):

Here is what I am using while running on a Raspberry Pi or Mac:

version: '3.8'

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports: ["11434:11434"]
    volumes:
      - ollama:/root/.ollama
    pull_policy: always
    tty: true
    restart: unless-stopped

  ollama-webui:
    image: ghcr.io/ollama-webui/ollama-webui:main
    container_name: ollama-webui
    ports: ["3000:8080"]
    volumes:
      - ollama-webui:/app/backend/data
    depends_on:
      - ollama
    environment:
      - 'OLLAMA_API_BASE_URL=http://ollama:11434/api'
    restart: unless-stopped

volumes:
  ollama: {}
  ollama-webui: {}
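A typical session with this file might then be (assuming it is saved as docker-compose.yml):

docker compose up -d                             # start Ollama and the web UI
docker compose exec ollama ollama pull llama2    # pull a model into the running server
docker compose down                              # stop both services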

@tiredcisadmin commented on GitHub (Jan 29, 2024):

I just followed the Docker instructions in the ollama-webui README. It does require pulling the git repo and building some things locally, but it is running well. I may switch at some point, as I don't like the complexity of using build scripts I don't have time to understand, but it's a good solution to get up and running.

ollama-webui - Installing Ollama and Ollama Web UI Together Using Docker Compose: https://github.com/ollama-webui/ollama-webui?tab=readme-ov-file#installing-ollama-and-ollama-web-ui-together


@ozbillwang commented on GitHub (Oct 12, 2024):

> Here is what I am using while running on a Raspberry Pi or Mac:
>
>   ollama-webui:
>     image: ghcr.io/ollama-webui/ollama-webui:main
>     container_name: ollama-webui
>     ports: ["3000:8080"]
>     volumes:
>       - ollama-webui:/app/backend/data
>     depends_on:
>       - ollama
>     environment:
>       - 'OLLAMA_API_BASE_URL=http://ollama:11434/api'
>     restart: unless-stopped
>
> volumes:
>   ollama: {}
>   ollama-webui: {}

@jinnabaalu

Why is this UI image so huge?

ghcr.io/ollama-webui/ollama-webui main              d5a5c1126b5d   7 months ago        3.41GB

Is there any smaller ollama-webui image I can choose?


@jmorganca commented on GitHub (Dec 23, 2024):

Hi folks, and thanks for the issue @jamesbraza. Given the different permutations of docker compose files possible with Ollama, we haven't published one yet. This may change in the future; however, for the time being we're sticking to the abstraction of the container image itself: https://github.com/ollama/ollama/blob/main/docs/docker.md

Reference: github-starred/ollama#251