[GH-ISSUE #458] Docker container support #62251

Closed
opened 2026-05-03 07:59:48 -05:00 by GiteaMirror · 7 comments

Originally created by @philipempl on GitHub (Sep 2, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/458

Hi there,

Is it possible to run `./ollama run llama2` in a Docker container? I am able to build two Docker containers (server and model); the model container connects to the server and loads the llama model, but when I communicate with the llama2 model, I get the following error:

```
>>> hi
⠋ Error: unexpected end of response
```

Here is the Dockerfile I use for the model container; the server one looks nearly the same.

```dockerfile
# Build stage: compile ollama from source as a static binary
FROM golang:1.20 AS source
RUN apt-get update && apt-get install -y cmake git
WORKDIR /app
RUN git clone https://github.com/jmorganca/ollama.git .
RUN go generate ./...
RUN CGO_ENABLED=1 go build -ldflags '-linkmode external -extldflags "-static"' .

# Runtime stage: run the binary as an unprivileged user
FROM alpine
COPY --from=source /app/ollama /bin/ollama
ARG USER=ollama
ARG GROUP=ollama
RUN addgroup -g 1000 $GROUP && adduser -u 1000 -DG $GROUP $USER
USER $USER:$GROUP
ENTRYPOINT ["/bin/ollama"]
CMD ["run", "llama2", "--verbose"]
```

Thanks a lot for your help.

Best regards,
Philip


@philipempl commented on GitHub (Sep 4, 2023):

```
ollama | 2023/09/04 18:31:32 ggml_llama.go:295: error starting llama.cpp server: error starting the external llama.cpp server: fork/exec /tmp/llama-2755348604/server: no such file or directory
ollama | [GIN] 2023/09/04 - 18:31:32 | 500 | 15.834125ms | 127.0.0.1 | POST "/api/generate"
ollama |
ollama |
ollama |
ollama |
ollama | 2023/09/04 18:31:32 [Recovery] 2023/09/04 - 18:31:32 panic recovered:
ollama | POST /api/generate HTTP/1.1
ollama | Host: 0.0.0.0:11434
ollama | Accept: application/json
ollama | Accept-Encoding: gzip
ollama | Content-Length: 72
ollama | Content-Type: application/json
ollama | User-Agent: ollama/0.0.0 (arm64 linux) Go/go1.20.7
ollama |
ollama |
ollama |
ollama | runtime error: invalid memory address or nil pointer dereference
ollama | /usr/local/go/src/runtime/panic.go:260 (0x4329f7)
ollama | /usr/local/go/src/runtime/signal_unix.go:841 (0x44ab63)
ollama | /usr/local/go/src/os/exec_unix.go:63 (0x4fefac)
ollama | /usr/local/go/src/os/exec.go:138 (0x6a88c7)
ollama | /usr/local/go/src/os/exec_posix.go:67 (0x6a88b4)
ollama | /usr/local/go/src/os/exec.go:123 (0x6a88b0)
ollama | /usr/local/go/src/os/exec/exec.go:449 (0x6a88ac)
ollama | /app/llm/ggml_llama.go:349 (0x76b543)
ollama |         (*llama).Close: llm.Running.Cmd.Cancel()
ollama | /app/llm/ggml_llama.go:296 (0x76b534)
ollama |         newLlama: llm.Close()
ollama | /app/llm/llm.go:80 (0x76e6d3)
ollama |         New: return newLlama(model, adapters, ggmlRunner(), opts)
ollama | /app/server/routes.go:95 (0xa1c1ef)
ollama |         load: llmModel, err := llm.New(model.ModelPath, model.AdapterPaths, opts)
ollama | /app/server/routes.go:173 (0xa1c78f)
ollama |         GenerateHandler: if err := load(c.Request.Context(), model, req.Options, sessionDuration); err != nil {
ollama | /go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 (0xa04f7b)
ollama | /go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/recovery.go:102 (0xa04f5c)
ollama | /go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 (0xa0425b)
ollama | /go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/logger.go:240 (0xa04238)
ollama | /go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 (0xa0339b)
ollama | /go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:620 (0xa030c4)
ollama | /go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:576 (0xa02cc3)
ollama | /usr/local/go/src/net/http/server.go:2936 (0x68271f)
ollama | /usr/local/go/src/net/http/server.go:1995 (0x67e4f7)
ollama | /usr/local/go/src/runtime/asm_arm64.s:1172 (0x465e93)
ollama |
ollama | [GIN] 2023/09/04 - 18:31:38 | 200 | 3.458µs | 127.0.0.1 | HEAD "/"
```
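For context: the fork/exec failure above points at the llama.cpp runner that ollama extracts to /tmp at startup, not at the ollama binary itself. On Alpine this commonly means the extracted binary is linked against glibc, whose dynamic loader musl-based images don't ship, so exec reports "no such file or directory" even though the file exists. A diagnostic sketch, assuming a container named `ollama` (the temp-directory name varies per run):

```bash
# Shell into the running container
docker exec -it ollama sh

# Confirm the runner was actually extracted
ls -l /tmp/llama-*/server

# An exec failure on an existing file usually means a missing dynamic loader;
# on musl, a glibc-linked binary shows unresolved libraries here
ldd /tmp/llama-*/server
```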


@priamai commented on GitHub (Sep 5, 2023):

I think I have a working Docker solution; let me dig up my notes!


@priamai commented on GitHub (Sep 5, 2023):

This works for me:

```bash
#!/bin/bash

docker build . -t ollama
# -d must come before the image name for Docker to detach the container
docker run -d -p 11434:11434 ollama

curl -X POST http://localhost:11434/api/pull -d '{"name": "llama2"}'
```
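Once the pull completes, the same port serves generation requests. A follow-up sketch, assuming the `/api/generate` request shape from that era (`model` plus `prompt`; the response streams back as JSON lines):

```bash
curl -X POST http://localhost:11434/api/generate \
  -d '{"model": "llama2", "prompt": "Why is the sky blue?"}'
```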

@priamai commented on GitHub (Sep 5, 2023):

The Docker Compose file, I think, is almost the same:

version: "3.3"
services:
  ollama:
    build:
      context: .
    env_file: .env
    ports:
      - "${HOST_API_PORT:-11434}:${CONTAINER_API_PORT:-11434}"
    stdin_open: true
    tty: true
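A usage sketch for that Compose file, assuming it sits next to the Dockerfile and that the referenced `.env` file exists (Compose fails to start otherwise):

```bash
docker compose up -d --build

# Pull a model through the published port (11434 unless HOST_API_PORT overrides it)
curl -X POST http://localhost:11434/api/pull -d '{"name": "llama2"}'
```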

@philipempl commented on GitHub (Sep 5, 2023):

Thanks a lot @priamai for your fast response. I have a few questions about that:

- Do you also use the same Dockerfile as I do?
- How do you run commands like `ollama run llama2`?
- How do you request `/api/generate`?

Have a nice day :)


@jmorganca commented on GitHub (Sep 7, 2023):

There's now a Dockerfile that's kept up to date: https://github.com/jmorganca/ollama/blob/main/Dockerfile 🎉
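A build-and-run sketch against that maintained Dockerfile, assuming the image's default command starts the server and that pulled models live under /root/.ollama (the volume and container names are illustrative):

```bash
git clone https://github.com/jmorganca/ollama.git
cd ollama
docker build -t ollama .

# Persist pulled models in a named volume across container restarts
docker run -d --name ollama -p 11434:11434 -v ollama:/root/.ollama ollama
```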


@philipempl commented on GitHub (Sep 7, 2023):

@jmorganca Thanks a lot, this Dockerfile works perfectly 🥳 Would it be an option to replace that with the Dockerfile in the main directory?

However, thanks for creating and maintaining this awesome project 🙂
