[GH-ISSUE #5017] Using Ollama in a Dockerfile #3176

Closed
opened 2026-04-12 13:39:54 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @Deepansharora27 on GitHub (Jun 13, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5017

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Hi,
I have been trying to use Ollama in my Dockerfile like this:

FROM python:3.10 AS builder
WORKDIR /usr/src/app
ENV PATH="/venv/bin:$PATH"
RUN apt-get update && apt-get install -y git
RUN python -m venv /venv
COPY . /usr/src/app
RUN pip install --no-cache-dir -r requirements.txt
# Ollama Server Builder Stage: 
FROM ollama/ollama:0.1.32 AS OllamaServer
COPY --from=builder . .
WORKDIR /usr/src/app
ENV OLLAMA_HOST=0.0.0.0
ENV OLLAMA_ORIGINS=http://0.0.0.0:11434
RUN nohup bash -c "ollama serve &" && sleep 5 && ollama create llama3-encloud -f /usr/src/app/model/Modelfile
#Final Image Stage:
FROM python:3.10
WORKDIR /usr/src/app
COPY --from=builder /venv /venv
COPY --from=builder /usr/src/app /usr/src/app
COPY --from=OllamaServer . .
EXPOSE 8000
EXPOSE 11434
ENV PATH="/venv/bin:$PATH"
RUN chmod +x /usr/src/app/script.sh
CMD ["/usr/src/app/script.sh"]

In the first builder stage, I build my llamaindex-chainlit application and install its dependencies. In the next stage, the Ollama server stage, I spin up the Ollama server and then create a custom model using a Modelfile. In the final image stage, I copy the artifacts from the Ollama server stage and then try to build the whole application.
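(One thing worth noting about the Ollama server stage: a server started inside a RUN instruction only lives for the duration of that one instruction, and the model that ollama create writes ends up in the server's model store, by default /root/.ollama in the official image. A sketch of a final stage that copies just those pieces explicitly instead of COPY --from=OllamaServer . ., assuming the binary sits at /bin/ollama as in the official image:)

# Final image stage (sketch): copy the binary and the model store explicitly
FROM python:3.10
WORKDIR /usr/src/app
COPY --from=builder /venv /venv
COPY --from=builder /usr/src/app /usr/src/app
COPY --from=OllamaServer /bin/ollama /usr/local/bin/ollama
COPY --from=OllamaServer /root/.ollama /root/.ollama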

Running a container from this image gives me the following error:

  File "/venv/lib/python3.10/site-packages/httpx/_transports/default.py", line 232, in handle_request
    with map_httpcore_exceptions():
  File "/usr/local/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/venv/lib/python3.10/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectError: [Errno 99] Cannot assign requested address
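(For what it's worth, [Errno 99] Cannot assign requested address usually means the client tried to connect to an address that cannot be used as a destination; a classic trigger is pointing an HTTP client at http://0.0.0.0:11434, since 0.0.0.0 is a bind address rather than a connect address. A quick way to check from inside the running container whether anything is listening at all, assuming curl is available:)

# Succeeds only if an Ollama server is actually listening on 11434
curl -sf http://127.0.0.1:11434/api/tags || echo "no server on 11434"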

I know the best option would have been Docker Compose, but that is a constraint: I do not want a Docker Compose based setup and want to get everything bundled within a single container.
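(For the single-container route, script.sh, which is not shown here, has to start the server itself at container start, because nothing launched in a build-time RUN survives into the running container. A minimal sketch, assuming the ollama binary is on PATH in the final image and the app is launched with chainlit from a file named app.py:)

#!/usr/bin/env bash
set -e
# Start the Ollama server in the background...
ollama serve &
# ...wait until it answers...
until curl -sf http://127.0.0.1:11434/api/tags > /dev/null; do sleep 1; done
# ...then run the app in the foreground so the container stays alive
exec chainlit run app.py --host 0.0.0.0 --port 8000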

Any suggestions on this?

OS

macOS

GPU

No response

CPU

Apple

Ollama version

0.1.32

GiteaMirror added the bug label 2026-04-12 13:39:54 -05:00
Author
Owner

@ic4-y commented on GitHub (Jun 13, 2024):

Just glancing at this, this appears to be a Python problem, not an ollama problem.

My guess is that there is some problem with the underlying socket bind or connect that the httpx Python package is surfacing from somewhere in your code. Probably you either can't bind to the socket or you can't get a connection, something like that.

But again, this is not coming from ollama.

And to get around that, it is probably a good idea to use the official ollama Docker image from Docker Hub and run your Python app in a separate container. You don't need Docker Compose to do that; it just makes multiple containers that depend on each other easier to manage.
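(A minimal sketch of that two-container setup with plain docker run; the image and network names here are illustrative:)

# A user-defined network lets the containers reach each other by name
docker network create llm-net
docker run -d --name ollama --network llm-net -v ollama:/root/.ollama ollama/ollama:0.1.32
# The app then reaches the server at http://ollama:11434
docker run -d --name app --network llm-net -p 8000:8000 my-python-app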

Author
Owner

@dhiltgen commented on GitHub (Jun 18, 2024):

The bug likely lies in your Python app, or maybe in script.sh.

If you can isolate this down to just a Dockerfile that doesn't pull in code we can't see, and get it to fail the same way, maybe we can help troubleshoot that part.
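(Something like this would be a self-contained starting point, a sketch in which ollama list stands in for any client call against the server:)

FROM ollama/ollama:0.1.32
# Server and client must run within the same RUN instruction,
# since a background process does not survive across layers
RUN nohup bash -c "ollama serve &" && sleep 5 && ollama list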
