[GH-ISSUE #11495] Unable to pull Ollama model even though I did #33353

Open
opened 2026-04-22 15:55:52 -05:00 by GiteaMirror · 1 comment

Originally created by @shermaine0603 on GitHub (Jul 22, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11495

Hi all, I am trying to run an Ollama model (llava:7b) inside a Docker container.

I first created a start_services.sh file that runs `ollama serve` in the background (since it blocks when run in the foreground):

```
#!/bin/sh

# Start Ollama in the background
ollama serve &

# Wait for Ollama to start
sleep 5

# Pull the required model(s)
ollama pull llava:7b

# Start your Python application
python read_excel.py

# Keep the container alive
tail -f /dev/null
```
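
(A fixed `sleep 5` can race the server startup. A sketch of a variant that instead polls the API until the server answers — assuming `curl` is available in the container, which the Dockerfile below installs; the 30-try cap is an arbitrary choice, not anything from the original script:)

```
#!/bin/sh

# Start Ollama in the background, as before
ollama serve &

# Poll the version endpoint instead of sleeping a fixed 5 seconds;
# /api/version only answers once the server is actually up
tries=0
until curl -sf http://localhost:11434/api/version >/dev/null; do
  tries=$((tries + 1))
  if [ "$tries" -ge 30 ]; then
    echo "ollama serve never came up" >&2
    exit 1
  fi
  sleep 1
done

# Pull the required model(s)
ollama pull llava:7b

# Start your Python application
python read_excel.py

# Keep the container alive
tail -f /dev/null
```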

Then I built a Docker image from this Dockerfile:

```
FROM ollama/ollama:latest

WORKDIR /app

# Install Python and dependencies
RUN apt-get update && apt-get install -y python3 python3-pip -y curl \
    && ln -s /usr/bin/python3 /usr/bin/python \
    && curl -fsSL https://ollama.com/install.sh | sh

RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y python3-venv python3-sklearn

# Create and activate a virtual environment
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# Upgrade pip and install scikit-learn
RUN pip install --upgrade pip
RUN pip install scikit-learn

# RUN pip freeze > requirements.txt
# RUN pip install -r requirements.txt
RUN pip3 install ollama
RUN pip install Openpyxl
RUN pip install Pillow
RUN pip install pandas
RUN pip install numpy

# Make the startup script executable
COPY start_services.sh .
RUN chmod +x start_services.sh

# Expose the Ollama API port
EXPOSE 11434

# Override the ENTRYPOINT from the base image
ENTRYPOINT []

# Run the startup script
CMD ["./start_services.sh"]
```

As you can see, the last line of the Dockerfile runs the script.

However, when I finally tried to run my Python script (which calls llava) inside the Docker container, it gave the following error:

```
Traceback (most recent call last):
  File "/app/read_excel.py", line 15, in <module>
    res = ollama.chat(
          ^^^^^^^^^^^^
  File "/opt/venv/lib/python3.12/site-packages/ollama/_client.py", line 342, in chat
    return self._request(
           ^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.12/site-packages/ollama/_client.py", line 180, in _request
    return cls(**self._request_raw(*args, **kwargs).json())
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.12/site-packages/ollama/_client.py", line 124, in _request_raw
    raise ResponseError(e.response.text, e.response.status_code) from None
ollama._types.ResponseError: model "llava:7b" not found, try pulling it first (status code: 404)
```

I don't understand how I am encountering this error even though I did pull the model in start_services.sh. How can I resolve this issue?
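
(For what it's worth, the same `ollama` Python package that raises the `ResponseError` in the traceback also exposes `ollama.pull`, so the client can recover on its own if the pull in the startup script raced or failed. A minimal sketch — the message content is a placeholder, not the asker's actual read_excel.py:)

```
import ollama

model = "llava:7b"
messages = [{"role": "user", "content": "hi"}]  # placeholder prompt

# If the server reports the model as missing (404), pull it on
# demand and retry the chat call once.
try:
    res = ollama.chat(model=model, messages=messages)
except ollama.ResponseError as e:
    if e.status_code == 404:
        ollama.pull(model)  # blocks until the download finishes
        res = ollama.chat(model=model, messages=messages)
    else:
        raise

print(res["message"]["content"])
```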

GiteaMirror added the docker label 2026-04-22 15:55:52 -05:00

@nehagawhane commented on GitHub (Aug 1, 2025):

Sometimes for me it takes longer than 5 seconds. Have you tried:

```
ollama pull <model>
until ollama list | grep -q <model>; do
  sleep 5
done
```
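
(If the pull itself fails outright — a network hiccup, a bad tag — that loop spins forever. A capped variant of the same idea, using the issue's concrete llava:7b tag and an arbitrary 60-try limit of my choosing:)

```
ollama pull llava:7b
n=0
until ollama list | grep -q "llava:7b"; do
  n=$((n + 1))
  if [ "$n" -ge 60 ]; then
    echo "llava:7b never appeared in ollama list" >&2
    exit 1
  fi
  sleep 5
done
```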

Reference: github-starred/ollama#33353