[GH-ISSUE #10755] failed to run 'ollama pull llama3.2-vision' with the docker image built by myself #32824

Closed
opened 2026-04-22 14:40:39 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @guoyejun on GitHub (May 17, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10755

What is the issue?

It works without any error with the official docker image, as below:

docker run -e HTTPS_PROXY=...  --gpus all -p 11434:11434 --name yjguo_ollama_container ollama/ollama
docker exec -it yjguo_ollama_container ollama pull llama3.2-vision

I made only a few changes to the Dockerfile and build_docker.sh, based on the latest code (333e360422):
a. add a proxy
b. build only for amd64
c. add some libraries
See more detail at 042df909ed

And I failed to run 'ollama pull llama3.2-vision' with my docker image, as below:

docker run -e HTTPS_PROXY=...  --gpus all -p 11434:11434 --name my_container ollama/ollama:0.7.0-9-g042df90
docker exec -it my_container ollama pull llama3.2-vision



Relevant log output

pulling manifest
Error: pull model manifest: 412:

The model you are attempting to pull requires a newer version of Ollama.

Please download the latest version at:

        https://ollama.com/download

OS

linux, docker

GPU

No response

CPU

No response

Ollama version

333e360422

GiteaMirror added the bug label 2026-04-22 14:40:39 -05:00
Author
Owner

@rick-github commented on GitHub (May 17, 2025):

The ollama library uses the version reported by the connecting ollama server to verify that the server can run the model being pulled. You can work around this either by setting the version to a compatible value when building the server, or using the official docker ollama image to pull the model, and then switch back to your custom ollama image to run it.
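As a sketch of the first workaround, a version string can be embedded into the server binary at build time via Go linker flags. The exact `-X` variable path below is an assumption based on ollama's `version` package; verify it against the source tree you are building:

```shell
# Hypothetical sketch: embed a release-style version (instead of a dev/git
# version) so the registry's minimum-version check passes.
# The -X target path is an assumption; confirm the package path in your tree.
go build \
  -ldflags "-X github.com/ollama/ollama/version.Version=0.7.0" \
  -o ollama .
```

When building through the project's docker scripts, the equivalent is usually passing the version through whatever build argument the Dockerfile exposes for these linker flags.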

Author
Owner

@guoyejun commented on GitHub (May 18, 2025):

thanks @rick-github

You can work around this either by setting the version to a compatible value when building the server

could you share how I can find the correct version?

using the official docker ollama image to pull the model, and then switch back to your custom ollama image to run it

I tried it, but hit the same issue. It looks like the model pulled by one container cannot be used by another container. Did I miss anything?

docker run ... -p 11434:11434 --name official_container ollama/ollama
docker exec -it official_container ollama pull llama3.2-vision
docker exec -it official_container ollama run llama3.2-vision

# stop official_container

docker run ... -p 11434:11434 --name my_container ollama/ollama:0.7.0-9-g042df90
docker exec -it my_container ollama run llama3.2-vision

pulling manifest
Error: pull model manifest: 412:
The model you are attempting to pull requires a newer version of Ollama.
Please download the latest version at:
        https://ollama.com/download

BTW, the reason I only build docker for linux/amd64 is that I hit the issue below with the default settings. Both issues mention 'manifest', so I'm not sure if they are related.

ERROR: docker exporter does not currently support exporting manifest lists
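(For context on that error: it typically appears when `docker buildx` builds for multiple platforms and tries to load the result into the local daemon, which cannot accept a manifest list. A hedged sketch of the usual workarounds, with illustrative tag names:)

```shell
# Workaround 1: build a single platform so the result can be loaded locally.
docker buildx build --platform linux/amd64 --load -t ollama-custom:latest .

# Workaround 2: keep the multi-platform build, but push to a registry
# instead of loading into the local daemon.
docker buildx build --platform linux/amd64,linux/arm64 --push \
  -t myregistry.example.com/ollama-custom:latest .
```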
Author
Owner

@rick-github commented on GitHub (May 18, 2025):

could you share how I can find the correct version?

The correct version is 0.7.0 or later.

I tried it, but hit the same issue. It looks like the model pulled by one container cannot be used by another container. Did I miss anything?

Are you using persistent storage for the models? If you pull a model into a container without it, the model will not survive a restart.

ERROR: docker exporter does not currently support exporting manifest lists

I've not seen this error when using ollama. What command are you running that causes this?

Author
Owner

@guoyejun commented on GitHub (May 18, 2025):

The correct version is 0.7.0 or later.

I'm using the latest code on GitHub (333e360422), but still hit the issue.

Are you using persistent storage for the models? If you pull a model into a container without it, the model will not survive a restart.

I don't know the answer. I just run the docker and ollama commands (as shown above) without any settings for persistent storage. (Maybe ollama has an option to set the path for pulled models?)

What command are you running that causes this?

I see this issue with scripts/build_docker.sh on the original code, and there's no such issue when running PLATFORM="linux/amd64" scripts/build_docker.sh

Author
Owner

@rick-github commented on GitHub (May 18, 2025):

I don't know the answer. I just run the docker and ollama commands (as shown above) without any settings for persistent storage. (Maybe ollama has an option to set the path for pulled models?)

The command above has ..., so I don't know what command you are running.

Author
Owner

@guoyejun commented on GitHub (May 18, 2025):

The ... is just -e HTTPS_PROXY=$https_proxy

Author
Owner

@rick-github commented on GitHub (May 18, 2025):

Then you don't have persistent storage.

docker run -e HTTPS_PROXY=$https_proxy --gpus all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
Author
Owner

@guoyejun commented on GitHub (May 19, 2025):

I see, OLLAMA_MODELS can be used to set the path.

Thanks, I will close the issue.
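As a sketch of that alternative (host path and container path here are illustrative): bind-mount a host directory and point OLLAMA_MODELS at it, so any container image, official or custom, sees the same model store:

```shell
# Keep models on the host so they survive container removal and are shared
# between the official image (for pulling) and a custom image (for running).
docker run --gpus all \
  -v /data/ollama-models:/models \
  -e OLLAMA_MODELS=/models \
  -p 11434:11434 --name my_container ollama/ollama:0.7.0-9-g042df90
```

With a named volume mounted at /root/.ollama (the image's default model location, as in the command above), setting OLLAMA_MODELS is not required.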


Reference: github-starred/ollama#32824