[GH-ISSUE #13647] Model downloading into ollama docker container does not work #71032

Closed
opened 2026-05-04 23:49:15 -05:00 by GiteaMirror · 10 comments

Originally created by @makoit on GitHub (Jan 8, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13647

Originally assigned to: @BruceMacD on GitHub.

What is the issue?

I'm trying to run an Ollama Docker container on macOS. I used the command `docker run -d -v "$(pwd)/container_cache/ollama/models:/root/.ollama" -p 11434:11434 --name ollama ollama/ollama` to pull and run the container, because I want to persist the models in a local path. The container starts fine, but when I try to pull a model, Ollama raises an error:

```
root@c775bd1a07f7:/# ollama run llama3.2
pulling manifest
pulling 966de95ca8a6: 100% ▕████████████████████████████████████████████████████████████████████████████████▏ 1.4 KB
pulling fcc5a6bec9da: 100% ▕████████████████████████████████████████████████████████████████████████████████▏ 7.7 KB
pulling a70ff7e570d9: 100% ▕████████████████████████████████████████████████████████████████████████████████▏ 6.0 KB
pulling 56bb8bd477a5: 100% ▕████████████████████████████████████████████████████████████████████████████████▏ 96 B
pulling 34bb5ab01051: 100% ▕████████████████████████████████████████████████████████████████████████████████▏ 561 B
verifying sha256 digest
Error: digest mismatch, file must be downloaded again: want sha256:dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff, got sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```
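
As an aside, the `got` digest here, `e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855`, is the well-known SHA-256 of empty input, which suggests the server ended up verifying a zero-byte blob. This is easy to check locally:

```shell
# SHA-256 of empty input (use `shasum -a 256` instead of sha256sum on macOS):
printf '' | sha256sum
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855  -
```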

Relevant log output


OS

Docker

GPU

No response

CPU

Apple

Ollama version

ollama version 0.13.5

GiteaMirror added the bug label 2026-05-04 23:49:15 -05:00

@starpit commented on GitHub (Jan 8, 2026):

FYI, in case this is related: we are now seeing these as persistent failures when running in a GitHub Action. Without any changes to our model-pull logic, over the past 24 hours our pull tests went from stable to failing every time. This happens on both macOS and Linux runners.

Our solution was to insert a retry loop. Before each retry, we call `/api/delete` and sleep for 2 seconds. For now at least, this seems to pave over whatever is going on with Ollama (I do not know whether this is needed due to an Ollama change or due to a GitHub Actions change).
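
A minimal sketch of that retry approach, assuming the server listens on the default `localhost:11434` (the model name and retry count are illustrative, not the commenter's actual script):

```shell
MODEL="llama3.2"
for attempt in 1 2 3; do
  ollama pull "$MODEL" && break
  # Clear whatever partial state the failed pull left behind, then back off.
  curl -s -X DELETE http://localhost:11434/api/delete \
       -d "{\"model\": \"$MODEL\"}"
  sleep 2
done
```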


@ajarnold920 commented on GitHub (Jan 8, 2026):

I had a similar issue, and what solved it (temporarily) was deleting the contents of the blobs folder at `/var/lib/docker/volumes/ollama/_data/models` and retrying the pull. There were SHA files left over from models I had previously deleted, which may somehow have been corrupting the download. This solution does not work if you already have models downloaded that you want to keep, though; in my case I was starting fresh.
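
A sketch of that workaround, assuming a named volume called `ollama` as in the path above (note this wipes every downloaded model):

```shell
docker stop ollama
sudo rm -rf /var/lib/docker/volumes/ollama/_data/models/blobs/*
docker start ollama
docker exec -it ollama ollama pull llama3.2   # model name illustrative
```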


@FabianKostenzer commented on GitHub (Jan 8, 2026):

I am facing the same problem.

My `docker-compose.yaml`:

```yaml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - /var/www/ai/ollama-models:/root/.ollama
    ports:
      - 11434:11434
```

I pulled the ollama Docker image today, so I assume I'm running the latest image.

I tried
`docker compose exec ollama ollama pull deepseek-r1:1.5b`
and
`docker compose exec ollama ollama pull gpt-oss:20b`
multiple times.

I deleted the contents of the ollama-models directory between each try and restarted Docker.

I noticed that the same pull command results in a seemingly random set of layer pulls, in a random order.
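
For reference, a sketch of that reset-and-retry cycle, using the paths from the compose file above (destructive, removes all models):

```shell
docker compose down
sudo rm -rf /var/www/ai/ollama-models/*
docker compose up -d
docker compose exec ollama ollama pull deepseek-r1:1.5b
```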

For deepseek I got the following output when trying to pull the model (resetting between each try, as described above):

```
pulling manifest
pulling c5ad996bda6e: 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏  556 B
pulling f4d24e9138dd: 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏  148 B
pulling a85fe2a2e58e: 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏  487 B
verifying sha256 digest
Error: digest mismatch, file must be downloaded again: want sha256:aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc, got sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```

```
pulling manifest
pulling aabd4debf0c8: 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏ 1.1 GB
pulling f4d24e9138dd: 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏  148 B
pulling a85fe2a2e58e: 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏  487 B
verifying sha256 digest
Error: digest mismatch, file must be downloaded again: want sha256:c5ad996bda6eed4df6e3b605a9869647624851ac248209d22fd5e2c0cc1121d3, got sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```

```
pulling manifest
pulling aabd4debf0c8: 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏ 1.1 GB
pulling f4d24e9138dd: 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏  148 B
verifying sha256 digest
Error: digest mismatch, file must be downloaded again: want sha256:c5ad996bda6eed4df6e3b605a9869647624851ac248209d22fd5e2c0cc1121d3, got sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```
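
Note that the `got` digest is again the empty-input SHA-256, so a quick sanity check is to look at the blob sizes on disk; zero-byte files would line up with that digest (path assumes the compose file above and the default `~/.ollama` layout):

```shell
ls -l /var/www/ai/ollama-models/models/blobs/
```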

@pkesling commented on GitHub (Jan 8, 2026):

I'm having this same issue using the latest Docker image. I didn't have this issue yesterday, and it doesn't look like the Docker image version changed between yesterday and today. It doesn't seem to matter which model I choose; they all fail the same way. Removing the blobs/manifests and retrying, as others have suggested, hasn't worked for me.


@rogerdcarvalho commented on GitHub (Jan 8, 2026):

This even happens with older Ollama versions, non-dockerized. I'm wondering if something went awry server-side?


@JoeBouchard commented on GitHub (Jan 8, 2026):

I'm also seeing this issue in Azure Pipelines. It happens during the Docker build step for me: in my Dockerfile, I pull some models. It worked yesterday, but not today. Since it's a fresh pull of the ollama image for a build not linked to a volume, my issue doesn't seem to be cache-related.
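
For context, baking a model into an image at build time usually looks something like this sketch (an assumption about the setup, not the actual Dockerfile; model name illustrative):

```dockerfile
FROM ollama/ollama
# Start the server in the background, give it a crude moment to come up,
# then pull the model so it is baked into the image layer.
RUN ollama serve & sleep 5 && ollama pull llama3.2
```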


@jmorganca commented on GitHub (Jan 8, 2026):

Hi all, we're investigating the issue. Sorry about this.


@udjuraev-ipa commented on GitHub (Jan 8, 2026):

Thanks for looking into this! Experiencing the issue in a WSL2 environment.


@BruceMacD commented on GitHub (Jan 8, 2026):

Thanks for all the reports. The root cause is not 100% clear yet, but I've narrowed it down to a couple of parts of our infrastructure. I believe the problem is resolved at the moment; if anyone is still experiencing it, please let me know.


@makoit commented on GitHub (Jan 9, 2026):

@BruceMacD I tested it again and it now seems to work. If this was the final fix, we can close the issue!?

Reference: github-starred/ollama#71032