[GH-ISSUE #1214] Cache models for system restarts to not download again in docker #47131

Closed
opened 2026-04-28 03:20:44 -05:00 by GiteaMirror · 2 comments

Originally created by @peteh on GitHub (Nov 20, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1214

I wrote a docker compose file and thought I had mapped the right cache folder.
However, after a system restart the model is downloaded again.

My goal is to map the model cache directory to my local disk so that the same model is not re-downloaded after a restart.

The .ollama folder contains a number of sha256 files, which appear to be the downloaded model files rather than a single final model.

My current docker compose file:
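For context, those sha256 files are the model data itself: Ollama stores each model layer as a content-addressed blob whose filename is derived from the SHA-256 digest of its contents (the exact layout under ~/.ollama/models varies by version, so treat the path as an assumption). A minimal sketch of the naming scheme:

```shell
# Content addressing: a blob's name is derived from the digest of its own
# contents, so the same layer always maps to the same file and a completed
# download can be verified or reused instead of fetched again.
printf 'hello' > /tmp/blob-demo
digest=$(sha256sum /tmp/blob-demo | awk '{print $1}')
echo "sha256-$digest"
# → sha256-2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```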

```
version: '3'
services:
  ollama:
    build: .
    image: ollama/ollama
    container_name: ollama
    volumes:
      - ./ollama:/root/.ollama
```
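One thing worth checking with this setup: a relative bind mount like `./ollama` resolves against the directory `docker compose` is run from, so starting the stack from a different directory (for example via a system service after reboot) would produce an empty cache and a fresh download. A named volume sidesteps that; a sketch (the volume name `ollama-data` is illustrative):

```
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    volumes:
      - ollama-data:/root/.ollama   # named volume: persists regardless of working directory
volumes:
  ollama-data:
```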

@mxyng commented on GitHub (Nov 20, 2023):

I'm not able to reproduce this. The models directory, along with the other files in the volume mount, is persisted across container restarts. Any models downloaded in a previous container should still be available.

Can you verify it's downloading the same model? It's possible the model has been updated, in which case a pull will re-download the updated parts.
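One hedged way to check (assuming the host-side bind mount from the compose file above): snapshot the blob filenames before a restart, pull the same model again, and diff the listings; an unchanged listing means nothing was re-downloaded.

```shell
BLOBS=./ollama/models/blobs          # host-side path of the bind mount (assumption)
ls "$BLOBS" > /tmp/blobs-before 2>/dev/null || true
# Restart the stack and pull the same model again, e.g.:
#   docker compose restart && docker compose exec ollama ollama pull <model>
ls "$BLOBS" > /tmp/blobs-after 2>/dev/null || true
diff /tmp/blobs-before /tmp/blobs-after && echo "blob cache unchanged"
```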


@peteh commented on GitHub (Nov 21, 2023):

You're right, I can no longer reproduce it either.
I'll try again later with the big model. It happened for me with llama2uncensored:70b, and it was frustrating to see the 40 GB download start over.

For now I'll close the issue and assume I did something wrong earlier. Thanks for helping, and sorry to have somewhat wasted your time.


Reference: github-starred/ollama#47131