[GH-ISSUE #4847] Ollama pull model without internet when run with docker #3066

Closed
opened 2026-04-12 13:30:11 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @david101-hunter on GitHub (Jun 6, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4847

When I run `ollama run gemma:2b` inside Docker, it downloads the `blobs` and `manifests` folders into `/root/.ollama/models`.

In a private environment without internet access, I copy the `blobs` & `manifests` folders over, but when I run `ollama run <model name>`, it does not work.

How can I achieve this?

GiteaMirror added the model label 2026-04-12 13:30:11 -05:00

@sealad886 commented on GitHub (Jun 11, 2024):

The Ollama cache is really difficult to manually move because the manifest files for each model implement a symlink-like reference to the underlying blob files that is different from OS-level symlinking; moving the files or changing any of the parent directories' names will break the cache completely.

Note that my solution assumes that your Docker installation binds /root to your $HOME directory. If your virtual storage is configured differently, you may have to tweak this a bit.

To accomplish what you're asking, you would need to first (prior to downloading models) create your own symlink at $HOME/.ollama/models that points to a directory that you intend to use for your downloaded models. For example:

```bash
mkdir -p "$HOME/.ollama/sandbox/models"
# -F replaces an existing directory symlink (BSD/macOS ln; with GNU ln use -sfn)
ln -s -F "$HOME/.ollama/sandbox/models" "$HOME/.ollama/models"
```

Ollama will still look for models at $HOME/.ollama/models (and that symlink pointer will need to persist), but you could conceivably use this technique with your own setup; tweaks may be required depending on what you're doing with your environments.

Alternatively, you can set the OLLAMA_MODELS environment variable to point to a different download directory. Again, you can't move the cache files or change the names of parent directories, but you could set this to a directory that has shared access with your internet-accessing environment.
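As a concrete sketch of the `OLLAMA_MODELS` approach (the paths here are illustrative, not required names):

```shell
# Point Ollama at a shared model directory instead of the default
# $HOME/.ollama/models. /srv/shared/ollama-models is a placeholder path.
export OLLAMA_MODELS=/srv/shared/ollama-models

# With the official Docker image, the equivalent is to set the variable
# and bind-mount the shared directory into the container, so models
# pulled in an internet-connected environment are visible offline:
docker run -d \
  -e OLLAMA_MODELS=/models \
  -v /srv/shared/ollama-models:/models \
  -p 11434:11434 \
  ollama/ollama
```

The key point is that the directory tree under `OLLAMA_MODELS` (both `blobs` and `manifests`) must be transferred as-is, without renaming any parent directories.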


@david101-hunter commented on GitHub (Oct 29, 2024):

thanks


@mchiang0610 commented on GitHub (Nov 21, 2024):

Hey, for using Ollama offline, you can import models privately (or pull them in advance of disconnecting from the internet):

https://github.com/ollama/ollama/blob/main/docs/import.md

Let me know if this helps! Thank you so much!
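For reference, a minimal sketch of the import flow that doc describes, assuming you have a GGUF weights file on disk (`gemma-2b.gguf` is a placeholder name):

```shell
# Create a Modelfile pointing at the local weights file (per docs/import.md).
cat > Modelfile <<'EOF'
FROM ./gemma-2b.gguf
EOF

# Build a local model from it, then run it -- no network access needed.
ollama create gemma-local -f Modelfile
ollama run gemma-local
```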


Reference: github-starred/ollama#3066