[GH-ISSUE #11052] Deploy a model manually #33050

Closed
opened 2026-04-22 15:14:21 -05:00 by GiteaMirror · 3 comments

Originally created by @fishfl on GitHub (Jun 12, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11052

What is the issue?

We have a staging environment and a production environment. The production environment can't access the public internet, so we can't pull a model from your library; for example, `ollama pull llama3.2` doesn't work.

However, we can access the internal network (WLAN) and download model files from local storage such as HDFS.

So, can we run `ollama pull llama3.2` in the staging environment to download the model files, upload those files to our local storage, and then, in the production environment, download the model files from local storage and move them to a host path like `/root/.ollama/models/`?

Would that work with Ollama? Can Ollama recognize the model from the files on the host instead of pulling it from your library?

Thanks very much!
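For concreteness, a minimal sketch of the workflow being asked about, assuming the `hdfs` CLI is available on both hosts; the HDFS path and archive name are made up for illustration:

```shell
# 1. Staging (has internet access): pull the model into the local store.
ollama pull llama3.2

# 2. Package the model store (~/.ollama/models by default) and upload it
#    to shared storage; the HDFS path here is just an example.
tar -czf llama3.2-models.tar.gz -C ~/.ollama models
hdfs dfs -mkdir -p /models/ollama
hdfs dfs -put llama3.2-models.tar.gz /models/ollama/

# 3. Production (no internet): fetch the archive and unpack it into the
#    directory the Ollama server reads from.
hdfs dfs -get /models/ollama/llama3.2-models.tar.gz .
tar -xzf llama3.2-models.tar.gz -C /root/.ollama/
```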

Relevant log output


OS

Linux

GPU

No response

CPU

Intel

Ollama version

0.9.1

GiteaMirror added the question label 2026-04-22 15:14:21 -05:00

@rick-github commented on GitHub (Jun 12, 2025):

Yes, just copy `$OLLAMA_HOME/.ollama/models` from staging to local storage, and then to `/root/.ollama/models` in production.
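A sketch of that copy, with two details worth knowing: the store defaults to `~/.ollama/models` for the user running the server, it can be relocated with the `OLLAMA_MODELS` environment variable, and both of its subdirectories must travel together. The production hostname and custom path below are hypothetical:

```shell
# Layout of the store on the staging machine:
#   ~/.ollama/models/manifests/  -- maps model names/tags to blob digests
#   ~/.ollama/models/blobs/      -- content-addressed weight files

# Copy the tree as-is, preserving the hierarchy (hostname is an example).
rsync -a ~/.ollama/models/ root@prod-host:/root/.ollama/models/

# If production keeps models somewhere else, point the server at it:
OLLAMA_MODELS=/data/ollama/models ollama serve
```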


@hlstudio commented on GitHub (Jun 13, 2025):

Same as above: in a fresh Ollama environment, download only the model you need with `ollama pull llama3.2`. You can confirm that only the llama3.2 model is present by using `ollama list`. Then copy the current models folder directly into the production environment's models directory, making sure the folder hierarchy matches, and the production environment will pick up the new llama3.2 model.
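A sketch of the checks described above; the prompt string is just an example:

```shell
# On the fresh staging install: confirm only the intended model is present.
ollama list

# After copying the models folder into place on production, confirm the
# model is visible and actually loads. (Restarting the server is a safe
# step if the model does not appear in the list.)
ollama list
ollama run llama3.2 "Say hello"
```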


@fishfl commented on GitHub (Jun 13, 2025):

Got it, thank you very much. It works!
