[GH-ISSUE #1601] Error: 403 on pulling manifest #62923

Closed
opened 2026-05-03 10:50:06 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @honggyukim on GitHub (Dec 19, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1601

Hi,

Thanks very much for building this great project!

I would like to set up ollama on my office's internal Linux server, but pulling pre-trained models fails as follows.

```
# installation
$ curl https://ollama.ai/install.sh | sh

# run
$ ollama run llama2
pulling manifest
Error: 403:
```

I tested it at home before and it worked fine; it only fails on the office's internal server, possibly due to a security policy.

Could anyone please let me know where ollama downloads the pre-trained models from? I need to know the URL to make a firewall exception.

Thanks.
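A quick way to check whether the server can reach the registry at all is to build the manifest URL that a pull would request. The registry host (`registry.ollama.ai`) and the `/v2/library/<model>/manifests/<tag>` path below are assumptions about Ollama's default public registry layout, not something confirmed in this thread:

```shell
# Construct the manifest URL a pull would request.
# registry.ollama.ai and the /v2/... path are assumptions
# (Ollama's default public registry layout).
MODEL="llama2"
TAG="latest"
MANIFEST_URL="https://registry.ollama.ai/v2/library/${MODEL}/manifests/${TAG}"
echo "$MANIFEST_URL"
# From the office server, reachability could then be tested with:
#   curl -fsSI "$MANIFEST_URL"
```

If the `curl` probe returns a 403 from inside the office network but not from outside, the block is almost certainly at the firewall or proxy level.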


@honggyukim commented on GitHub (Dec 19, 2023):

Otherwise, is there any way to manually download the models?

If possible, I can then put the models under `/usr/share/ollama/.ollama/models` manually.
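One approach that generally works for air-gapped setups is to pull the model on a machine with internet access and copy the models directory to the server. The `manifests/` and `blobs/` layout below is an assumption about Ollama's on-disk store; the temp directories just make the steps runnable anywhere as a dry run:

```shell
# Sketch, assuming ollama stores pulled models in a models/ directory
# with manifests/ and blobs/ subdirectories. Simulated with temp dirs.
SRC=$(mktemp -d)    # stands in for ~/.ollama/models on an online machine
DEST=$(mktemp -d)   # stands in for /usr/share/ollama/.ollama/models
TARBALL=$(mktemp)

# Fake a pulled model so the copy steps can run anywhere.
mkdir -p "$SRC/manifests/registry.ollama.ai/library/llama2" "$SRC/blobs"
echo '{}' > "$SRC/manifests/registry.ollama.ai/library/llama2/latest"
touch "$SRC/blobs/sha256-dummy"

# Archive BOTH subdirectories; the manifest references blobs by digest,
# so copying manifests/ without blobs/ leaves a broken model.
tar -C "$SRC" -cf "$TARBALL" manifests blobs
tar -C "$DEST" -xf "$TARBALL"
ls "$DEST/blobs"
```

On a real transfer, replace `SRC` and `DEST` with the actual paths and move the tarball via whatever medium crosses the air gap.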


@ghost commented on GitHub (Dec 19, 2023):

I have the same problem (also trying to deploy in the office). We have a proxy installed. Does ollama use proxy settings when downloading a model?


@mxyng commented on GitHub (Dec 19, 2023):

Ollama uses your system proxy settings if they're set. See the [FAQ](https://github.com/jmorganca/ollama/blob/main/docs/faq.md#how-do-i-use-ollama-behind-a-proxy) for more details.
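Assuming the Linux install script set Ollama up as a systemd service, the proxy can be set on the service itself via a drop-in override. The drop-in file name and the proxy address below are placeholders, not values from this thread:

```shell
# Hypothetical systemd drop-in for the ollama service;
# proxy.example.com:8080 is a placeholder for your office proxy.
sudo mkdir -p /etc/systemd/system/ollama.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/ollama.service.d/proxy.conf
[Service]
Environment="HTTPS_PROXY=http://proxy.example.com:8080"
EOF
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

A drop-in is preferable to editing the unit file directly because it survives package upgrades.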


@honggyukim commented on GitHub (Dec 19, 2023):

Thanks for the reply, but I haven't configured `HTTP_PROXY` or `HTTPS_PROXY`.

My question is: where does ollama download pre-trained models such as llama2 from?


Reference: github-starred/ollama#62923