[GH-ISSUE #915] Cannot download models behind a proxy #46958

Closed
opened 2026-04-28 02:17:04 -05:00 by GiteaMirror · 14 comments
Owner

Originally created by @beettlle on GitHub (Oct 26, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/915

Originally assigned to: @mxyng on GitHub.

Seems like #769 doesn't catch all the corner cases when users are behind a proxy. Both @reactivetype and I can reproduce in `0.1.3` and `0.1.5`.

```
$ ollama -v
ollama version 0.1.5
$ ollama pull llama2
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama2/manifests/latest": dial tcp: lookup registry.ollama.ai on xxx.xxx.xxx.xxx:53: read udp xxx.xxx.xxx.xxx:49613->xxxx.xxx.xxx.xxx:53: i/o timeout
$ curl https://registry.ollama.ai/v2/library/llama2/manifests/latest
{"errors":[{"code":"MANIFEST_INVALID","message":"manifest invalid","detail":{}}]}
```

@mxyng commented on GitHub (Oct 26, 2023):

Without divulging private information, can you describe your proxy configurations? e.g. what kind of proxy, how it's set, authentication, etc. It will help immensely with fixing this issue


@beettlle commented on GitHub (Oct 27, 2023):

Sadly I don't know either; it's a black box managed by IT. I do have to set `http_proxy` and `https_proxy` in my shell to get connectivity. My Go is very poor, so I could be wrong, but I don't see anything in [client.go](https://github.com/jmorganca/ollama/blob/6d283882b16673e42dbe3c068f65271df010de77/api/client.go#L42) where these environment variables are taken into account.

If that's not it, I also see that UDP is mentioned in the error message. I can't quite find why UDP is being used, but I have a feeling that all UDP traffic is blocked.


@mxyng commented on GitHub (Oct 27, 2023):

Thanks for the response. client.go does use the `HTTP_PROXY` and `HTTPS_PROXY` environment variables through [`http.ProxyFromEnvironment`](https://github.com/jmorganca/ollama/blob/6d283882b16673e42dbe3c068f65271df010de77/api/client.go#L80) and sets the proxy URL in the [`http.Client.Transport`](https://github.com/jmorganca/ollama/blob/6d283882b16673e42dbe3c068f65271df010de77/api/client.go#L87).


@beettlle commented on GitHub (Oct 27, 2023):

Good to know. Mine were in lowercase so I added the uppercase version but it still doesn't work.

In case it helps, I changed the IP values to show when they are the same and when they are different.

```
$ ollama pull llama2
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama2/manifests/latest": dial tcp: lookup registry.ollama.ai on xxx.xxx.xxx.xxx:53: read udp yyy.yyy.yyy.yyy:58915->xxx.xxx.xxx.xxx:53: i/o timeout
$ env | grep -i http
https_proxy=http://zzz.zzz.zzz.zzz:3128
HTTPS_PROXY=http://zzz.zzz.zzz.zzz:3128
HTTP_PROXY=http://zzz.zzz.zzz.zzz:3128
http_proxy=http://zzz.zzz.zzz.zzz:3128
```

@daaniyaan commented on GitHub (Oct 29, 2023):

Same issue: #689, #676


@mxyng commented on GitHub (Oct 30, 2023):

> Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama2/manifests/latest": dial tcp: lookup registry.ollama.ai on xxx.xxx.xxx.xxx:53: read udp yyy.yyy.yyy.yyy:58915->xxx.xxx.xxx.xxx:53: i/o timeout

Does the proxy server have DNS, and can it resolve registry.ollama.ai? A local test using mitmproxy works mostly as expected with just `https_proxy` or `HTTPS_PROXY` set. I ran into similar error messages when the mitmproxy container had a bad networking configuration and couldn't resolve the ollama.ai hostname.


@beettlle commented on GitHub (Nov 2, 2023):

Interestingly, while `dig` doesn't resolve the domain, I can reach `registry.ollama.ai` with cURL, but I'm still getting the error with the ollama CLI. Do you know if the Go library you are using does something different than cURL?

```
$ dig registry.ollama.ai
;; communications error to 172.31.160.1#53: timed out
$ curl -I https://registry.ollama.ai
HTTP/1.1 200 Connection established

HTTP/2 404 
content-type: text/plain; charset=utf-8
date: Thu, 02 Nov 2023 15:07:14 GMT
content-length: 9
via: 1.1 google
alt-svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
$ curl -s https://registry.ollama.ai | grep -i '<title>'
    <title>Ollama</title>
$ ollama pull llama2
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama2/manifests/latest": dial tcp: lookup registry.ollama.ai on 172.31.160.1:53: read udp 172.31.167.180:35379->172.31.160.1:53: i/o timeout
```

@dezoito commented on GitHub (Nov 6, 2023):

Having the same proxy issues after installing locally.
I suspect the dockerized container had issues behind the corporate proxy, but it only displayed a generic error message.

It would be cool if the `ollama run` and `ollama pull` commands read `http_proxy` (and related vars) from the environment, or if the proxy could be set via a CLI arg or a config file.


@bblease commented on GitHub (Nov 14, 2023):

I am also having this issue. It's specifically `HTTP_PROXY`. If it's set, I get a "Something went wrong, please see the ollama server logs for more information" error.


@dezoito commented on GitHub (Nov 14, 2023):

Rebooting my VM and running the install script again solved the problem for some reason.


@bblease commented on GitHub (Nov 16, 2023):

I'm running it in a docker container, with no luck. It works with the regular install script. However, I need it in Docker.


@mxyng commented on GitHub (Nov 17, 2023):

Ollama works with forward proxies when configured with `*_PROXY`. Here's the FAQ on how to set it up: https://github.com/jmorganca/ollama/blob/main/docs/faq.md#how-do-i-use-ollama-behind-a-proxy
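Since the pull is performed by the ollama server process (the CLI only talks to it over the local API), the proxy variables must be visible to that process, not just to the interactive shell. A sketch of the two common setups described in the FAQ, with a placeholder proxy address:

```shell
# systemd install (the Linux install script sets the service up this way):
sudo systemctl edit ollama.service
# then add in the override file:
#   [Service]
#   Environment="HTTPS_PROXY=http://proxy.example.com:3128"
sudo systemctl daemon-reload
sudo systemctl restart ollama

# Docker: pass the variable into the container at run time:
docker run -d -e HTTPS_PROXY=http://proxy.example.com:3128 \
  -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```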


@lfoppiano commented on GitHub (Dec 6, 2023):

I'm following the FAQ and other information, and it seems there is still some trouble when behind a proxy:

```
luca@wanda:~$ HTTPS_PROXY=http://proxyout.nims.go.jp:8888 ollama pull llama2:70b-chat
pulling manifest 
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama2/manifests/70b-chat": read tcp 144.213.176.192:39158->34.120.132.20:443: read: connection reset by peer
```

The proxy is used correctly by wget and other commands, so I'm not sure where to start investigating this 😭


@applepieiris commented on GitHub (Apr 3, 2024):

Please refer to this link: [issue #1859](https://github.com/ollama/ollama/issues/1859#issuecomment-1989375455)
I solved this on my server (a company network where a proxy must be set to download models).


Reference: github-starred/ollama#46958