[GH-ISSUE #8632] Downloading a model with ollama pull or ollama run stalls #5590

Closed
opened 2026-04-12 16:51:26 -05:00 by GiteaMirror · 90 comments
Owner

Originally created by @arjunivor on GitHub (Jan 28, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8632

Originally assigned to: @bmizerany on GitHub.

What is the issue?

While trying to run `ollama run deepseek-r1:7b` it repeatedly fails at 6%. I tried to run llama 3.2 and it downloaded flawlessly, but every time I try to run deepseek I get an error saying `error max retries exceeded: EOF`

OS

WSL2

GPU

Nvidia

CPU

AMD

Ollama version

latest

GiteaMirror added the networking, bug labels 2026-04-12 16:51:26 -05:00

@efe3535 commented on GitHub (Jan 28, 2025):

![Image](https://github.com/user-attachments/assets/d2ffac6a-84d6-4dde-ba81-95de7ba699b3)

I have the same problem.


@rick-github commented on GitHub (Jan 28, 2025):

https://github.com/ollama/ollama/issues/8535#issuecomment-2613241807


@epicwhale commented on GitHub (Jan 28, 2025):

having the same issue!


@ajayjoshioutdosolutions commented on GitHub (Jan 28, 2025):

Issue with Ollama

ollama pull deepseek-r1:8b
pulling manifest
pulling 6340dc3229b0... 0% ▕ ▏ 0 B/4.9 GB
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/63/6340dc3229b0d08ea9cc49b75d4098702983e17b4c096d57afbbf2ffc813f2be/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20250128%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20250128T135124Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=df1750b12731ec798303d375a9b75e4873a5ad7ea5c66aafc4e89cf29cd13cc7": dial tcp: lookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com on 127.0.0.53:53: server misbehaving


@rick-github commented on GitHub (Jan 28, 2025):

lookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com on 127.0.0.53:53: server misbehaving

Not an issue with ollama. DNS server is acting up. What's the result of

nslookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com

@efe3535 commented on GitHub (Jan 28, 2025):

Server:		1.1.1.1
Address:	1.1.1.1#53

Non-authoritative answer:
Name:	dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com
Address: 172.66.1.46
Name:	dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com
Address: 162.159.141.50

@rick-github commented on GitHub (Jan 28, 2025):

What's the result of

curl -sL https://registry.ollama.ai/v2/library/deepseek-r1/manifests/7b
curl -sL https://registry.ollama.ai/v2/library/deepseek-r1/blobs/sha256:f4d24e9138dd4603380add165d2b0d970bef471fac194b436ebd50e6147c6588


@epicwhale commented on GitHub (Jan 28, 2025):

❯ nslookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com

Server:         ::1
Address:        ::1#53

Non-authoritative answer:
Name:   dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com
Address: 172.66.1.46
Name:   dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com
Address: 162.159.141.50
❯ curl -sL https://registry.ollama.ai/v2/library/deepseek-r1/manifests/7b

{"schemaVersion":2,"mediaType":"application/vnd.docker.distribution.manifest.v2+json","config":{"mediaType":"application/vnd.docker.container.image.v1+json","digest":"sha256:40fb844194b25e429204e5163fb379ab462978a262b86aadd73d8944445c09fd","size":487},"layers":[{"mediaType":"application/vnd.ollama.image.model","digest":"sha256:96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49","size":4683073184},{"mediaType":"application/vnd.ollama.image.template","digest":"sha256:369ca498f347f710d068cbb38bf0b8692dd3fa30f30ca2ff755e211c94768150","size":387},{"mediaType":"application/vnd.ollama.image.license","digest":"sha256:6e4c38e1172f42fdbff13edf9a7a017679fb82b0fde415a3e8b3c31c6ed4a4e4","size":1065},{"mediaType":"application/vnd.ollama.image.params","digest":"sha256:f4d24e9138dd4603380add165d2b0d970bef471fac194b436ebd50e6147c6588","size":148}]}%                                                         


❯ curl -sL https://registry.ollama.ai/v2/library/deepseek-r1/blobs/sha256:f4d24e9138dd4603380add165d2b0d970bef471fac194b436ebd50e6147c6588

{"stop":["\u003c|begin▁of▁sentence|\u003e","\u003c|end▁of▁sentence|\u003e","\u003c|User|\u003e","\u003c|Assistant|\u003e"]}

@rick-github commented on GitHub (Jan 28, 2025):

@epicwhale And now output of

curl --head -v https://registry.ollama.ai/v2/library/deepseek-r1/blobs/sha256:96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49


@epicwhale commented on GitHub (Jan 28, 2025):

❯ curl --head -v https://registry.ollama.ai/v2/library/deepseek-r1/blobs/sha256:96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49

* Host registry.ollama.ai:443 was resolved.
* IPv6: 2606:4700:3036::6815:4be3, 2606:4700:3034::ac43:b6e5
* IPv4: 172.67.182.229, 104.21.75.227
*   Trying 172.67.182.229:443...
* Connected to registry.ollama.ai (172.67.182.229) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
*  CAfile: /etc/ssl/cert.pem
*  CApath: none
* (304) (IN), TLS handshake, Server hello (2):
* (304) (IN), TLS handshake, Unknown (8):
* (304) (IN), TLS handshake, Certificate (11):
* (304) (IN), TLS handshake, CERT verify (15):
* (304) (IN), TLS handshake, Finished (20):
* (304) (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
*  subject: CN=ollama.ai
*  start date: Dec 10 01:25:27 2024 GMT
*  expire date: Mar 10 01:25:26 2025 GMT
*  subjectAltName: host "registry.ollama.ai" matched cert's "*.ollama.ai"
*  issuer: C=US; O=Google Trust Services; CN=WE1
*  SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://registry.ollama.ai/v2/library/deepseek-r1/blobs/sha256:96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49
* [HTTP/2] [1] [:method: HEAD]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: registry.ollama.ai]
* [HTTP/2] [1] [:path: /v2/library/deepseek-r1/blobs/sha256:96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
> HEAD /v2/library/deepseek-r1/blobs/sha256:96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 HTTP/2
> Host: registry.ollama.ai
> User-Agent: curl/8.7.1
> Accept: */*
>
* Request completely sent off
< HTTP/2 200
HTTP/2 200
< date: Tue, 28 Jan 2025 14:27:57 GMT
date: Tue, 28 Jan 2025 14:27:57 GMT
< content-length: 4683073184
content-length: 4683073184
< via: 1.1 google
via: 1.1 google
< alt-svc: h3=":443"; ma=86400
alt-svc: h3=":443"; ma=86400
< cf-cache-status: DYNAMIC
cf-cache-status: DYNAMIC
< report-to: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=MJTQqt7bhwvfc%2BgmfXz62I6G7csQ1bUEYQ4b7Bw25R8Hu%2FFSUywuJacfM3%2FADfM87H0t%2BqkGKNg%2BfyFNPxS5GcaK6abimZU%2F5wq9oCbf3NX40JmfvmlaHnaN89MUtk7Uy%2F%2BSyb8%3D"}],"group":"cf-nel","max_age":604800}
report-to: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=MJTQqt7bhwvfc%2BgmfXz62I6G7csQ1bUEYQ4b7Bw25R8Hu%2FFSUywuJacfM3%2FADfM87H0t%2BqkGKNg%2BfyFNPxS5GcaK6abimZU%2F5wq9oCbf3NX40JmfvmlaHnaN89MUtk7Uy%2F%2BSyb8%3D"}],"group":"cf-nel","max_age":604800}
< nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
< server: cloudflare
server: cloudflare
< cf-ray: 9091aa89487548e6-BOM
cf-ray: 9091aa89487548e6-BOM
< server-timing: cfL4;desc="?proto=TCP&rtt=29237&min_rtt=26662&rtt_var=7807&sent=6&recv=11&lost=0&retrans=0&sent_bytes=2882&recv_bytes=651&delivery_rate=104117&cwnd=252&unsent_bytes=0&cid=a2ed43469571cd3b&ts=434&x=0"
server-timing: cfL4;desc="?proto=TCP&rtt=29237&min_rtt=26662&rtt_var=7807&sent=6&recv=11&lost=0&retrans=0&sent_bytes=2882&recv_bytes=651&delivery_rate=104117&cwnd=252&unsent_bytes=0&cid=a2ed43469571cd3b&ts=434&x=0"
<

* Connection #0 to host registry.ollama.ai left intact
* 

@rick-github commented on GitHub (Jan 28, 2025):

OK, not the expected output. What happens when you run

ollama pull deepseek-r1:7b

@blackhaj commented on GitHub (Jan 28, 2025):

I am not @epicwhale but I am getting the same issues (and similar outputs). When I run `ollama pull deepseek-r1:7b` I get the same experience:

![Image](https://github.com/user-attachments/assets/b26163b6-32c9-482a-8398-d914587c1e52)


@rick-github commented on GitHub (Jan 28, 2025):

Something is broken at Cloudflare.

<?xml version="1.0" encoding="UTF-8"?>
  <Error>
    <Code>ServiceUnavailable</Code>
    <Message>Please look at https://www.cloudflarestatus.com for issues or contact customer support.</Message>
  </Error>

[![Image](https://github.com/user-attachments/assets/029f96d5-765c-456a-8deb-f92df127c725)](https://www.cloudflarestatus.com/)


@aidanxyz commented on GitHub (Jan 28, 2025):

Strange that it fails on deepseek models and not on others.


@epicwhale commented on GitHub (Jan 28, 2025):

OK, not the expected output. What happens when you run

ollama pull deepseek-r1:7b
❯ ollama pull deepseek-r1:7b
pulling manifest
pulling 96c415656d37...   0% ▕                                                                                                                        ▏    0 B/4.7 GB
Error: max retries exceeded: EOF

@aidanxyz commented on GitHub (Jan 28, 2025):

Are there alternative sources from which the model can be downloaded and plugged into ollama?


@rick-github commented on GitHub (Jan 28, 2025):

Strange that it fails on deepseek models and not on others.

I think R2 in the Cloudflare CDN is distributed storage; some backend hosting a chunk of the GGUF file is acting up.

Are there an alternative sources where the model can be downloaded and plugged into ollama?

ollama pull hf.co/DevQuasar/deepseek-ai.DeepSeek-R1-Distill-Qwen-7B-GGUF:Q4_K_M

It's not quite the same in that the GGUF file has a different hash and the template has FIM processing, but if you can't wait for Cloudflare to get its act together, it's better than nothing.


@ChaosCom commented on GitHub (Jan 28, 2025):

Hey @rick-github, one of the affected here.
If this is an R2-related Cloudflare issue, is it possible to mitigate it by doing what CF suggests?

AWS recently updated their SDKs to enable CRC32 checksums on multiple object operations by default. R2 does not currently support CRC32 checksums, and the default configurations will return header related errors such as Header 'x-amz-checksum-algorithm' with value 'CRC32' not implemented. Impacted users can either pin AWS SDKs to a prior version or modify the configuration to restore the prior default behavior of not checking checksums on upload.


@aidanxyz commented on GitHub (Jan 28, 2025):

The 8b version works: `ollama run deepseek-r1:8b`


@ChaosCom commented on GitHub (Jan 28, 2025):

The 8b version is based on llama, the 7b version on qwen2 - IMO completely different architectures specializing in different things (qwen2 is code-focused).

What I also noticed: investigating the "blobs" download directory, due to the incomplete download there's a lot of state tracking going on:
sha256-<...HASH>-partial-0
sha256-<...HASH>-partial-1
...

These track how much of each respective chunk has been downloaded. For me, the 0th chunk is
{"N":0,"Offset":0,"Size":292692074,"Completed":0}
whereas the 6th chunk is
{"N":6,"Offset":1756152444,"Size":292692074,"Completed":292692074}

This means that the system ollama uses under the hood for downloads is not downloading things in sequence for deepseek-r1:7b. Is this normal behavior?
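Those partial state files also make it easy to see overall progress. Below is a small sketch that sums them up; it assumes the sha256-&lt;hash&gt;-partial-N files sit in the blobs directory and each contain a single JSON object with Size and Completed fields as shown above (the function name and layout are illustrative, not from Ollama's source):

```python
import json
from pathlib import Path

def chunk_progress(blobs_dir: str, digest: str) -> tuple[int, int]:
    """Sum Completed/Size over all sha256-<digest>-partial-N state files."""
    done = total = 0
    for part in sorted(Path(blobs_dir).glob(f"sha256-{digest}-partial-*")):
        state = json.loads(part.read_text())
        total += state["Size"]
        done += state["Completed"]
    return done, total
```

For the two chunks quoted above this would report 292692074 of 585384148 bytes done.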


@rick-github commented on GitHub (Jan 28, 2025):

This means that the system ollama uses under the hood for downloads is not downloading things in sequence for deepseek-r1:7b. Is this normal behavior?

Yes. A download is split into a number of chunks, and the chunks are downloaded asynchronously. The partials are deleted either when the download completes or when the server is restarted.
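As a rough illustration of that chunking (a sketch only, not Ollama's actual code): with fixed-size chunks, the Offset of chunk N is just N times the chunk size, which matches the partial files quoted earlier in the thread - chunk 6's offset 1756152444 is exactly 6 × 292692074.

```python
def plan_chunks(blob_size: int, chunk_size: int) -> list[dict]:
    """Split a blob into fixed-size chunks; the last one may be shorter."""
    chunks, offset, n = [], 0, 0
    while offset < blob_size:
        size = min(chunk_size, blob_size - offset)
        # Mirrors the fields seen in the partial state files.
        chunks.append({"N": n, "Offset": offset, "Size": size, "Completed": 0})
        offset += size
        n += 1
    return chunks
```

With the deepseek-r1:7b blob size from the manifest (4683073184 bytes) and the observed chunk size, this happens to yield exactly 16 chunks.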


@rick-github commented on GitHub (Jan 28, 2025):

deepseek-r1:7b: magnet:?xt=urn:btih:2JDGTZ7JT7KM24GCEXQQGDK4S3HKN23B&dn=models&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce


@ChaosCom commented on GitHub (Jan 28, 2025):

deepseek-r1:7b: magnet:?xt=urn:btih:2JDGTZ7JT7KM24GCEXQQGDK4S3HKN23B&dn=models&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce

Thanks a bunch, the model now works. I'll keep the seed running today too, so people familiar with this sort of fiddling can get the model this way.


@AvtarSinghChundawat commented on GitHub (Jan 29, 2025):

deepseek-r1:7b: magnet:?xt=urn:btih:2JDGTZ7JT7KM24GCEXQQGDK4S3HKN23B&dn=models&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce

How do I use this, sir?


@rick-github commented on GitHub (Jan 29, 2025):

Use a torrent client to download the model, then move the files to OLLAMA_MODELS (see https://github.com/ollama/ollama/blob/main/docs/faq.md#where-are-models-stored).


@Paras1209 commented on GitHub (Jan 29, 2025):

Try changing your DNS server. You can change it as follows:

To change your DNS settings:

  1. Open the Control Panel and go to Network and Sharing Center.

  2. Click on Change adapter settings.

  3. Right-click on your active network connection and select Properties.

  4. Select Internet Protocol Version 4 (TCP/IPv4) and click Properties.

  5. Choose Use the following DNS server addresses and enter:
    Preferred DNS server: 8.8.8.8 (Google DNS)
    Alternate DNS server: 1.1.1.1 (Cloudflare DNS)

  6. Click OK to save the changes.


@rabadur503 commented on GitHub (Jan 29, 2025):

so is there a solution to this problem or not?


@rick-github commented on GitHub (Jan 29, 2025):

The solution is for Cloudflare to fix their CDN. The workaround is to switch DNS servers or get the model from a different source.


@Paras1209 commented on GitHub (Jan 29, 2025):

For everyone who is confused, I would like to make clear that the download problem is due to the DNS server. I already mentioned the steps to solve it in my comment above. For everyone's convenience, here are the steps again:

To change your DNS settings:

  1. Open the Control Panel and go to Network and Sharing Center.
  2. Click on Change adapter settings.
  3. Right-click on your active network connection and select Properties.
  4. Select Internet Protocol Version 4 (TCP/IPv4) and click Properties.
  5. Choose Use the following DNS server addresses and enter:

    Preferred DNS server: 8.8.8.8 (Google DNS)
    Alternate DNS server: 1.1.1.1 (Cloudflare DNS)

  6. Click OK to save the changes.

@Paras1209 commented on GitHub (Jan 29, 2025):

You can solve this problem by changing your DNS server from the Control Panel. Steps for the same are mentioned in my comment above. For your convenience, I'll repeat them here:

To change your DNS settings:

  1. Open the Control Panel and go to Network and Sharing Center.
  2. Click on Change adapter settings.
  3. Right-click on your active network connection and select Properties.
  4. Select Internet Protocol Version 4 (TCP/IPv4) and click Properties.
  5. Choose Use the following DNS server addresses and enter:
    a. Preferred DNS server: 8.8.8.8 (Google DNS)
    b. Alternate DNS server: 1.1.1.1 (Cloudflare DNS)
  6. Click OK to save the changes.



@rick-github commented on GitHub (Jan 29, 2025):

I suspect that Cloudflare has given dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com a new IP address, but they have quite a long timeout value in the SOA, so switching DNS servers or flushing the local DNS cache is required to get the new one.
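A quick way to see which addresses the system resolver currently hands back (a minimal stdlib sketch; it goes through the normal resolver path, so it reflects any local cache rather than bypassing it):

```python
import socket

def resolve_all(host: str, port: int = 443) -> set[str]:
    """Return every address the system resolver gives for host."""
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    return {info[4][0] for info in infos}
```

If resolve_all("dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com") differs from what nslookup against 1.1.1.1 shows, the local cache is likely serving a stale record.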


@pdevine commented on GitHub (Jan 29, 2025):

Sorry about this, guys. We are looking at some ways to get more reliability out of Cloudflare.


@bmizerany commented on GitHub (Jan 30, 2025):

This appears to be fixed now. Closing. Please open a new ticket if the issue comes back and we'll look into it.


@seanmavley commented on GitHub (Jan 30, 2025):

Still appears to be an issue somewhere and not just DNS

WSL
Ollama 0.5.7

Starts the download, gets to about 30%, then out of nowhere the percentage drops to about 10%

Not sure what's going on, but that's just weird. Seems to be going back and forth and does this repeatedly.

Downloads to a point, rolls itself back somehow, goes forward, rolls back, all the while consuming data.

These models are big, and for those of us not on unlimited Internet plans, the wasted data adds up in costs real quick.


@pdevine commented on GitHub (Jan 30, 2025):

@seanmavley What is your location and what kind of net connection are you using?


@seanmavley commented on GitHub (Jan 30, 2025):

@pdevine
In Ghana, on MTN, connected via cable to a modem on 4G.

By the way, I've never had issues on same network downloading models via command line.

Current DNS is scancom (MTN) with fast.com saying network is 10Mbps


@rick-github commented on GitHub (Jan 30, 2025):

This is not the DNS problem, this is the stalling problem.


@seanmavley commented on GitHub (Jan 30, 2025):

@rick-github aah I see now
https://github.com/ollama/ollama/issues/8484

Stalling issue in my case then. Will follow updates on the other issue. Thanks.


@pdevine commented on GitHub (Jan 30, 2025):

@seanmavley I think you're unfortunately getting corrupted packets and the download is checking the file and seeing that it's incorrect and throwing out that chunk.


@rick-github commented on GitHub (Jan 30, 2025):

Verify by checking server log.


@sabbirsam commented on GitHub (Jan 30, 2025):

It keeps falling back to 15–18% after reaching 20%.
ollama run deepseek-r1:8b

Image


@rick-github commented on GitHub (Jan 30, 2025):

Server log would show whether it's a stall, a corrupt packet, or some other problem.

In the meantime, you can work around it by killing the download every 10 seconds with this script: https://github.com/ollama/ollama/issues/8484#issuecomment-2623757960
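For Linux/macOS users, the same workaround can be sketched as a small POSIX-shell loop (a hypothetical helper, not the linked script itself; `ollama pull deepseek-r1:7b` is just an example command, and the 10-second slice matches the suggestion above):

```shell
#!/bin/sh
# Run a command in short time slices so a stalled transfer is killed and
# restarted; ollama resumes a pull from its last completed chunk.
pull_with_retries() {
    cmd=$1; max=$2; slice=${3:-10}; n=0
    while [ "$n" -lt "$max" ]; do
        if timeout "$slice" sh -c "$cmd"; then
            echo "done after $n restarts"
            return 0
        fi
        n=$((n + 1))
    done
    echo "gave up after $max attempts"
    return 1
}

# Example (hypothetical): pull_with_retries "ollama pull deepseek-r1:7b" 100
```

This relies on GNU coreutils `timeout`; on macOS it is available via Homebrew's coreutils package.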


@ERICK-ZABALA commented on GitHub (Feb 1, 2025):

error to download deepseek-r1
C:>ollama run deepseek-r1
pulling manifest
pulling 96c415656d37... 0% ▕ ▏ 2.6 MB/4.7 GB
Error: max retries exceeded: Get "96c415656d/data": net/http: TLS handshake timeout


@yashwanth2706 commented on GitHub (Feb 2, 2025):

I tried several times to download, but ollama keeps failing even though I have a good internet connection

It just restarts the download after downloading more than 5%

https://github.com/user-attachments/assets/ad8cf855-f85b-4ec0-87a0-f1e99d037b12


@yashwanth2706 commented on GitHub (Feb 2, 2025):

OK, not the expected output. What happens when you run

ollama pull deepseek-r1:7b

Even if I use ollama pull deepseek-r1:7b, the issue still persists


@rick-github commented on GitHub (Feb 2, 2025):

The Cloudflare CDN is having problems. The ollama team don't seem to be interested in fixing the problem. If you know how to use a torrent client, you can get deepseek-r1:7b from here:

magnet:?xt=urn:btih:a43e5b893b14f6c3dc78678e766101eeb7ca10c1&dn=models&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce


@jmorganca commented on GitHub (Feb 2, 2025):

Re-opening this to track the stalling issue. Sorry for all the problems and @rick-github for helping debug - we're definitely interested in fixing this, and are working with Cloudflare to resolve issues while we also make changes to Ollama's downloader for reliability


@rama-bin commented on GitHub (Feb 2, 2025):

Wondering if this issue is related. Can't pull anything from ollama today (x509: negative serial number) -

○ → ollama pull nomic-embed-text
pulling manifest
pulling manifest
pulling 970aa74c0a90... 0% ▕ ▏ 0 B/274 MB
Error: max retries exceeded: Get "970aa74c0a/data": tls: failed to parse certificate from server: x509: negative serial number


@rick-github commented on GitHub (Feb 2, 2025):

Another problem with the Cloudflare CDN. dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com has been an issue for weeks.


@seanmavley commented on GitHub (Feb 2, 2025):

<wearing-tinfoil-hat>is the US gov through cloud flare tryna sabotage the ease of use of Deepseek by the average person, a china-originating model? Wiping half a trillion, no matter how brief that was must count for something, I guess.</wearing-tinfoil-hat>

I mean, ollama via Cloudflare has been working for months for every model, no issues.

Deepseek arrives and all of a sudden Cloudflare doesn't work as expected for Deepseek in particular. Hmmm 🤔


@rick-github commented on GitHub (Feb 2, 2025):

Not unless they have a time machine; dd20bb891979d25aebc8bec07b2b3bbc has been a problem on and off since 2023. It's just gotten really bad lately, maybe due to the increased interest in using ollama.


@pdevine commented on GitHub (Feb 3, 2025):

... maybe due to the increased interest in using ollama.

I think we're definitely putting a lot of pressure on CF right now. Deepseek alone peaked at over 1 million pulls/day, and that's not including the rest of the models. As @jmorganca mentioned, we are looking at a bunch of improvements here; just trying to figure out what we can do short term vs. long term.


@gitdexgit commented on GitHub (Feb 3, 2025):

lookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com on 127.0.0.53:53: server misbehaving

Not an issue with ollama. DNS server is acting up. What's the result of

nslookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com

thank you so much... I did nslookup... and it gave me unknown and a timeout... yeah... my DNS server was the problem... now ollama pull deepseek-r1:7b works, I believe... it also started from 27%, not from 0... maybe the .ps1 script that guy gave actually was working and doing the job

Image

I'm not sure, but I think if you set the number of retries to something like 500 if you have a shaky internet connection, it should keep going and going, stopping where it was and then continuing until it's finished... right... ||OR|| change your DNS server... maybe try 1.1.1.1 (Cloudflare primary) and 8.8.8.8 (Google's DNS)... as I'm typing it's now 56%... seems to be working... try changing your DNS server, then download, then change it back to what you used before

Image

if it doesn't work... here is the .ps1 script I was talking about. It's from this GitHub; you can download it or read the .ps1 to make sure it's safe... here it is below

it's a simple PowerShell script... just set the model you want in $ollamaCommand = "ollama pull deepseek-r1:7b"... and the retries in $maxRetries = 100... idk about Start-Sleep -Seconds 60, I just left it as is tbh... I think leave it and it should run... I believe till it says complete... even if it doesn't show a progress bar... it's downloading... right... so yeah... this is the 2nd method if DNS doesn't work

!!!! IF YOU DON'T KNOW POWERSHELL JUST COPY PASTE IT AND ASK AI WHAT IT DOES !!!!

here is the .ps1 script

# Script: olpull.ps1
# Author: dazistgut
# Date: 25/01/25
#
# Description:
# This script addresses the issue where Ollama deletes progress during model downloads 
# if connectivity is lost for a brief period (e.g., 5 seconds). It automates the process 
# by repeatedly interrupting and restarting the download, ensuring progress is retained.
#
# Steps to Use:
# 1. Update the `$ollamaCommand` variable with the model you want to pull.
# 2. Set the desired number of retries in `$maxRetries`.
# 3. Adjust the sleep duration (in seconds) to control the timeout period.
# 4. Save and run the script using: `./olpull.ps1`.
#
# Notes:
# - Ensure Ollama is installed and available in your system PATH.
# - The script will stop automatically once the model is fully downloaded 
#   or the maximum number of retries is reached.
# - Logs will indicate progress and any issues encountered during execution.
#
# Example Usage:
# ./olpull.ps1


$ollamaCommand = "ollama pull deepseek-r1:7b"
$maxRetries = 100
$retryCount = 0
# NOTE: ollama stores models as blobs plus a manifest, not as a single
# "deepseek-r1:7b" file; the manifest below only appears once the pull completes.
$modelPath = "$env:USERPROFILE\.ollama\models\manifests\registry.ollama.ai\library\deepseek-r1\7b"

Write-Host "Starting Ollama pull script. Press Ctrl+C to stop manually."

while ($retryCount -lt $maxRetries) {
    Write-Host "$(Get-Date -Format 'HH:mm:ss'): Attempt #$($retryCount + 1)/$($maxRetries): Running the command..."

    # Start the job
    $job = Start-Job -ScriptBlock {
        param ($command)
        Invoke-Expression $command
    } -ArgumentList $ollamaCommand

    # Wait for a specific duration to allow progress
    Start-Sleep -Seconds 60

    # Stop the job if it's still running
    if (Get-Job -Id $job.Id | Where-Object { $_.State -eq "Running" }) {
        Write-Host "$(Get-Date -Format 'HH:mm:ss'): Stopping the job..."
        Stop-Job -Id $job.Id

        # Wait to ensure the job is stopped
        Start-Sleep -Milliseconds 500

        # Ensure the job is removed
        if (Get-Job -Id $job.Id) {
            Remove-Job -Id $job.Id
        }
    }

    # Check if the model has been downloaded
    if (Test-Path $modelPath) {
        Write-Host "$(Get-Date -Format 'HH:mm:ss'): Model download complete!"
        break
    }
	
    Write-Host "$(Get-Date -Format 'HH:mm:ss'): Model not yet complete.."
    # Increment the retry count and move to the next attempt
    $retryCount++
}

# Final message if retries are exhausted
if ($retryCount -ge $maxRetries) {
    Write-Host "$(Get-Date -Format 'HH:mm:ss'): Maximum retries reached. Exiting script."
}

@gitdexgit commented on GitHub (Feb 3, 2025):

The Cloudfare CDN is having problems. The ollama team don't seem to be interested in fixing the problem. If you know how to use a torrent client, you can get deepseek-r1:7b from here:

magnet:?xt=urn:btih:a43e5b893b14f6c3dc78678e766101eeb7ca10c1&dn=models&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce

also a 3rd method is to just download the model from a torrent... using BitComet or something... as shown above in the quote from the poster

download it, then go to your C:\Users\%A_UserName%\.ollama\models\manifests\registry.ollama.ai\library\deepseek-r1

and enter the deepseek-r1 folder and put it there, and it should show up in "ollama list" in your terminal

(create the folder if it's your first time using a deepseek-r1 model... I'm using 1.5b and that folder is what it created first hehhehe --> deepseek-r1)
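The layout described above can be sketched as a tiny Unix-style helper (an assumption based on the path in this comment; `manifest_path` is a hypothetical name, and the manifest file only exists once a pull has fully completed):

```shell
#!/bin/sh
# Build the path where ollama keeps the manifest for a library model.
manifest_path() {
    printf '%s/.ollama/models/manifests/registry.ollama.ai/library/%s/%s\n' \
        "$HOME" "$1" "$2"
}

# Example: check whether deepseek-r1:1.5b is installed locally.
if [ -f "$(manifest_path deepseek-r1 1.5b)" ]; then
    echo "installed"
else
    echo "not installed"
fi
```

On Windows the same tree lives under %USERPROFILE%\.ollama\models, as shown in the comment above.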


@gitdexgit commented on GitHub (Feb 3, 2025):

Image

Change DNS and do the pull

1.1.1.1
8.8.8.8


@rama-bin commented on GitHub (Feb 3, 2025):

Wondering if this issue is related. Can't pull anything from ollama today (x509: negative serial number) -

○ → ollama pull nomic-embed-text pulling manifest pulling manifest pulling 970aa74c0a90... 0% ▕ ▏ 0 B/274 MB Error: max retries exceeded: Get "970aa74c0a/data": tls: failed to parse certificate from server: x509: negative serial number

I was able to fix it by rolling back to an old ollama version (v0.1.34). For some reason, v0.5.7 is throwing "x509: negative serial number" error.


@Rudxain commented on GitHub (Feb 3, 2025):

Termux, built ad22ace439 from source:

./ollama pull deepseek-r1:1.5b
# 100% success

# `ollama serve` mysteriously stopped itself on the other tab,
# so I re-run it

./ollama run deepseek-r1:1.5b
# starts downloading from 0% 🤦

@rick-github commented on GitHub (Feb 3, 2025):

If the server quit during download it may be this problem: https://github.com/ollama/ollama/issues/8400. Server logs will confirm/deny.

When the ollama server starts, it does housekeeping which includes purging incomplete downloads. You can prevent this behaviour by setting OLLAMA_NOPRUNE=1.
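A minimal sketch of that workaround when starting the server by hand (an environment-variable fragment; adapt it to however you launch ollama, e.g. a systemd unit override):

```shell
# Keep partial blob downloads across restarts so an interrupted pull can
# resume instead of being pruned during startup housekeeping.
export OLLAMA_NOPRUNE=1
ollama serve
```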


@RohanSardar commented on GitHub (Feb 4, 2025):

I found out a solution, if you are on Windows:

  • Go to Settings
  • Network & internet
  • Currently connected wifi properties
  • Edit DNS Server assignment from Automatic(DHCP) to Manual
  • Toggle IPv4 ON
  • Use 1.1.1.1 on Preferred DNS and 1.0.0.1 on Alternate DNS
  • Similarly toggle IPv6 and use 2606:4700:4700::1111 as Preferred DNS and 2606:4700:4700::1001 as Alternate DNS

This worked in my case


@m-petra-fn commented on GitHub (Feb 4, 2025):

The following script fixed it for me:

https://www.andreagrandi.it/posts/how-to-workaround-ollama-pull-issues/


@rick-github commented on GitHub (Feb 4, 2025):

This script only works for the stalling problem. If the client has problems connecting to dd20bb891979d25aebc8bec07b2b3bbc it won't help. Anecdotally, changing DNS servers has helped in the latter case, see above


@Maltz42 commented on GitHub (Feb 5, 2025):

... maybe due to the increased interest in using ollama.

I think we're definitely putting a lot of pressure on CF right now. Deepseek alone peaked at over 1 million pulls/day, and that's not including the rest of the models. As @jmorganca mentioned we are looking at a bunch of improvements here; just trying to figure out what we can do short term vs. long term.

Probably exacerbated by ollama using 16 simultaneous download connections when you pull a file. There was even a pull request back in August to fix this (https://github.com/ollama/ollama/pull/5683), but it was closed, even though the behaviour is still causing problems. An ollama pull drives my gigabit fiber connection into the 10-15% packet loss range while it's running. It's a very aggressive/unfriendly app when it comes to network traffic.


@MSR2201 commented on GitHub (Feb 6, 2025):

C:\Users\sanke>ollama pull mxbai-embed-large
pulling manifest
pulling 819c2adf5ce6... 0% ▕ ▏ 0 B/669 MB
Error: max retries exceeded: Get "819c2adf5c/data": dial tcp: lookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com: no such host

it's not even starting to download the model for me


@MSR2201 commented on GitHub (Feb 6, 2025):

I found out a solution, if you are on Windows:

  • Go to Settings
  • Network & internet
  • Currently connected wifi properties
  • Edit DNS Server assignment from Automatic(DHCP) to Manual
  • Toggle IPv4 ON
  • Use 1.1.1.1 on Preferred DNS and 1.0.0.1 on Alternate DNS
  • Similarly toggle IPv6 and use 2606:4700:4700::1111 as Preferred DNS and 2606:4700:4700::1001 as Alternate DNS

This worked in my case

I tried this, but it didn't work for me.
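For reference, the OP is on WSL2, where the Windows GUI DNS steps above don't reach the Linux distro. A hedged sketch of the common WSL2 equivalent (pinning the same Cloudflare resolvers inside the distro; paths and settings are the usual ones, adjust per distro):

```shell
# Sketch of the usual WSL2 DNS fix (not an official ollama fix).
# 1. Stop WSL from regenerating /etc/resolv.conf on every boot:
cat <<'EOF' | sudo tee /etc/wsl.conf
[network]
generateResolvConf = false
EOF

# 2. Pin public resolvers (same Cloudflare servers as the Windows steps):
sudo rm -f /etc/resolv.conf
printf 'nameserver 1.1.1.1\nnameserver 1.0.0.1\n' | sudo tee /etc/resolv.conf

# 3. From Windows, restart the distro with: wsl --shutdown
```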


@meglio commented on GitHub (Feb 6, 2025):

I can no longer download any models. It stalls forever: the number of MB downloaded goes up, then drops back down, over and over. Sometimes, after about 10 minutes of trying, it ends with a TLS handshake timeout.


@rick-github commented on GitHub (Feb 6, 2025):

The hacky way around this is to run the downloader for a few seconds and then restart: https://github.com/ollama/ollama/issues/8484#issuecomment-2627410336. If you are on Linux/MacOS you can avoid the stall restart and download directly with https://github.com/ollama/ollama/issues/8535#issuecomment-2613241807.
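The restart workaround can be sketched as a small shell loop: kill the pull after a fixed window and start it again, relying on ollama resuming partially downloaded blobs. This is a community-style sketch, not an official ollama feature; the function name and the 60-second default are illustrative choices:

```shell
# Restart `ollama pull` periodically so each attempt begins at full speed;
# ollama resumes partial blob downloads, so progress accumulates across
# restarts. Function name and 60s window are illustrative, not from ollama.
pull_with_restarts() {
  local model="$1" window="${2:-60}"
  # `timeout` kills the pull after $window seconds; a non-zero exit
  # (stall or kill) triggers another attempt.
  until timeout "$window" ollama pull "$model"; do
    echo "pull interrupted or stalled; restarting..."
  done
}

# usage: pull_with_restarts deepseek-r1:7b 60
```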


@meglio commented on GitHub (Feb 6, 2025):

Is the bug reproducible and being fixed? It hasn't been working for more than a week; the app is just unusable at the moment.


@rick-github commented on GitHub (Feb 6, 2025):

https://github.com/ollama/ollama/pull/8831


@yashwanth2706 commented on GitHub (Feb 6, 2025):

Is the bug reproducible and being fixed? It hasn't been working for more than a week. Just unusable app atm.

This is being fixed. Previously, the download would restart after 5s with no packets received; that window has now been increased to 30s, and further optimization is in progress.


@QinCai-rui commented on GitHub (Feb 7, 2025):

Same here. Pulling a model smaller than 1 GB is mostly OK for me, but anything larger just 'reverses' the download.
https://github.com/ollama/ollama/issues/8280


@ajayjoshioutdosolutions commented on GitHub (Feb 7, 2025):

Issue with Ollama

ollama pull deepseek-r1:8b pulling manifest pulling 6340dc3229b0... 0% ▕ ▏ 0 B/4.9 GB Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/63/6340dc3229b0d08ea9cc49b75d4098702983e17b4c096d57afbbf2ffc813f2be/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20250128%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20250128T135124Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=df1750b12731ec798303d375a9b75e4873a5ad7ea5c66aafc4e89cf29cd13cc7": dial tcp: lookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com on 127.0.0.53:53: server misbehaving

This issue is no longer present after switching to Google DNS. It worked for me. @rick-github


@patillacode commented on GitHub (Feb 9, 2025):

It is still happening, I have tried both the timeout solution

Attempting to download model...
pulling manifest
pulling aabd4debf0c8...   0% ▕                                                ▏    0 B/1.1 GB
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/aa/aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20250208%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20250208T235539Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=4f7b26fcbf89a0e05fb230d42bfa2663cb29b18de14a38f5e1f3fe806e3b574e": dial tcp 162.159.141.50:443: i/o timeout
Download failed. Retrying...
Attempting to download model...
pulling manifest
pulling aabd4debf0c8...   0% ▕                                                ▏    0 B/1.1 GB

and the DNS solution without success.

Also, the magnet link above for downloading the 7b image has only 1 peer (and we might be stressing that connection out).
I am more interested in the 1.5B anyway, so I won't leech; I just wanted to try.

Is there a way we can P2P this?
Any updates from CF?

If I may be of service, I'm around.


@yashwanth2706 commented on GitHub (Feb 9, 2025):

@rick-github @jmorganca

Successfully Built Ollama from Source on a Virtual Machine & Ran DeepSeek-R1:7B

Description

I cloned the Ollama repository, built it from source, and it worked!

Environment Details

  • OS: Linux Mint on VirtualBox
  • Virtual Machine: VirtualBox
  • Go Version: go1.22.1 linux/amd64
  • Installed Model: deepseek-r1:7b

Image

Image

Current Ollama pre-release version: v0.5.8
https://github.com/ollama/ollama/releases/tag/v0.5.8-rc12

Let me know if there is any system or version information that would help rectify the current issue. Thanks!


@uripont commented on GitHub (Feb 9, 2025):

dial tcp XXX.XX.X.XX:XXX: i/o timeout, retrying

Same from Mac running macOS Sequoia 15.2 (24C101), won't even start pulling the model. No proxies, no VPN, not even Firewall. Neither with default WiFi settings, nor setting a different DNS (Preferred DNS server: 8.8.8.8 (Google DNS)
Alternate DNS server: 1.1.1.1 (Cloudflare DNS)).

Have tried on 2 different home networks from same ISP, none work.

It gets stuck here via CLI:

ollama pull phi4
pulling manifest 
pulling fd7b6731c33c...   0% ▕                                                           ▏    0 B/9.1 GB

Inspecting server logs using cat ~/.ollama/logs/server.log:

(...)
time=2025-02-09T09:58:07.969+01:00 level=INFO source=download.go:291 msg="fd7b6731c33c part 2 attempt 5 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/fd/fd7b6731c33c57f61767612f56517460ec2d1e2e5a3f0163e0eb3d8d8cb5df20/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20250209%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20250209T085436Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=d3f5be52cdda301b35d65e1954c87975c539aa7a62c2a50b043ae8b2d25170f1\": dial tcp XXX.XX.X.XX:XXX: i/o timeout, retrying in 32s"
(...)

Lots of these, for different "parts", with varying delays until the next retry (apparently an exponential backoff).

Tried pretty much everything, even a clean reinstall of Ollama, and still it can't pull any model.


Trying commands that @rick-github suggested:

nslookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com
Server:		80.58.61.250
Address:	80.58.61.250#53

Non-authoritative answer:
Name:	dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com
Address: 172.66.1.46
Name:	dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com
Address: 162.159.141.50

When running:

curl -sL https://registry.ollama.ai/v2/library/deepseek-r1/manifests/7b
curl -sL https://registry.ollama.ai/v2/library/deepseek-r1/blobs/sha256:f4d24e9138dd4603380add165d2b0d970bef471fac194b436ebd50e6147c6588

I can get the manifest:

{"schemaVersion":2,"mediaType":"application/vnd.docker.distribution.manifest.v2+json","config":{"mediaType":"application/vnd.docker.container.image.v1+json","digest":"sha256:40fb844194b25e429204e5163fb379ab462978a262b86aadd73d8944445c09fd","size":487},"layers":[{"mediaType":"application/vnd.ollama.image.model","digest":"sha256:96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49","size":4683073184},{"mediaType":"application/vnd.ollama.image.template","digest":"sha256:369ca498f347f710d068cbb38bf0b8692dd3fa30f30ca2ff755e211c94768150","size":387},{"mediaType":"application/vnd.ollama.image.license","digest":"sha256:6e4c38e1172f42fdbff13edf9a7a017679fb82b0fde415a3e8b3c31c6ed4a4e4","size":1065},{"mediaType":"application/vnd.ollama.image.params","digest":"sha256:f4d24e9138dd4603380add165d2b0d970bef471fac194b436ebd50e6147c6588","size":148}]}

But the second request times out.

curl --head -v https://registry.ollama.ai/v2/library/deepseek-r1/blobs/sha256:96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49

* Host registry.ollama.ai:443 was resolved.
* IPv6: (none)
* IPv4: 172.67.182.229, 104.21.75.227
*   Trying 172.67.182.229:443...
* Connected to registry.ollama.ai (172.67.182.229) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
*  CAfile: /etc/ssl/cert.pem
*  CApath: none
* (304) (IN), TLS handshake, Server hello (2):
* (304) (IN), TLS handshake, Unknown (8):
* (304) (IN), TLS handshake, Certificate (11):
* (304) (IN), TLS handshake, CERT verify (15):
* (304) (IN), TLS handshake, Finished (20):
* (304) (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
*  subject: CN=ollama.ai
*  start date: Feb  7 03:23:16 2025 GMT
*  expire date: May  8 04:21:34 2025 GMT
*  subjectAltName: host "registry.ollama.ai" matched cert's "*.ollama.ai"
*  issuer: C=US; O=Google Trust Services; CN=WE1
*  SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://registry.ollama.ai/v2/library/deepseek-r1/blobs/sha256:96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49
* [HTTP/2] [1] [:method: HEAD]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: registry.ollama.ai]
* [HTTP/2] [1] [:path: /v2/library/deepseek-r1/blobs/sha256:96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
> HEAD /v2/library/deepseek-r1/blobs/sha256:96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 HTTP/2
> Host: registry.ollama.ai
> User-Agent: curl/8.7.1
> Accept: */*
> 
* Request completely sent off
< HTTP/2 200 
HTTP/2 200 
< date: Sun, 09 Feb 2025 09:32:38 GMT
date: Sun, 09 Feb 2025 09:32:38 GMT
< content-length: 4683073184
content-length: 4683073184
< via: 1.1 google
via: 1.1 google
< alt-svc: h3=":443"; ma=86400
alt-svc: h3=":443"; ma=86400
< cf-cache-status: DYNAMIC
cf-cache-status: DYNAMIC
< report-to: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=bV3Itz9xCTlC3jy2N%2Btf2jIFWXBLY2CIyERXVg083y5oR0P8vAZAN45cgGmVnh%2FCPjLuztNHxoQAphEBEyZOqo%2FtjJYtBdatlHBCrEeK8Ajhg6NX%2Bird%2B%2BeUeF21hQvhFl5m0dk%3D"}],"group":"cf-nel","max_age":604800}
report-to: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=bV3Itz9xCTlC3jy2N%2Btf2jIFWXBLY2CIyERXVg083y5oR0P8vAZAN45cgGmVnh%2FCPjLuztNHxoQAphEBEyZOqo%2FtjJYtBdatlHBCrEeK8Ajhg6NX%2Bird%2B%2BeUeF21hQvhFl5m0dk%3D"}],"group":"cf-nel","max_age":604800}
< nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
< server: cloudflare
server: cloudflare
< cf-ray: 90f2da755e1eecab-MAD
cf-ray: 90f2da755e1eecab-MAD
< server-timing: cfL4;desc="?proto=TCP&rtt=38023&min_rtt=37557&rtt_var=10845&sent=7&recv=10&lost=0&retrans=0&sent_bytes=2882&recv_bytes=651&delivery_rate=76683&cwnd=226&unsent_bytes=0&cid=a3da2b0821c54c77&ts=230&x=0"
server-timing: cfL4;desc="?proto=TCP&rtt=38023&min_rtt=37557&rtt_var=10845&sent=7&recv=10&lost=0&retrans=0&sent_bytes=2882&recv_bytes=651&delivery_rate=76683&cwnd=226&unsent_bytes=0&cid=a3da2b0821c54c77&ts=230&x=0"
< 

* Connection #0 to host registry.ollama.ai left intact

When running a ping as suggested on #8533:

PING dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com (172.66.1.46): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
Request timeout for icmp_seq 3
Request timeout for icmp_seq 4
Request timeout for icmp_seq 5
Request timeout for icmp_seq 6
Request timeout for icmp_seq 7
Request timeout for icmp_seq 8
Request timeout for icmp_seq 9
(...)

So the issue seems to be the connection to the Cloudflare R2 storage that the model data is pulled from.

Ollama worked well for me a few weeks ago.

EDIT: It seems to work when using a mobile hotspot on a different ISP. Both curl commands complete successfully, and the pull actually starts receiving data. Will try a different WiFi network somewhere else and report back.

EDIT 2: On another network it works well. It seems like the issue is with the networks of one specific ISP (Movistar), which may have blocked Cloudflare?

EDIT 3: Most likely it was point number 2. Everything works as expected on those previously failing networks. Everything back to normal 👍

EDIT 4: As confirmed in https://www.youtube.com/watch?v=pj66vftqZZM, there was a weekend-long, large-scale ban on Cloudflare IP ranges by the Movistar/O2 ISPs, an attempt to combat football piracy.
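The manual checks in this comment can be consolidated into a quick triage script. A sketch only: the function name is mine, and the R2 hostname is the one from this thread's error messages, which may change over time:

```shell
# Consolidated connectivity triage, mirroring the manual checks above.
# R2_HOST is the blob host seen in this thread's errors; treat it as an
# example endpoint, not a stable one.
R2_HOST=dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com

check_ollama_connectivity() {
  # 1. DNS: can we resolve the R2 blob host at all?
  nslookup "$R2_HOST" || echo "DNS lookup failed (try 1.1.1.1 or 8.8.8.8)"
  # 2. Registry: manifests are served from registry.ollama.ai.
  curl -s -o /dev/null -w 'registry: HTTP %{http_code}\n' \
    --connect-timeout 10 \
    https://registry.ollama.ai/v2/library/deepseek-r1/manifests/7b
  # 3. R2: blob downloads need a working TLS connection to the R2 host.
  curl -s -o /dev/null -w 'r2: HTTP %{http_code}\n' \
    --connect-timeout 10 "https://$R2_HOST/" \
    || echo "cannot reach R2 (possible ISP-level block)"
}

# usage: check_ollama_connectivity
```

If the registry check succeeds but the R2 check times out, that matches the ISP-block scenario described in the EDITs above.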


@bishwayan-saha commented on GitHub (Mar 1, 2025):

I found out a solution, if you are on Windows:

  • Go to Settings
  • Network & internet
  • Currently connected wifi properties
  • Edit DNS Server assignment from Automatic(DHCP) to Manual
  • Toggle IPv4 ON
  • Use 1.1.1.1 on Preferred DNS and 1.0.0.1 on Alternate DNS
  • Similarly toggle IPv6 and use 2606:4700:4700::1111 as Preferred DNS and 2606:4700:4700::1001 as Alternate DNS

This worked in my case

I tried this, but it didn't work for me.

worked for me.


@rick-github commented on GitHub (Mar 4, 2025):

The download stalls should be mitigated as of 0.5.8 by #8831 and #9294 provides an overhaul of model pulling, so closing but feel free to add updates if you are still having issues.

The connection failures to r2.cloudflarestorage.com are being tracked in #8605.


@jagarojgrdev commented on GitHub (Mar 16, 2025):

dial tcp XXX.XX.X.XX:XXX: i/o timeout, retrying

Same from Mac running macOS Sequoia 15.2 (24C101), won't even start pulling the model. No proxies, no VPN, not even Firewall. Neither with default WiFi settings, nor setting a different DNS (Preferred DNS server: 8.8.8.8 (Google DNS) Alternate DNS server: 1.1.1.1 (Cloudflare DNS)).

Have tried on 2 different home networks from same ISP, none work.

It gets stuck here via CLI:

ollama pull phi4
pulling manifest 
pulling fd7b6731c33c...   0% ▕                                                           ▏    0 B/9.1 GB

Inspecting server logs using cat ~/.ollama/logs/server.log:

(...)
time=2025-02-09T09:58:07.969+01:00 level=INFO source=download.go:291 msg="fd7b6731c33c part 2 attempt 5 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/fd/fd7b6731c33c57f61767612f56517460ec2d1e2e5a3f0163e0eb3d8d8cb5df20/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20250209%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20250209T085436Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=d3f5be52cdda301b35d65e1954c87975c539aa7a62c2a50b043ae8b2d25170f1\": dial tcp XXX.XX.X.XX:XXX: i/o timeout, retrying in 32s"
(...)

Lots of these, for different "parts", and different seconds until next retry (seems an exponential backoff).

Tried pretty much everything, even a clean reinstall of Ollama, and still it can't pull any model.

Trying commands that @rick-github suggested:

nslookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com
Server:		80.58.61.250
Address:	80.58.61.250#53

Non-authoritative answer:
Name:	dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com
Address: 172.66.1.46
Name:	dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com
Address: 162.159.141.50

When running:

curl -sL https://registry.ollama.ai/v2/library/deepseek-r1/manifests/7b
curl -sL https://registry.ollama.ai/v2/library/deepseek-r1/blobs/sha256:f4d24e9138dd4603380add165d2b0d970bef471fac194b436ebd50e6147c6588

I can get the manifest:

{"schemaVersion":2,"mediaType":"application/vnd.docker.distribution.manifest.v2+json","config":{"mediaType":"application/vnd.docker.container.image.v1+json","digest":"sha256:40fb844194b25e429204e5163fb379ab462978a262b86aadd73d8944445c09fd","size":487},"layers":[{"mediaType":"application/vnd.ollama.image.model","digest":"sha256:96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49","size":4683073184},{"mediaType":"application/vnd.ollama.image.template","digest":"sha256:369ca498f347f710d068cbb38bf0b8692dd3fa30f30ca2ff755e211c94768150","size":387},{"mediaType":"application/vnd.ollama.image.license","digest":"sha256:6e4c38e1172f42fdbff13edf9a7a017679fb82b0fde415a3e8b3c31c6ed4a4e4","size":1065},{"mediaType":"application/vnd.ollama.image.params","digest":"sha256:f4d24e9138dd4603380add165d2b0d970bef471fac194b436ebd50e6147c6588","size":148}]}

But the second request times out.

curl --head -v https://registry.ollama.ai/v2/library/deepseek-r1/blobs/sha256:96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49

* Host registry.ollama.ai:443 was resolved.
* IPv6: (none)
* IPv4: 172.67.182.229, 104.21.75.227
*   Trying 172.67.182.229:443...
* Connected to registry.ollama.ai (172.67.182.229) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
*  CAfile: /etc/ssl/cert.pem
*  CApath: none
* (304) (IN), TLS handshake, Server hello (2):
* (304) (IN), TLS handshake, Unknown (8):
* (304) (IN), TLS handshake, Certificate (11):
* (304) (IN), TLS handshake, CERT verify (15):
* (304) (IN), TLS handshake, Finished (20):
* (304) (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
*  subject: CN=ollama.ai
*  start date: Feb  7 03:23:16 2025 GMT
*  expire date: May  8 04:21:34 2025 GMT
*  subjectAltName: host "registry.ollama.ai" matched cert's "*.ollama.ai"
*  issuer: C=US; O=Google Trust Services; CN=WE1
*  SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://registry.ollama.ai/v2/library/deepseek-r1/blobs/sha256:96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49
* [HTTP/2] [1] [:method: HEAD]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: registry.ollama.ai]
* [HTTP/2] [1] [:path: /v2/library/deepseek-r1/blobs/sha256:96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
> HEAD /v2/library/deepseek-r1/blobs/sha256:96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 HTTP/2
> Host: registry.ollama.ai
> User-Agent: curl/8.7.1
> Accept: */*
> 
* Request completely sent off
< HTTP/2 200 
HTTP/2 200 
< date: Sun, 09 Feb 2025 09:32:38 GMT
date: Sun, 09 Feb 2025 09:32:38 GMT
< content-length: 4683073184
content-length: 4683073184
< via: 1.1 google
via: 1.1 google
< alt-svc: h3=":443"; ma=86400
alt-svc: h3=":443"; ma=86400
< cf-cache-status: DYNAMIC
cf-cache-status: DYNAMIC
< report-to: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=bV3Itz9xCTlC3jy2N%2Btf2jIFWXBLY2CIyERXVg083y5oR0P8vAZAN45cgGmVnh%2FCPjLuztNHxoQAphEBEyZOqo%2FtjJYtBdatlHBCrEeK8Ajhg6NX%2Bird%2B%2BeUeF21hQvhFl5m0dk%3D"}],"group":"cf-nel","max_age":604800}
report-to: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=bV3Itz9xCTlC3jy2N%2Btf2jIFWXBLY2CIyERXVg083y5oR0P8vAZAN45cgGmVnh%2FCPjLuztNHxoQAphEBEyZOqo%2FtjJYtBdatlHBCrEeK8Ajhg6NX%2Bird%2B%2BeUeF21hQvhFl5m0dk%3D"}],"group":"cf-nel","max_age":604800}
< nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
< server: cloudflare
server: cloudflare
< cf-ray: 90f2da755e1eecab-MAD
cf-ray: 90f2da755e1eecab-MAD
< server-timing: cfL4;desc="?proto=TCP&rtt=38023&min_rtt=37557&rtt_var=10845&sent=7&recv=10&lost=0&retrans=0&sent_bytes=2882&recv_bytes=651&delivery_rate=76683&cwnd=226&unsent_bytes=0&cid=a3da2b0821c54c77&ts=230&x=0"
server-timing: cfL4;desc="?proto=TCP&rtt=38023&min_rtt=37557&rtt_var=10845&sent=7&recv=10&lost=0&retrans=0&sent_bytes=2882&recv_bytes=651&delivery_rate=76683&cwnd=226&unsent_bytes=0&cid=a3da2b0821c54c77&ts=230&x=0"
< 

* Connection #0 to host registry.ollama.ai left intact

When running a ping as suggested on #8533:

PING dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com (172.66.1.46): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
Request timeout for icmp_seq 3
Request timeout for icmp_seq 4
Request timeout for icmp_seq 5
Request timeout for icmp_seq 6
Request timeout for icmp_seq 7
Request timeout for icmp_seq 8
Request timeout for icmp_seq 9
(...)

So the issue seems to be the connection to Cloudflare R2, where the blob data being pulled is stored.

Ollama worked well for me a few weeks ago.

EDIT: It seems to work when using a mobile hotspot on a different ISP. The two curl commands end successfully, and pulling actually starts getting data. Will try on a different WiFi network somewhere else and report back.

EDIT 2: On another network it works well. It seems like the issue is with the networks of one specific ISP (Movistar), which may have blocked Cloudflare?

EDIT 3: Most likely it was the ISP block from EDIT 2. Everything works as expected on those previously failing networks. Everything back to normal 👍

EDIT 4: As confirmed in https://www.youtube.com/watch?v=pj66vftqZZM, there was a weekend-long, large-scale ban on Cloudflare IP ranges by the Movistar/O2 ISPs, as an attempt to combat football piracy.

I had the same problem: I ran "docker exec -it ollama ollama pull llama3.1:8b" and got no response.

After watching the referenced video, I installed WARP (Cloudflare VPN), and the error was resolved.

<!-- gh-comment-id:2727615707 -->
Author
Owner

@marcelb commented on GitHub (Mar 17, 2025):

Running ollama 0.5.11 and it is still happening all the time. I can complete the download by pressing Ctrl-C after every ~5 GB and restarting the pull until it is done, though.

Location: Germany
ISP: Vodafone 1gbit
OS: Windows 11
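
That Ctrl-C-and-restart workaround can be scripted. Since `ollama pull` keeps already-completed layers on disk, restarting loses relatively little progress. This is an illustrative sketch, not an official tool; the `retry_cmd` helper name and the retry cap are made up:

```shell
#!/bin/sh
# retry_cmd: rerun a command until it exits 0, mimicking the manual
# "Ctrl-C and pull again" workaround. Completed layers are kept on
# disk, so each restart resumes most of the progress.
retry_cmd() {
  max_attempts=20   # arbitrary cap, for illustration only
  attempt=1
  until "$@"; do
    attempt=$((attempt + 1))
    if [ "$attempt" -gt "$max_attempts" ]; then
      echo "giving up after $max_attempts attempts" >&2
      return 1
    fi
    echo "pull failed; restarting (attempt $attempt)..." >&2
    sleep 1
  done
}

# Usage (assumes ollama is installed and on PATH):
# retry_cmd ollama pull deepseek-r1:7b
```

Note this only papers over the stall; if the resume logic drops partial blobs (as discussed below in this thread), a loop like this can still re-download the same data repeatedly.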

<!-- gh-comment-id:2731124171 -->
Author
Owner

@rick-github commented on GitHub (Mar 17, 2025):

Server logs will aid in debugging.

<!-- gh-comment-id:2731130473 -->
Author
Owner

@keithkmyers commented on GitHub (Mar 18, 2025):

I suspect it is DDoS protection run amok. I throttled my container down to 200 Mbit/s and it's staying connected. I was pulling at the full 1 Gbit of my connection, and that triggered the connection drops just as folks describe in this thread (and elsewhere).

I can download it by ctrl-c after every 5gb and restart the pull until it is done though.

Nice tip! It seems the resume feature built into the pull tool is faulty: it loses most of its progress upon a stall, resulting in an infinite loop during this outage. As a result, it just compounds the server load issue they're facing. Folks are pulling the same data blocks over and over, downloading well more than 100% of the total file size. It probably looks like a DDoS to the server admins, but it's a bug in the ollama pull tool.
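
For reference, the throttling described above can be reproduced on a Linux host with `tc` (a sketch, not an endorsed fix; "eth0" is a placeholder for your actual interface, the rate matches the 200 Mbit figure above, and the command needs root):

```shell
# Cap egress on the host interface to ~200 Mbit/s with a token bucket
# filter. Substitute your real interface name for eth0.
tc qdisc add dev eth0 root tbf rate 200mbit burst 32kb latency 400ms

# Remove the cap afterwards:
# tc qdisc del dev eth0 root
```

This throttles the whole interface, not just the Ollama container, so remove it once the pull completes.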

<!-- gh-comment-id:2731586440 -->
Author
Owner

@keithkmyers commented on GitHub (Mar 18, 2025):

A few more findings that might be helpful for the admins:

While 200Mbit works to stay connected, I still notice it delivers data in bursts. Here's what I think is happening:

  • Ollama pulls are being provided a quota of data from the server. X MB in Y time.
  • At 200Mbit, you need to wait only a few seconds between bursts for your quota to reset.
  • But at 1Gbit, you exhaust your quota quickly and need to wait 15-20 seconds for your next block of allowance.
  • The 15-20 second wait triggers stall logic in the ollama pull command. It thinks it is disconnected because it's waiting so long.
  • Ollama pull's resume logic then drops the partially downloaded blob upon resume. So the user sees the % bar go BACKWARD by a considerable amount.
  • This results in an infinite loop when they hit a really big blob. Users with fast connections stall out and keep downloading the same blocks over and over. This hammers the **** out of your download servers. You see a DDoS and turtle harder, compounding the issue.

This is probably why you think you're being DDOS'd every time a big model drops and you get heavy server usage.

... Maybe. Maybe not. Anyways, best of luck chaps!
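
The quota theory above lends itself to a back-of-envelope check: with a fixed allowance per window, a faster link drains it sooner and then idles longer. The quota size below is invented for illustration, not a measured server limit:

```shell
# Hypothetical quota of 500 MB per window; time to exhaust it is
# (MB * 8 bits per byte) / (Mbit per second).
quota_mb=500
for rate_mbit in 200 1000; do
  secs=$(( quota_mb * 8 / rate_mbit ))
  echo "${rate_mbit} Mbit/s: quota exhausted in ~${secs}s"
done
```

At 200 Mbit the client transfers for ~20 s per window, so the idle gaps stay short; at 1 Gbit it transfers for only ~4 s and then waits out the rest of the window, which is the kind of long pause that a stall detector can misread as a dead connection.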

<!-- gh-comment-id:2731620430 -->
Author
Owner

@suredanish commented on GitHub (Aug 26, 2025):

Mine gets stuck here:

$ ollama list
NAME                ID              SIZE      MODIFIED       
gemma2:2b           8ccf136fdd52    1.6 GB    45 minutes ago    
deepseek-r1:1.5b    e0979632db5a    1.1 GB    2 hours ago       
deepseek-r1:8b      6995872bfe4c    5.2 GB    3 hours ago
       
$ ollama run deepseek-r1:1.5b
⠏
<!-- gh-comment-id:3223378096 -->
Author
Owner

@cknotz commented on GitHub (Aug 27, 2025):

I'm having the same issue as @suredanish. This started after a recent OS & Ollama update; before, everything worked fine. I can still pull models without getting an error, but run gets stuck (I tried different Llama versions, DeepSeek, and gpt-oss versions; all of them stall).

<!-- gh-comment-id:3227681482 -->
Author
Owner

@rick-github commented on GitHub (Aug 27, 2025):

If you can pull a model then it's not a download problem. Open a new issue and include logs.

<!-- gh-comment-id:3227694458 -->
Author
Owner

@cknotz commented on GitHub (Aug 27, 2025):

If you can pull a model then it's not a download problem. Open a new issue and include logs.

Thanks, @rick-github. I actually did post a separate issue (totally fine if responses take a bit!). Perhaps a noob question, but what exact logs would you need to identify the issue? cat ~/.ollama/logs/server.log?

<!-- gh-comment-id:3227753750 -->
Author
Owner

@rick-github commented on GitHub (Aug 27, 2025):

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues

<!-- gh-comment-id:3227766055 -->
Author
Owner

@dkayser commented on GitHub (Sep 7, 2025):

I solved this by adding OLLAMA_DEBUG=1

docker exec ollama-server sh -c "OLLAMA_DEBUG=1 ollama pull mixtral" - pulling 26GB at 119MB/s and no crash

Anything without OLLAMA_DEBUG=1 stalls. It even crashed my dedicated server, which is extremely weird. Both with Alma 9 and Fedora 24. Straight up froze it. No idea why.

I first thought the NIC was toast, but with this setting everything works.

<!-- gh-comment-id:3263986767 -->
Author
Owner

@gitdexgit commented on GitHub (Sep 7, 2025):

I solved this by adding OLLAMA_DEBUG=1

docker exec ollama-server sh -c "OLLAMA_DEBUG=1 ollama pull mixtral" - pulling 26GB at 119MB/s and no crash

anything without OLLAMA_DEBUG=1 stalls. It even crashed my dedicated server, which is extremely weird. Both with Alma 9 and Fedora 24. Straight up froze it. No idea why.

I first thought the NIC is toast, but with this settings everything works.

Wow, nice.

Btw, are you on Linux? I see you are using Docker to run ollama-server; that's really nice. I don't have a strong server, but I would love to run tiny models with Ollama. How can I do that?

<!-- gh-comment-id:3264102571 -->
Author
Owner

@rick-github commented on GitHub (Sep 7, 2025):

docker exec ollama-server sh -c "OLLAMA_DEBUG=1 ollama pull mixtral" - pulling 26GB at 119MB/s and no crash

It's good that it's working for you, but unless ollama-server is a non-standard ollama container, OLLAMA_DEBUG=1 has literally no effect on the pull command.

<!-- gh-comment-id:3264122326 -->
Author
Owner

@dkayser commented on GitHub (Sep 8, 2025):

It's good that it's working for you, but unless ollama-server is a non-standard ollama container, OLLAMA_DEBUG=1 has literally no effect on the pull command.

Good to know, thanks. It did solve the problem for me, tested it with many models now. I have no idea what other side effects may have contributed.

btw are you on linux? I see you are using docker to execute ollama-server? that's really nice. I don't have a strong server but I would love to run tiny models with olama how can I do that ?

I have an old dedicated server at a local hosting company with 256 GB ECC and 2×10 Xeon cores. It is not fast at all, but it allows a lot of flexibility to run a couple of smaller models in parallel for vision, text extraction, and general tasks. The lack of VRAM is painfully obvious.

<!-- gh-comment-id:3265047095 -->
Reference: github-starred/ollama#5590