[GH-ISSUE #1036] unexpected EOF when running ollama pull #47016

Open
opened 2026-04-28 02:41:36 -05:00 by GiteaMirror · 14 comments

Originally created by @BruceMacD on GitHub (Nov 7, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1036

Occasionally while pulling a model, the download may get stuck waiting for a part that experienced an error.

```bash
$ ollama run llama2:70b
pulling manifest
pulling 153664158022...  99% |█████████████████████████████████████████████████████████████████████████████████ | (38/39 GB, 619 kB/s) [24m57s:9m15s]
```

Server log:

```bash
2023/11/07 11:36:24 download.go:160: 153664158022 part 75 attempt 0 failed: unexpected EOF, retrying
```

The workaround is to stop and resume the download.
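
In practice that means interrupting the stuck client and re-running the pull; parts already on disk are kept, so only the stalled part is re-fetched:

```bash
# Interrupt the stuck pull with Ctrl+C, then re-run it; the download
# resumes from the parts that have already completed.
ollama pull llama2:70b
```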

GiteaMirror added the networking, bug labels 2026-04-28 02:41:38 -05:00

@killthekitten commented on GitHub (Nov 8, 2023):

I've accumulated a handful of different connection errors, and EOF sometimes appeared among them. I have two terminals running the pull in parallel, one on my local machine and another on GCP, and the failures seem correlated.

One moment both terminals go unresponsive or throw an error, while after some time and several retries the download speed reaches hundreds of MB/s. Could it somehow be related to https://github.com/jmorganca/ollama/issues/850?

```
images.go:1172: couldn't start upload: Get "https://registry.ollama.ai/v2/library/llama2/manifests/13b-chat-q5_1": context canceled
```

```
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama2/manifests/13b-chat-q5_1": dial tcp 34.120.132.20:443: connect: connection timed out
```

```
llm-deployments-ollama-1    | 2023/11/08 20:54:17 download.go:122: downloading 6ae280299950 in 64 64 MB part(s)
llm-deployments-ollama-1    | 2023/11/08 20:55:42 download.go:160: 6ae280299950 part 58 attempt 0 failed: unexpected EOF, retrying
llm-deployments-ollama-1    | 2023/11/08 20:55:50 images.go:1172: couldn't start upload: Get "https://registry.ollama.ai/v2/library/mistral/blobs/sha256:6ae28029995007a3ee8d0b8556d50f3b59b831074cf19c84de87acf51fb54054": dial tcp: lookup registry.ollama.ai on 127.0.0.11:53: server misbehaving
llm-deployments-ollama-1    | 2023/11/08 20:55:50 download.go:160: 6ae280299950 part 58 attempt 1 failed: Get "https://registry.ollama.ai/v2/library/mistral/blobs/sha256:6ae28029995007a3ee8d0b8556d50f3b59b831074cf19c84de87acf51fb54054": dial tcp: lookup registry.ollama.ai on 127.0.0.11:53: server misbehaving, retrying
llm-deployments-ollama-1    | 2023/11/08 20:55:58 images.go:1172: couldn't start upload: Get "https://registry.ollama.ai/v2/library/mistral/blobs/sha256:6ae28029995007a3ee8d0b8556d50f3b59b831074cf19c84de87acf51fb54054": dial tcp: lookup registry.ollama.ai on 127.0.0.11:53: server misbehaving
llm-deployments-ollama-1    | 2023/11/08 20:55:58 download.go:160: 6ae280299950 part 58 attempt 2 failed: Get "https://registry.ollama.ai/v2/library/mistral/blobs/sha256:6ae28029995007a3ee8d0b8556d50f3b59b831074cf19c84de87acf51fb54054": dial tcp: lookup registry.ollama.ai on 127.0.0.11:53: server misbehaving, retrying
```

@cwatt commented on GitHub (Dec 6, 2023):

I also seem to be experiencing this or a related problem while trying to pull models. Unfortunately, I haven't been able to successfully pull any models since installing Ollama (0.1.13). Here's an example:

```
gpajd@WUST056705 ~ % ollama pull codellama
pulling manifest
pulling 3a43f93b78ec... 100% ▕████████████████▏ 3.8 GB
Error: max retries exceeded: unexpected EOF
```

@jmorganca commented on GitHub (Dec 24, 2023):

@cwatt so sorry you hit this error – wondering if this is still something you're hitting on every pull? Thanks for sharing, will make sure to take a look at this.


@cwatt commented on GitHub (Jan 2, 2024):

@jmorganca On subsequent pull attempts I actually haven't been hitting any more EOF errors, but rather digest mismatch errors like what is described in this issue: https://github.com/jmorganca/ollama/issues/941.

```
ollama pull codellama
pulling manifest
pulling 3a43f93b78ec... 100% ▕████████████████▏ 3.8 GB
pulling 8c17c2ebb0ea... 100% ▕████████████████▏ 7.0 KB
pulling 590d74a5569b... 100% ▕████████████████▏ 4.8 KB
pulling 2e0493f67d0c... 100% ▕████████████████▏   59 B
pulling 7f6a57943a88... 100% ▕████████████████▏  120 B
pulling 316526ac7323... 100% ▕████████████████▏  529 B
verifying sha256 digest
Error: digest mismatch, file must be downloaded again: want sha256:3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac, got sha256:95e0eb0f860fe71bb37e83832c1bc1300ae827244bca7b8a89651a2c87d49770
```

I'm not sure what caused the change in behavior. I hope this helps!
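
If the mismatch persists across pulls, one option is to remove the offending blob from the local model store and pull again. A minimal sketch, assuming the default store under `~/.ollama/models` (the exact layout and blob naming vary by platform and version):

```bash
# Hypothetical cleanup: stop the ollama server first, then delete the
# blob whose digest matches the "want" value from the error message.
WANT=3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac
find ~/.ollama/models/blobs -name "*${WANT}*" -delete
ollama pull codellama
```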


@sammcj commented on GitHub (May 19, 2024):

FYI the model registry seems to have been broken for 3-4 days.

(https://github.com/ollama/ollama/issues/1736#issuecomment-2119237940)


@sammcj commented on GitHub (May 24, 2024):

This PR fixes the issue for me: https://github.com/ollama/ollama/pull/4619


@rampageservices commented on GitHub (Aug 2, 2024):

There is a follow-on PR to #4619, not yet merged, that seems to resolve the remaining issues people are facing in this thread.

https://github.com/ollama/ollama/pull/4625


@conglei1981 commented on GitHub (Sep 20, 2024):

```
docker ollama:0.3.11 ollama pull qwen2:7b
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/qwen2/manifests/7b": dial tcp: lookup registry.ollama.ai on 127.0.0.11:53: read udp 127.0.0.1:57926->127.0.0.11:53: i/o timeout
```


@rampageservices commented on GitHub (Oct 2, 2024):

> docker ollama:0.3.11 ollama pull qwen2:7b
> pulling manifest
> Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/qwen2/manifests/7b": dial tcp: lookup registry.ollama.ai on 127.0.0.11:53: read udp 127.0.0.1:57926->127.0.0.11:53: i/o timeout

@conglei1981 That appears to be a local DNS issue. Please check your device's local DNS resolver.
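
127.0.0.11 is Docker's embedded DNS forwarder, so the timeout usually means the container cannot reach its upstream resolver. One possible fix, if your setup allows it, is to give the container explicit DNS servers (these are standard `docker run` flags; adjust resolvers and volume/port options to your deployment):

```bash
# Point the container at public resolvers instead of the host's
# (possibly broken) upstream DNS configuration.
docker run -d --name ollama \
  --dns 8.8.8.8 --dns 1.1.1.1 \
  -v ollama:/root/.ollama -p 11434:11434 \
  ollama/ollama
```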


@helperShang commented on GitHub (Oct 4, 2024):

```
Error: max retries exceeded: unexpected EOF
```


@maximiliankraft commented on GitHub (Dec 31, 2024):

I ran into this issue when downloading a model overnight, presumably because my access point restarts itself during the night, so there is no internet connection for a couple of minutes. Can I manually increase the number of retries? If I set it to retry thousands of times, it should eventually succeed.

Nevermind, in bash I can just do:

```
until ollama pull <huge-model>; do echo "Trying again..."; sleep 2; done
```
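
A slightly more defensive variant of the same idea caps the number of attempts and backs off between them (a sketch; `<huge-model>` is a placeholder as above):

```bash
# Retry up to 50 times with linear backoff instead of looping forever.
for i in $(seq 1 50); do
  ollama pull <huge-model> && break
  echo "Attempt $i failed, retrying in $((i * 10))s..."
  sleep $((i * 10))
done
```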

@richardARPANET commented on GitHub (Jan 28, 2025):

> I ran into this issue when downloading a model overnight, presumably because my access point restarts itself during the night, so there is no internet connection for a couple of minutes. Can I manually increase the number of retries? If I set it to retry thousands of times, it should eventually succeed.
>
> Nevermind, in bash I can just do:
>
> ```
> until ollama pull <huge-model>; do echo "Trying again..."; sleep 2; done
> ```

This actually works, thanks


@negal commented on GitHub (Apr 25, 2026):

For anyone still hitting this on SSL-inspected networks (corporate proxies, GFW, etc.): the root cause is often Go's TLS ClientHello fingerprint being dropped by middleboxes while plain curl/aria2c get through.

I put together a small Python tool as a workaround: it fetches the manifest with curl, downloads blobs resumably with aria2c, then constructs the local manifest so `ollama list` recognizes the model:

https://github.com/negal/ollama-pull-fix

Drop-in alternative to `ollama pull` for `library/` namespace models. Pure stdlib (no pip install), resumes on rerun.
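
For reference, the manual flow such a tool automates looks roughly like this. A sketch under assumptions: the registry endpoints match those visible in the logs above, but the `Accept` header and on-disk blob naming are guesses that may need adjusting for your Ollama version:

```bash
MODEL=mistral; TAG=latest
BASE="https://registry.ollama.ai/v2/library/$MODEL"
# Fetch the manifest with curl (middlebox-friendly TLS fingerprint).
curl -fsSL -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "$BASE/manifests/$TAG" -o manifest.json
# Download every referenced blob resumably with aria2c
# (-c continues a partial file; -x8 opens multiple connections).
for d in $(grep -o 'sha256:[0-9a-f]\{64\}' manifest.json | sort -u); do
  aria2c -c -x8 -o "${d/:/-}" "$BASE/blobs/$d"
done
```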


@rampageservices commented on GitHub (Apr 25, 2026):

Nice find!
