[GH-ISSUE #4520] llama3:70B pull error #28594

Closed
opened 2026-04-22 06:59:52 -05:00 by GiteaMirror · 25 comments

Originally created by @DimIsaev on GitHub (May 19, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4520

Originally assigned to: @bmizerany on GitHub.

What is the issue?

![image](https://github.com/ollama/ollama/assets/11172642/999167fc-a6d3-4833-999c-ece170007074)

Error: max retries exceeded: unexpected EOF

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.33

GiteaMirror added the networking and bug labels 2026-04-22 06:59:53 -05:00

@kha84 commented on GitHub (May 19, 2024):

Is it just a one-off thing? Have you tried restarting it?
To me it looks like you might have some network issues, or the ollama "model registry" might.

But I also agree that ollama should handle such issues better and try to resume the download a number of times before giving up, especially when we're talking about downloading massive files. It is very disappointing to get this error at 99% with no option to resume, so you have to start over again from 0% :)


@sammcj commented on GitHub (May 19, 2024):

I think the model registry might be a bit hosed; I can't pull any models, as I am getting the same error.

```
ollama pull llama3:8b-text-q6_K
pulling manifest
pulling ce446d4caf83...  99% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████  ▏ 6.5 GB/6.6 GB
Error: max retries exceeded: EOF
```

Might be related to https://github.com/ollama/ollama/issues/1736#issuecomment-2119237940


@kha84 commented on GitHub (May 19, 2024):

By any chance, are you behind a proxy or VPN?


@sammcj commented on GitHub (May 19, 2024):

> By any chance, are you behind a proxy or VPN?

Nope.
Tested on 3 machines and on two different internet connections (see https://github.com/ollama/ollama/issues/1736#issuecomment-2119237940)


@sammcj commented on GitHub (May 19, 2024):

Ooo see also https://github.com/ollama/ollama/issues/1036 and https://github.com/ollama/ollama/issues/941


@DimIsaev commented on GitHub (May 20, 2024):

> Is it just a one-off thing? Have you tried restarting it? To me it looks like you might have some network issues, or the ollama "model registry" might.

Yes, 3 attempts.


@coder543 commented on GitHub (May 23, 2024):

I'm hitting this issue repeatedly with several llama3 models. The registry definitely seems like it needs a little help.

```bash
$ ollama pull llama3:8b-instruct-q8_0
pulling manifest
pulling 11a9680b0168... 100% ▕███████████████████████████████████████████████████████ ▏ 8.5 GB/8.5 GB
Error: max retries exceeded: EOF
```

@sammcj commented on GitHub (May 23, 2024):

Yeah, it seems the registry has been completely broken for almost a week now. I've pretty much given up on it and now build all my models myself, which is OK but kind of negates one of the primary benefits of ollama.


@sammcj commented on GitHub (May 23, 2024):

@jmorganca do you know what’s going on here? Is there a discussion thread we should be following / contributing to?


@sammcj commented on GitHub (May 23, 2024):

Actually, this looks to be in an improved state this morning; now it just goes back to pulling very slowly (from 80 MB/s down to 70 KB/s) at 99% again, like it used to. Perhaps a fix was made?


@coder543 commented on GitHub (May 23, 2024):

I just tested again, and I'm still seeing the issue on `llama3:8b-instruct-q8_0`.


@DimIsaev commented on GitHub (May 24, 2024):

Yes, the problem remains.


@ahoepf commented on GitHub (May 24, 2024):

Yes, I have the same problem, but with all models.


@FairyTail2000 commented on GitHub (May 24, 2024):

I'm currently running ollama 0.1.39-rc1 from Germany. My Cloudflare target domain is dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com, where the first part seems to be your project ID. The domain gets resolved to 2606:4700::6812:85a. Forcing an IPv4 connection does not alleviate the issue, so an IPv6 network-level issue should not be the source. I gathered this information using OpenSnitch.

What I just noticed: although the model download fails due to an unexpected EOF, the ollama server responds with HTTP 200, which seems not entirely correct, since the operation was a failure.

I'm trying to pull `llama3:instruct`.

In Wireshark I noticed the following:

![image](https://github.com/ollama/ollama/assets/22645621/9cb08b6d-bce6-4b2e-bc11-e7bea6d02341)

This shows the connection being finalized (dropped), initiated from my local ollama instance.

![image](https://github.com/ollama/ollama/assets/22645621/25d83695-0779-42df-80c8-d2e18d925428)

Wireshark then shows that Cloudflare responds with an ACK (to the packet before the FIN), and in the next packet it resets the connection while acknowledging the FIN packet. This behaviour appears more often further down, as more and more connections are reset.

I don't actually know how the HTTP download is implemented, but could it be that ollama receives a byte count for the part, allocates a buffer, and terminates the connection when the buffer is full, even if the error message indicates a stalled connection?

EDIT: I'm no Go magician, but one thing stuck out to me:
https://github.com/ollama/ollama/blob/afd2b058b4ee36230ab2a06927bdc0ff41b1e7ae/server/download.go#L222C4-L222C26

If the defer line runs prematurely, for whatever reason, that would result in the observed sequence of TCP packets. However, this is just an uneducated guess.
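
As a rough, hedged illustration of the failure class hypothesized above (not ollama's actual code), the following minimal Go sketch shows how closing a response body while a copy is still in progress tears the TCP connection down from the local side and surfaces as an error in the reader, even though the server is still healthy; the URL is a placeholder:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Hypothetical URL for a file large enough that the transfer outlasts
	// the premature close simulated below.
	resp, err := http.Get("https://example.com/large-file")
	if err != nil {
		panic(err)
	}

	// Simulate a Close that fires too early -- e.g. a defer that runs when a
	// helper function returns instead of when the whole transfer finishes.
	go func() {
		time.Sleep(100 * time.Millisecond)
		resp.Body.Close()
	}()

	// If the body is closed mid-transfer, the local side sends FIN and the
	// copy returns a non-nil error even though the remote end kept serving.
	n, err := io.Copy(io.Discard, resp.Body)
	fmt.Printf("copied %d bytes, err = %v\n", n, err)
}
```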


@coder543 commented on GitHub (May 24, 2024):

I tried again this morning, and it still could not download. So, I tried completely removing `/usr/share/ollama` and reinstalling ollama. Now, it was able to successfully download and install the model I mentioned before, which makes me think there could be some bug in ollama itself that is corrupting the local cache.

EDIT: well... that optimism is behind me now. I'm trying to redownload some of the other models I liked before I wiped the cache, and now they won't download completely either.


@coder543 commented on GitHub (May 24, 2024):

Well... I picked an older release at random (0.1.35), and now it is able to successfully download models that 0.1.39 was unable to download. I have not tried to pinpoint the exact release that introduced this serious bug, but I'd say it was probably in the past week.

Not being able to download models reliably will make ollama extremely painful to use and remove most of its value. If this isn't a high priority issue for the project, then I don't know what would be.

For the moment, I'm working around the issue by downloading an old release of ollama and using that to pull models, which isn't great.


@coder543 commented on GitHub (May 24, 2024):

Tagging @mxyng, since I see some changes that affect the model download code paths in the past week or so, and something in there might not be right.


@noxer commented on GitHub (May 24, 2024):

Maybe a piece of the puzzle (and a quick fix for anyone stuck on this):

- Check the `ollama serve` log for the numbers of the parts that are stuck
- Open the corresponding `sha256-{huge hash}-partial-{nn}` files (nn being the part number) in the `models/blobs` folder as text files
- Replace the number behind `Completed:` with a 0
- Save the file
- Retry the pull

This forces ollama to download the failed parts from the start and hopefully completes them this time.
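
For anyone who would rather script that reset than hand-edit the files, here is a rough, hedged Go sketch. It assumes, based only on the description above, that each `*-partial-{nn}` file is a small JSON document with a `Completed` field; verify the actual format in your own `models/blobs` directory before using it:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Path to one partial file, e.g. .../models/blobs/sha256-<hash>-partial-07
	path := os.Args[1]

	raw, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}

	// Assumed structure: a flat JSON object containing a "Completed" field.
	var part map[string]any
	if err := json.Unmarshal(raw, &part); err != nil {
		panic(err)
	}

	part["Completed"] = 0 // force this part to restart from byte 0

	out, err := json.Marshal(part)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("reset", path)
}
```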


@noxer commented on GitHub (May 24, 2024):

The bug is in this line:

```go
n, err := io.CopyN(w, io.TeeReader(resp.Body, part), part.Size)
```

It always tries to re-download the full chunk size, even if part of the chunk has already been downloaded. Correct would be:

```go
n, err := io.CopyN(w, io.TeeReader(resp.Body, part), part.Size-part.Completed)
```
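
To make that concrete, here is a minimal, hedged resume sketch -- not ollama's actual download code -- assuming a part record with Offset, Size, and Completed byte counts. A resumed request should ask only for the byte range that is still missing and copy only the remaining length, which is what the corrected CopyN call above expresses:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
)

// part mirrors the fields referenced above; the real struct in ollama may differ.
type part struct {
	Offset, Size, Completed int64
}

// resumePart re-requests only the bytes of this part that are still missing.
func resumePart(ctx context.Context, url string, w io.Writer, p *part) error {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}

	// Start after the bytes already completed; end at the last byte of the part.
	start := p.Offset + p.Completed
	end := p.Offset + p.Size - 1
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", start, end))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// Copy only the remaining length; copying p.Size again would try to read
	// more bytes than the ranged response contains and end in an unexpected EOF.
	n, err := io.CopyN(w, resp.Body, p.Size-p.Completed)
	p.Completed += n
	return err
}

func main() {
	// Hypothetical usage against a placeholder URL.
	p := &part{Offset: 0, Size: 1 << 20, Completed: 512 << 10}
	err := resumePart(context.Background(), "https://example.com/blob", io.Discard, p)
	fmt.Println("completed:", p.Completed, "err:", err)
}
```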

@DimIsaev commented on GitHub (May 30, 2024):

```
pulling manifest
pulling 0bd51f8f0c97...  29% ▕█████████████████████                                                     ▏  11 GB/ 39 GB  2.7 MB/s   2
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/s/0b/0bd51f8f0c975ce910ed067dcb962a9af05b77bafcdc595ef02178387f10e51d/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77a7c3af820529859349a%!F(MISSING)20240530%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240530T155259Z&X-Amz-Expire0&X-Amz-SignedHeaders=host&X-Amz-Signature=dd62ec94899015534ce1d7e7ecf720685ff06b5a7687e53b3d219ddb657e56cc": net/http: TLS handshakeout
```

Will the problem be solved?


@DimIsaev commented on GitHub (Jun 8, 2024):

up


@rampageservices commented on GitHub (Aug 2, 2024):

This yet-to-be-merged PR may help some of those here facing EOF issues:
#4625


@besonders-santhosh commented on GitHub (Jan 4, 2025):

In my case, it's the firewall that's blocking the pulling of some images, which leads to "Error: max retries exceeded: EOF". And it happens after the manifest is pulled and after some 10-20% of the main files.

How I figured it out:
The latest version of the ollama Windows desktop app hides this firewall error everywhere (it is not even found in the system/server logs).
I tried installing an older version, 0.5 or something.
It showed the error: "your Zscaler is not allowing me to download from this site blah blah blah".


@yifan0011 commented on GitHub (Jan 23, 2025):

> By any chance, are you behind a proxy or VPN?

Hi, I am behind the company proxy and having the exact same problem:

```
U:\>ollama pull mistral
pulling manifest
pulling ff82381e2bea... 0% ▕ ▏ 0 B/4.1 GB
Error: max retries exceeded: EOF
```

Have you found a solution?


@OnlyRen commented on GitHub (Jan 28, 2025):

> Have you found a solution?

I just worked around the issue by running the same command repeatedly. It will keep resuming, so your progress is safe. Just hit `CTRL+C` and re-run the pull command once it stalls or hits you with an EOF.
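
For anyone relying on that workaround, here is a minimal Go sketch (a shell loop works just as well) that keeps re-running the pull until it exits cleanly, leaning on the fact that each run resumes the parts that have already finished; the model name is taken from the command line:

```go
package main

import (
	"os"
	"os/exec"
	"time"
)

func main() {
	model := os.Args[1] // e.g. "llama3:70b"

	for {
		cmd := exec.Command("ollama", "pull", model)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err == nil {
			return // pull finished successfully
		}
		// Brief pause before retrying; each retry resumes the completed parts.
		time.Sleep(5 * time.Second)
	}
}
```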


Reference: github-starred/ollama#28594