[GH-ISSUE #3162] Possibility to remove max retries exceeded when downloading models from a slow connection #27705

Open
opened 2026-04-22 05:14:48 -05:00 by GiteaMirror · 16 comments
Owner

Originally created by @DaRetriever on GitHub (Mar 15, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3162

What are you trying to do?

I'm trying to download Mixtral (26 GB), but every 120 MB an error pops up stating:
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/e9/e9e56e8bb5f0fcd4860675e6837a8f6a94e659f5fa7dce6a1076279336320f2b/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20240315%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240315T063326Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=d14a2deeadcc4f625c71535f456b49a6f8915521ddc7352f2f81aa0f4635bb47": net/http: TLS handshake timeout

How should we solve this?

Would it be possible to add an option letting the user disable the retry limit or set a higher number of retries?

What is the impact of not solving this?

I understand most people have fast internet, but with my max 500 Kb/s (hopefully cable internet is on the way) all models are a pain. Mistral I can babysit through the download since it takes a couple of hours, but I can't babysit and relaunch Mixtral over 2 days...

Anything else?

No response

GiteaMirror added the feature request label 2026-04-22 05:14:48 -05:00

@g-i-o-r-g-i-o commented on GitHub (Apr 29, 2024):

Same, I had to run the command 10 times because of possibly unstable Wi-Fi.

ollama run llama3


@TipuatGit commented on GitHub (May 22, 2024):

Damn, then doesn't that mean if I'm downloading llama3 and it fails at every 6% (which it is), then I have to run the command ollama run llama3 16 times!?


@b1tfl0w commented on GitHub (May 22, 2024):

Damn, then doesn't that mean if I'm downloading llama3 and it fails at every 6% (which it is), then I have to run the command ollama run llama3 16 times!?

Hello, if you are on Linux, as a workaround you can simply put the command in a while loop so at least you don't have to run it again manually:
while : ; do ollama run llama3 ; done


@TipuatGit commented on GitHub (May 22, 2024):

Hello, if you are on Linux, as a workaround you can simply put the command in a while loop so at least you don't have to run it again manually: while : ; do ollama run llama3 ; done

Good suggestion, but I'm on Windows and I don't know how CMD scripts work.


@zioalex commented on GitHub (May 29, 2024):

I think that a better solution is needed here. If you are behind a proxy that inspects data in transit, the process can be very slow, and none of the above proposals will work.


@Ishant-Subhash-Dahiwale commented on GitHub (May 29, 2024):

Open Notepad or any text editor and enter:

:loop
ollama run llama3
goto loop

Save the file with a .bat extension.
This batch script will continuously execute the command ollama run llama3 in an infinite loop.
You can run this batch script by double-clicking on it, or you can run it from the command prompt.


@Revnoplex commented on GitHub (Dec 10, 2024):

Damn, then doesn't that mean if I'm downloading llama3 and it fails at every 6% (which it is), then I have to run the command ollama run llama3 16 times!?

Hello, if you are on Linux, as a workaround you can simply put the command in a while loop so at least you don't have to run it again manually: while : ; do ollama run llama3 ; done

A better way to do this would be to check the exit code in a bash script so it exits the loop once it has downloaded successfully.

OLLAMA_EXIT_CODE=
while [[ $OLLAMA_EXIT_CODE != 0 ]]
do
    ollama pull "$@"
    OLLAMA_EXIT_CODE=$?
done
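Because the loop above uses "$@", it is meant to live in a script that forwards its arguments to ollama pull. A minimal generalized sketch of the same idea, where the retry function name and the RETRY_DELAY variable are illustrative choices, not anything Ollama provides:

```shell
#!/usr/bin/env bash
# retry: rerun a command until it exits 0, pausing between attempts.
# The function name and RETRY_DELAY variable are illustrative, not Ollama features.
retry() {
    until "$@"; do
        echo "'$*' failed; retrying in ${RETRY_DELAY:-5}s..." >&2
        sleep "${RETRY_DELAY:-5}"
    done
}

# Example: retry ollama pull mixtral
```

Like the exit-code loop above, this stops as soon as the pull succeeds, instead of restarting forever the way the bare while : loop does.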


@peg-leg commented on GitHub (Dec 23, 2024):

Some sort of resume support would be welcome here. I've been at this for 6 hours with no end in sight because I'm on a slow connection that keeps being reset. Really silly. Why should a data stream be started from scratch every time the connection is reset?


@SinnieOnFire commented on GitHub (Jan 25, 2025):

Having the same issue on Windows:

time=2025-01-25T15:48:44.917+02:00 level=INFO source=download.go:291 msg="4cd576d9aa16 part 26 attempt 0 failed: net/http: TLS handshake timeout, retrying in 1s"
time=2025-01-25T15:48:44.918+02:00 level=INFO source=download.go:370 msg="4cd576d9aa16 part 30 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."

After downloading for 3-5 minutes, it stops showing download speed and stalls or even rolls back the progress. After interrupting download and restarting it, it resumes normally where it left off until the issue repeats. Not sure but possibly Cloudflare is not happy with my ISP?


@psmukhopadhyay commented on GitHub (Feb 5, 2025):

Today, I encountered the same issue on my Ubuntu 22.04 LTS and found a solution that made my Ollama instance great again. It is actually an issue with the name resolution service on the local machine.

Step 1: Edit systemd-resolved Configuration

Run:
sudo nano /etc/systemd/resolved.conf

Find the line that starts with:
#DNS=
Uncomment it (remove #) and set Google’s DNS:
DNS=8.8.8.8 8.8.4.4

Also, find:
#FallbackDNS=
Uncomment it and set Cloudflare's DNS as a backup:
FallbackDNS=1.1.1.1 1.0.0.1

Save the file.

Step 2: Restart systemd-resolved

sudo systemctl restart systemd-resolved

Step 3: Verify the New DNS

Run:
resolvectl status

You should now see:
Global
DNS Servers: 8.8.8.8 8.8.4.4
Fallback DNS Servers: 1.1.1.1 1.0.0.1

Also, check:
cat /etc/resolv.conf

It should show:
nameserver 127.0.0.53

This means systemd-resolved is handling DNS, but queries will go to Google DNS.

Step 4: Test DNS Resolution

Run:
dig google.com (or dig google.com @8.8.8.8)
or
nslookup google.com

The ANSWER SECTION should now show an IP address, confirming that DNS resolution is working.

Try Ollama run now

(Screenshot: https://github.com/user-attachments/assets/b3e8c5b5-09b0-4dd5-b568-ab831b982b35)


@cheachu commented on GitHub (Feb 13, 2025):

I have a good internet connection, but I still face it.
Currently I have one temporary solution for this:
I saw that using a VPN solves this issue.
I know this is a temporary solution, but it's working.


@SegFaulty commented on GitHub (Mar 8, 2025):

The ollama pull / download system is totally broken for me.
I cannot download a model bigger than 2 GB.
But I use a simple workaround.

  • start the server with debug on (see the ENV vars)
  • then run ollama pull {model}
  • it starts, and a short time after, it starts to complain about stalled chunks
  • at some point it presents the specific URL in the debug output
  • stop your pull (Ctrl-C)
  • copy that URL and start the download in a browser ... even if the browser fails to download it on the first try, it can resume the partly downloaded file
  • find the Ollama blob directory; there is a _partly file with the expected size (and x part files for download chunks)
  • put your browser-downloaded file in this dir, named like the partly file but without the partly suffix
  • delete all other files starting with this hash name
  • then go back to your pull command and start it again
  • now it detects the existing blob and installs it
  • ready to use your new model
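The file juggling in the middle steps above can be sketched as a small helper. Caveat: the blob directory (~/.ollama/models/blobs on a default Linux install), the sha256-<digest>-partial naming, and the OLLAMA_BLOBS override variable are all assumptions for illustration; verify the actual paths and names on your own system first.

```shell
#!/usr/bin/env bash
# Sketch of the "swap in the browser-downloaded file" steps above.
# ASSUMPTIONS: blobs live in ~/.ollama/models/blobs (overridable here via the
# illustrative OLLAMA_BLOBS variable) and an unfinished layer is named
# sha256-<digest>-partial; check both on your own install before running.
install_blob() {
    local blobs="${OLLAMA_BLOBS:-$HOME/.ollama/models/blobs}"
    local digest="$1" downloaded="$2"
    # Give the completed download the blob's final name...
    mv "$downloaded" "$blobs/sha256-$digest"
    # ...and delete the leftover partial/chunk files for that layer.
    rm -f "$blobs/sha256-$digest-partial"*
}

# Example (digest and path are placeholders):
# install_blob e9e56e8bb5f0... ~/Downloads/data
```

After this, re-running the pull command should detect the existing blob and install it, as described in the last steps above.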

@peg-leg commented on GitHub (Mar 9, 2025):

Oh this is brilliant, thanks @SegFaulty - that's definitely a workaround I'm prepared to do.


@alexpaul commented on GitHub (Apr 5, 2025):

Oops my issue was that I was on VPN 😬


@MasoudYazdi commented on GitHub (Jun 21, 2025):

Silly!
Even with :loop, my download goes up to 100 MB and then comes back down to 50 MB!
How is that possible?


@shafenbadar commented on GitHub (Apr 13, 2026):

Download the bigger file with IDM, place it in the models/blobs folder, and run that model with Ollama (ollama run <model>); Ollama will complete the prerequisites automatically and run the model.
https://github.com/ollama/ollama/issues/8484#issuecomment-4238393291

Reference: github-starred/ollama#27705