[GH-ISSUE #8484] Issue with Ollama Model Download: Progress Reverting During Download #67520

Closed
opened 2026-05-04 10:37:44 -05:00 by GiteaMirror · 62 comments
Owner

Originally created by @mdjamilkashemporosh on GitHub (Jan 19, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8484

Originally assigned to: @bmizerany on GitHub.

What is the issue?

While downloading models using ollama run <model_name>, the progress often reverts—sometimes after 10-12% or even after 60%. The total download size also decreases before continuing. I've tested different networks but faced the same issue. A few weeks ago, I downloaded models without any problems.

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.5.7

GiteaMirror added the networking and bug labels 2026-05-04 10:37:45 -05:00

@privacyfreak84 commented on GitHub (Jan 19, 2025):

Same issue on my side


@h3isenbug commented on GitHub (Jan 19, 2025):

Same here.
Version: 0.5.7-1
Model that I'm trying to pull: llava:7b
This message is repeatedly logged while the problem occurs:

```
time=2025-01-19T16:09:54.920+03:30 level=INFO source=download.go:370 msg="170370233dd5 part 11 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
```

@h3isenbug commented on GitHub (Jan 19, 2025):

This problem happens when a connection does not receive data for more than 5 seconds.
Unfortunately the 5-second limit is hard-coded:
https://github.com/ollama/ollama/blob/main/server/download.go#L368


@rick-github commented on GitHub (Jan 19, 2025):

Might be an upstream issue, this has been reported several times over the last few weeks.

#8406, #8384, #8330, #8280

https://github.com/ollama/ollama/issues/8330#issuecomment-2574510267 contains an explanation for the observed behavior, but no resolution.


@ahmedtalaltwd7 commented on GitHub (Jan 20, 2025):

Same issue.
Version: 0.5.7
Model: granite3.1-dense:8b (or any model, even `llama3.2:1b-instruct-q2_K`).


@jyomu commented on GitHub (Jan 21, 2025):

As a temporary workaround, you can continue the download by stopping the ollama pull within 5 seconds after the download speed drops by pressing Ctrl+C, and then running it again. At least, this method works in my environment.


@ahmedtalaltwd7 commented on GitHub (Jan 22, 2025):

> As a temporary workaround, you can continue the download by stopping the `ollama pull` within 5 seconds after the download speed drops by pressing `Ctrl+C`, and then running it again. At least, this method works in my environment.

That's a tough thing when you have a slow internet connection😥 !!!


@cniebla commented on GitHub (Jan 22, 2025):

It also stalls when the SSD is busy flushing data that was first cached in memory, so a `Ctrl+C` is not usable in that case. I wonder if the 5-second limit can be altered at run time.


@zeroducksleft commented on GitHub (Jan 23, 2025):

> As a temporary workaround, you can continue the download by stopping the `ollama pull` within 5 seconds after the download speed drops by pressing `Ctrl+C`, and then running it again. At least, this method works in my environment.

Expanding on this suggestion by @jyomu, a quick-and-dirty script downloads the model unattended:

```sh
#!/bin/sh
while true; do
  timeout 10s ollama pull <model-name>
done
```

Change the timeout to a reasonable duration based on your internet speed. For example, my download progress usually reverts after 200MB at 20MB/s, so I set my timeout to 10 seconds. Hit Ctrl+C when the download is complete.
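One caveat: the loop above never exits on its own, hence the final Ctrl+C. A sketch of a variant that stops automatically, under the assumptions that coreutils `timeout` is available and that `ollama pull` exits 0 once the model is complete (`retry_until_done` is a hypothetical helper name, not part of ollama):

```shell
#!/bin/sh
# Hypothetical helper: retry a command with a per-attempt time limit
# until it exits successfully, then stop (no final Ctrl+C needed).
retry_until_done() {
  limit="$1"; shift                 # seconds allowed per attempt
  until timeout "$limit" "$@"; do   # any non-zero exit (incl. 124) => retry
    echo "attempt stopped; retrying..." >&2
    sleep 1
  done
}

# Real usage would be something like:
#   retry_until_done 10 ollama pull <model-name>
```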


@mcapodici commented on GitHub (Jan 26, 2025):

> This problem is happening when a connection does not receive data for more than 5 seconds. Unfortunately the 5 second limit is hard-coded: https://github.com/ollama/ollama/blob/main/server/download.go#L368

This is the PR that introduced that code: https://github.com/ollama/ollama/pull/1916

Probably a good feature but 60 seconds might be better?


@FogoVoar commented on GitHub (Jan 27, 2025):

It is curious how developers can create such complex software and still fail at something so basic. Canceling an entire download after just 5 seconds without receiving packets? Is this done on purpose to annoy users? I just faced a situation where, at 95% on my fifth attempt, the speed started to drop. If it weren't for @jyomu's trick I would be crying right now.


@ctx2002 commented on GitHub (Jan 30, 2025):

Still the same problem. I have turned the bash script above into a PowerShell script:

```powershell
param(
    [Parameter(Mandatory=$true)]  # Force this argument to be required
    [string]$model = ""
)

while ($true) {
    Write-Host "Attempting to download model..."
    $process = Start-Process -FilePath "ollama" -ArgumentList "pull $model" -PassThru -NoNewWindow

    try {
        $process | Wait-Process -Timeout 10 -ErrorAction Stop

        if ($process.ExitCode -eq 0) {
            Write-Host "Model downloaded successfully!"
            break
        }
        else {
            Write-Host "Download failed (Exit code: $($process.ExitCode)). Retrying..."
        }
    }
    catch {
        Write-Host "Timeout occurred, restarting download..."
        $process | Stop-Process -Force -ErrorAction SilentlyContinue
    }

    Start-Sleep -Seconds 2
}
```

I do not understand why Ollama needs this kind of script to keep downloading.
For smaller models (under 1 GB), Ollama downloads without any problem.


@bertoxic commented on GitHub (Jan 30, 2025):

I used this script and it worked perfectly: it downloaded a large model even on my very poor network without restarting.

```sh
#!/bin/sh

# Set the speed threshold in KB/s
THRESHOLD=500

# Variable to track consecutive slow speed occurrences
slow_count=0

while true; do
  # Measure connection speed (downloading a small file)
  speed=$(curl -s -w '%{speed_download}' -o /dev/null https://speed.hetzner.de/100MB.bin)

  # Check if speed is empty or invalid
  if [ -z "$speed" ] || ! echo "$speed" | grep -Eq '^[0-9]+(\.[0-9]+)?$'; then
    echo "Failed to measure speed. Retrying..."
    sleep 10
    continue
  fi

  # Convert speed from bytes/s to KB/s
  speed_kbps=$(echo "$speed / 1024" | awk '{printf "%.0f", $1}')  # Using `awk` for precision

  echo "Current speed: ${speed_kbps} KB/s"

  if [ "$speed_kbps" -lt "$THRESHOLD" ]; then
    slow_count=$((slow_count + 1))
    echo "Speed below threshold ($THRESHOLD KB/s). Slow count: $slow_count"
  else
    slow_count=0  # Reset the counter if speed is above threshold
  fi

  # Retry the pull if slow speed is detected twice in a row
  if [ "$slow_count" -ge 2 ]; then
    echo "Connection speed is slow twice in a row. Retrying pull..."
    timeout 10s ollama pull deepseek-r1:7b
    slow_count=0  # Reset the counter after a pull attempt
  fi

  # Wait for a few seconds before checking again
  sleep 7
done
```


@grav commented on GitHub (Jan 31, 2025):

On Mac:

```bash
$ brew install coreutils
$ while true; do gtimeout 10s ollama run [model]; done
```

@Skizzy-create commented on GitHub (Jan 31, 2025):

This will give you the exact script, just run it: [GitHub Issue Comment](https://github.com/ollama/ollama/issues/8652).

### For Windows:

```powershell
while ($true) {
    Write-Host "Attempting to download model..."
    $process = Start-Process -FilePath "ollama" -ArgumentList "pull deepseek-r1" -PassThru -NoNewWindow
    try {
        $process | Wait-Process -Timeout 10 -ErrorAction Stop
        if ($process.ExitCode -eq 0) {
            Write-Host "Model downloaded successfully!"
            break
        } else {
            Write-Host "Download failed (Exit code: $($process.ExitCode)). Retrying..."
        }
    } catch {
        Write-Host "Timeout occurred, restarting download..."
        $process | Stop-Process -Force -ErrorAction SilentlyContinue
    }
    Start-Sleep -Seconds 2
}
```

### For Linux/mac:

```bash
#!/bin/bash

while true; do
    echo "Attempting to download model..."
    ollama pull deepseek-r1 &
    process_pid=$!
    sleep 10

    if wait $process_pid; then
        echo "Model downloaded successfully!"
        break
    else
        echo "Download failed. Retrying..."
        kill -9 $process_pid 2>/dev/null
    fi

    sleep 2
done
```
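A caveat with the Linux/mac script above: `sleep 10` followed by `wait` doesn't actually enforce a 10-second limit, because `wait` blocks until the pull exits on its own, however long that takes. Coreutils `timeout` does enforce a limit, and signals that it fired with exit status 124, which a retry loop can key on. A minimal check of that behavior:

```shell
#!/bin/sh
# GNU coreutils `timeout` kills the command when the limit expires and
# exits with status 124, so a loop can distinguish "timed out" from
# "finished successfully".
timeout 1 sleep 5
echo "exit status: $?"   # prints "exit status: 124"
```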

@Unknownuserfrommars commented on GitHub (Feb 12, 2025):

> ### For Windows:
>
> ```powershell
> while ($true) {
>     Write-Host "Attempting to download model..."
>     $process = Start-Process -FilePath "ollama" -ArgumentList "pull deepseek-r1" -PassThru -NoNewWindow
>     try {
>         $process | Wait-Process -Timeout 10 -ErrorAction Stop
>         if ($process.ExitCode -eq 0) {
>             Write-Host "Model downloaded successfully!"
>             break
>         } else {
>             Write-Host "Download failed (Exit code: $($process.ExitCode)). Retrying..."
>         }
>     } catch {
>         Write-Host "Timeout occurred, restarting download..."
>         $process | Stop-Process -Force -ErrorAction SilentlyContinue
>     }
>     Start-Sleep -Seconds 2
> }
> ```

```
Attempting to download model...
pulling manifest
pulling 6150cb382311... 1% ▕ ▏ 115 MB/ 19 GB 16 MB/s 20m4sTpulling manifest
pulling 6150cb382311... 1% ▕ ▏ 121 MB/ 19 GB 16 MB/s 20m4sAttempting to download model...
pulling manifest
pulling 6150cb382311... 1% ▕ ▏ 235 MB/ 19 GB 16 MB/s 20m8sTimeout occurred, restarting download...
Attempting to download model...
pulling manifest
pulling 6150cb382311... 2% ▕█ ▏ 439 MB/ 19 GB 25 MB/s 12m42sTimeout occurred, restarting download...
Attempting to download model...
pulling manifest
pulling 6150cb382311... 2% ▕█ ▏ 495 MB/ 19 GB 11 MB/s 28m24sTimeout occurred, restarting download...
```

@Skizzy-create Your PS script gave me this output while I tried to download deepseek-r1:32b. Somehow it keeps doing this. I'm kinda new to this, but why does this happen? Will the model download successfully if I just wait long enough for it to (eventually) reach 100%?

Many thanks!


@Skizzy-create commented on GitHub (Feb 12, 2025):

Yes you just have to wait and the download will be complete.
Just wait till it gets to 100% and that's it.

And you can see that the model continues to download from the previous checkpoint.


@Unknownuserfrommars commented on GitHub (Feb 13, 2025):

> Yes you just have to wait and the download will be complete. Just wait till it gets to 100% and that's it.
>
> And you can see that the model continues to download from the previous checkpoint.

Okay thank you very much!


@Forest-Person commented on GitHub (Feb 19, 2025):

```bash
#!/bin/bash

while true; do
    echo "Attempting to download model..."
    ollama pull deepseek-r1 &
    process_pid=$!
    sleep 10

    if wait $process_pid; then
        echo "Model downloaded successfully!"
        break
    else
        echo "Download failed. Retrying..."
        kill -9 $process_pid 2>/dev/null
    fi

    sleep 2
done
```

Hey guys, wow, this is turning out to be a doozy, eh?

I tried the same script and it didn't work. Stuck at 90%, goes to 91%, then goes backwards. I can usually download a 5 GB quantized model in about 2.5 hours even with my sad 1 MB/s average download speed. Best wishes and thanks for all your hard work.


@bertoxic commented on GitHub (Feb 20, 2025):

> I used this script for mine and it worked perfectly; it downloaded a large model even with my very poor network, and it didn't restart:
> 
> #!/bin/sh
> 
> # Set the speed `threshold` in KB/s
> THRESHOLD=500
> 
> # Variable to track consecutive slow speed occurrences
> slow_count=0
> 
> while true; do
>   # Measure connection speed (downloading a small file)
>   speed=$(curl -s -w '%{speed_download}' -o /dev/null https://speed.hetzner.de/100MB.bin)
>   
>   # Check if speed is empty or invalid
>   if [ -z "$speed" ] || ! echo "$speed" | grep -Eq '^[0-9]+(\.[0-9]+)?$'; then
>     echo "Failed to measure speed. Retrying..."
>     sleep 10
>     continue
>   fi
> 
>   # Convert speed from bytes/s to KB/s
>   speed_kbps=$(echo "$speed / 1024" | awk '{printf "%.0f", $1}')  # Using `awk` for precision
> 
>   echo "Current speed: ${speed_kbps} KB/s"
> 
>   if [ "$speed_kbps" -lt "$THRESHOLD" ]; then
>     slow_count=$((slow_count + 1))
>     echo "Speed below threshold ($THRESHOLD KB/s). Slow count: $slow_count"
>   else
>     slow_count=0  # Reset the counter if speed is above threshold
>   fi
> 
>   # Retry the pull if slow speed is detected twice in a row
>   if [ "$slow_count" -ge 2 ]; then
>     echo "Connection speed is slow twice in a row. Retrying pull..."
>     timeout 10s ollama pull deepseek-r1:7b
>     slow_count=0  # Reset the counter after a pull attempt
>   fi
> 
>   # Wait for a few seconds before checking again
>   sleep 7
> done

@Forest-Person
Try this method; I have used it to download different LLMs and it works well even with terrible network coverage.

@SudoMds commented on GitHub (Feb 23, 2025):

For Windows you can try this. This script is an update to:

> This will give you the exact script, just run it: [GitHub Issue Comment](https://github.com/ollama/ollama/issues/8652).

> ### For Windows:
>
> ```powershell
> while ($true) {
>     Write-Host "Attempting to download model..."
>     $process = Start-Process -FilePath "ollama" -ArgumentList "pull deepseek-r1" -PassThru -NoNewWindow
>     try {
>         $process | Wait-Process -Timeout 10 -ErrorAction Stop
>         if ($process.ExitCode -eq 0) {
>             Write-Host "Model downloaded successfully!"
>             break
>         } else {
>             Write-Host "Download failed (Exit code: $($process.ExitCode)). Retrying..."
>         }
>     } catch {
>         Write-Host "Timeout occurred, restarting download..."
>         $process | Stop-Process -Force -ErrorAction SilentlyContinue
>     }
>     Start-Sleep -Seconds 2
> }
> ```
>
> ### For Linux/mac:
>
> ```bash
> #!/bin/bash
>
> while true; do
>     echo "Attempting to download model..."
>     ollama pull deepseek-r1 &
>     process_pid=$!
>     sleep 10
>
>     if wait $process_pid; then
>         echo "Model downloaded successfully!"
>         break
>     else
>         echo "Download failed. Retrying..."
>         kill -9 $process_pid 2>/dev/null
>     fi
>
>     sleep 2
> done
> ```

This script is an update to our friend's script; it just checks and retries if the process exits:

```powershell
while ($true) {
    Write-Host "Attempting to download Llama 3.3 model..."
    $process = Start-Process -FilePath "ollama" -ArgumentList "pull llama3.3" -PassThru -NoNewWindow
    try {
        # Wait for the process to complete and check if it finishes successfully
        $process | Wait-Process -Timeout 300 -ErrorAction Stop  # Increased timeout to 5 minutes (300 seconds)

        if ($process.ExitCode -eq 0) {
            Write-Host "Model downloaded successfully!"
            break  # Exit the loop if the download was successful
        } else {
            Write-Host "Download failed (Exit code: $($process.ExitCode)). Retrying..."
        }
    } catch {
        Write-Host "Timeout occurred, restarting download..."
        $process | Stop-Process -Force -ErrorAction SilentlyContinue  # Forcefully stop the process if it times out
    }
    Start-Sleep -Seconds 5  # Wait 5 seconds before retrying
}
```


@mcapodici commented on GitHub (Feb 24, 2025):

Why all the workarounds still? Someone has put in what looks like a fix https://github.com/ollama/ollama/pull/8831


@rick-github commented on GitHub (Feb 24, 2025):

There's confusion about what the actual issue is. The "reversing during download" issue has been mitigated with #8831. What I believe people are seeing now, and attributing to "reversing during download", is the slowdown that occurs when some chunks are slow. ollama runs multiple concurrent download streams to fetch different chunks of the model. At the start, most of them are downloading quickly and finish early. As the proportion of slow downloaders increases, the reported download speed goes down. Restarting the download has the effect of creating new download streams, again some of which are faster than others. The real issue is why some streams are slower than others: is it an ollama issue, or throttling at the ISP, or throttling at the server, etc.? This issue has been around for a while and nobody's really looked into it, because it's simple to work around by just restarting the download.
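The multi-stream scheme described above can be sketched locally; the background `dd` jobs below stand in for the concurrent HTTP range requests (this is an illustration of the concept, not ollama's actual code):

```shell
#!/bin/sh
# Sketch: split a "remote" file into fixed-size parts, fetch each part in
# a concurrent background job, then reassemble and verify. Local `dd`
# copies stand in for the per-chunk range requests.
src=$(mktemp); out=$(mktemp)
head -c 100000 /dev/urandom > "$src"   # fake 100 KB "model blob"
chunk=25000
size=$(wc -c < "$src")
i=0
while [ $((i * chunk)) -lt "$size" ]; do
  # each "stream" fetches one chunk concurrently
  dd if="$src" of="$out.part$i" bs="$chunk" skip="$i" count=1 2>/dev/null &
  i=$((i + 1))
done
wait                        # all chunk "downloads" finish
cat "$out".part* > "$out"   # reassemble parts in order
cmp -s "$src" "$out" && echo "reassembled OK"
```

If one of those background jobs were slow, the overall transfer would sit at that `wait` until the straggler finishes, which matches the observed tail-end slowdown.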


@rick-github commented on GitHub (Mar 4, 2025):

This should be mitigated as of 0.5.8 by #8831, and #9294 provides an overhaul of model pulling, so closing; feel free to add updates if you are still having issues.


@AlbertoSinigaglia commented on GitHub (Mar 31, 2025):

@rick-github, sorry to bother you, but it's still happening... it looks like it happens when the download speed goes above 200 Mbps; maybe it's rate-limiting?


@rick-github commented on GitHub (Mar 31, 2025):

What issue: stalling causing chunk restart, or slow chunks causing a slowdown? Logs may help.


@AlbertoSinigaglia commented on GitHub (Mar 31, 2025):

@rick-github stalling and not downloading anything pretty much, here's the logs:

time=2025-03-31T13:35:41.765Z level=INFO source=download.go:176 msg="downloading 6e7fdda508e9 in 48 1 GB part(s)"
time=2025-03-31T13:36:14.895Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 0 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:14.895Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 5 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:14.895Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 15 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:14.895Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 1 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:14.899Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 2 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:14.899Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 7 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:14.899Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 3 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:14.899Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 11 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:14.899Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 10 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:14.899Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 12 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:14.899Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 13 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:14.899Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 6 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:14.899Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 14 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:14.899Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 9 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:14.899Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 4 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:14.899Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 8 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:47.896Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 5 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:47.896Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 15 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:47.899Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 10 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:47.900Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 8 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:47.900Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 12 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:47.900Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 14 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:47.900Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 9 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:47.900Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 4 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:47.900Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 6 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:47.900Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 3 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:48.896Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 0 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:48.896Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 1 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:48.899Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 7 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:48.899Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 2 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:48.899Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 13 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:36:48.899Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 11 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
...

time=2025-03-31T13:39:11.901Z level=INFO source=download.go:373 msg="6e7fdda508e9 part 13 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-03-31T13:39:13.902Z level=INFO source=download.go:294 msg="6e7fdda508e9 part 3 attempt 0 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/6e/6e7fdda508e91cb0f63de5c15ff79ac63a1584ccafd751c07ca12b7f442101b8/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20250331%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20250331T133541Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=d754b0c4275ea213007343e43826e4a2a6cd3a2258dbee49b4ab6c93ecd2f079\": dial tcp 172.66.1.46:443: i/o timeout, retrying in 1s"
time=2025-03-31T13:39:13.902Z level=INFO source=download.go:294 msg="6e7fdda508e9 part 2 attempt 0 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/6e/6e7fdda508e91cb0f63de5c15ff79ac63a1584ccafd751c07ca12b7f442101b8/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20250331%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20250331T133541Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=d754b0c4275ea213007343e43826e4a2a6cd3a2258dbee49b4ab6c93ecd2f079\": dial tcp 172.66.1.46:443: i/o timeout, retrying in 1s"
time=2025-03-31T13:39:13.902Z level=INFO source=download.go:294 msg="6e7fdda508e9 part 6 attempt 0 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/6e/6e7fdda508e91cb0f63de5c15ff79ac63a1584ccafd751c07ca12b7f442101b8/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20250331%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20250331T133541Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=d754b0c4275ea213007343e43826e4a2a6cd3a2258dbee49b4ab6c93ecd2f079\": dial tcp 172.66.1.46:443: i/o timeout, retrying in 1s"
time=2025-03-31T13:39:13.902Z level=INFO source=download.go:294 msg="6e7fdda508e9 part 12 attempt 0 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/6e/6e7fdda508e91cb0f63de5c15ff79ac63a1584ccafd751c07ca12b7f442101b8/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20250331%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20250331T133541Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=d754b0c4275ea213007343e43826e4a2a6cd3a2258dbee49b4ab6c93ecd2f079\": dial tcp 172.66.1.46:443: i/o timeout, retrying in 1s"
time=2025-03-31T13:39:15.901Z level=INFO source=download.go:294 msg="6e7fdda508e9 part 21 attempt 0 failed: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/6e/6e7fdda508e91cb0f63de5c15ff79ac63a1584ccafd751c07ca12b7f442101b8/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20250331%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20250331T133541Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=d754b0c4275ea213007343e43826e4a2a6cd3a2258dbee49b4ab6c93ecd2f079\": dial tcp 162.159.141.50:443: i/o timeout, retrying in 1s"

it pretty much restarts the download a billion times

ctrl+c does not help; it just restarts the download from where it stopped and advances maybe 500 MB to 1 GB each time


@AlbertoSinigaglia commented on GitHub (Mar 31, 2025):

As a side note, I've also tried the following, with pretty much no luck:

while true; do   timeout 10 bash -c 'OLLAMA_HOST="localhost:6000" ollama pull qwen2.5:72b'; done

(sorry for the extra OLLAMA_HOST="localhost:6000", but I have two ollama servers)
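The loop above restarts the pull every 10 seconds whether or not it is making progress. A hedged variant (same assumed host and model, which may not match your setup) that only retries on failure, and kills each attempt after a longer per-attempt timeout so stalled connections get replaced with fresh ones, might look like:

```shell
# Sketch: retry a command until it succeeds, killing each attempt after a
# timeout so stalled connections are replaced with fresh ones.
retry_pull() {
  cmd=$1
  limit=${2:-300}   # per-attempt timeout in seconds
  until timeout "$limit" sh -c "$cmd"; do
    echo "attempt interrupted, retrying..." >&2
    sleep 2
  done
}
# e.g.: retry_pull 'OLLAMA_HOST=localhost:6000 ollama pull qwen2.5:72b' 300
```

Because ollama resumes partially downloaded chunks, each retry keeps whatever progress the previous attempt made.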


@rick-github commented on GitHub (Mar 31, 2025):

dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com has always been a problematic host. I've seen speculation that this also hosts content that occasionally falls afoul of copyright laws and that some ISPs block/limit connections to that server.

What does the following do:

curl -#L -C - -o sha256-6e7fdda508e91cb0f63de5c15ff79ac63a1584ccafd751c07ca12b7f442101b8 https://registry.ollama.ai/v2/library/qwen2.5/blobs/sha256:6e7fdda508e91cb0f63de5c15ff79ac63a1584ccafd751c07ca12b7f442101b8

@AlbertoSinigaglia commented on GitHub (Mar 31, 2025):

@rick-github getting the following:

> curl -#L -C - -o sha256-6e7fdda508e91cb0f63de5c15ff79ac63a1584ccafd751c07ca12b7f442101b8 https://registry.ollama.ai/v2/library/qwen2.5/blobs/sha256:6e7fdda508e91cb0f63de5c15ff79ac63a1584ccafd751c07ca12b7f442101b8
curl: (28) Failed to connect to dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com port 443 after 260248 ms: Couldn't connect to server

@rick-github commented on GitHub (Mar 31, 2025):

Whereas when I run the command, it downloads the model at 60 MB/s. What's the result of

host dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com

@AlbertoSinigaglia commented on GitHub (Mar 31, 2025):

@rick-github output:

host dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com
dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com has address 172.66.1.46
dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com has address 162.159.141.50
dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com has IPv6 address 2606:4700:7::12e
dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com has IPv6 address 2a06:98c1:58::12e

btw, I'd love to take a second to really thank you for the time you take to help everybody, let me know if you happen to have a buymeacoffee


I've read other issues about this suggesting it might also be the ISP, which perhaps causes Cloudflare to redirect to that host. Given that I have another ollama server on a different connection, how hard would it be to download the model from that one and then transfer it using scp? I mean, do I just need to note down the SHA of the downloaded blobs and scp them, or is there another registry somewhere that has to be updated manually?


@rick-github commented on GitHub (Mar 31, 2025):

If you have access to another server, download the model there and then package into a single file with this script:

#!/bin/bash

die() {
  echo "$1" >&2
  exit 2
}

_=$(command -v jq) || die "Need jq"
_=$(command -v cpio) || die "Need cpio"

model=$1
dst=$2

src=${OLLAMA_MODELS-/usr/share/ollama/.ollama/models}

[ -z "$model" ] && die "Usage: ${0##*/} model [output]"

domain=registry.ollama.ai
library=library
name=${model%:*} ; name=${name##*/}
tag=latest
[[ $model = *:* ]] && tag=${model#*:}
[[ $model = */*/* ]] && domain=${model%%/*}
[[ $model = */* ]] && { library=${model%/*} ; library=${library#*/} ; }
manifest="manifests/$domain/$library/$name/$tag"
[ ! -f "$src/$manifest" ] && die "Model $model not found at $src/$manifest"

files="$(
    echo $manifest ; 
    for blob in $(jq -r '.config.digest,.layers[].digest' "$src/$manifest") ; do
      echo blobs/${blob/:/-}
    done
)"

[ -n "$dst" ] && dst="$(realpath "$dst")" || dst=/dev/stdout
(cd $src && cpio -o --file "$dst" <<< "$files")

Then install it on the ollama server (adjust destination as necessary):

$ ollama pull qwen2.5:72b-instruct-q4_K_M
$ ./ollama-model-to-cpio.sh qwen2.5:72b-instruct-q4_K_M qwen2.5-72b.cpio
$ scp qwen2.5-72b.cpio ollama-server:/tmp
$ ssh ollama-server sudo -u ollama cpio -dumi --directory /usr/local/share/ollama/.ollama/models --file /tmp/qwen2.5-72b.cpio

This version creates zip files and so should be a bit more platform independent:

#!/usr/bin/env python3

import zipfile
import json
import platform
import argparse
import os
import sys

ollama_models = ""
ollama_models_user = ""

def manifest(model:str):
  elements = model.split(":")[0].split("/")
  name = elements.pop()
  library = elements.pop() if len(elements) else "library"
  domain = elements.pop() if len(elements) else "registry.ollama.ai"
  tag = model.split(":")[-1] if ":" in model else "latest"
  manifest = os.path.join("manifests", domain, library, name, tag)
  if os.path.isfile(os.path.join(ollama_models, manifest)):
    return ollama_models, manifest 
  if len(ollama_models_user) and os.path.isfile(os.path.join(ollama_models_user, manifest)):
    return ollama_models_user, manifest 
  raise Exception(f"Manifest {manifest} not found")

def modelzip(dst, model):  # model: list of model name strings
  manifests = []
  for m in model:
    manifests.append(manifest(m))
  files = []
  blobs = {}
  for m in manifests:
    files.append(m)
    j = json.load(open(os.path.join(m[0], m[1]), "r"))
    for l in [j["config"]] + j["layers"]:
      blob = l["digest"].replace(":", "-")
      if blob not in blobs:
        files.append((m[0], os.path.join("blobs", blob)))
        blobs[blob] = 1

  for f in files:
    if not os.path.exists(os.path.join(f[0], f[1])):
      raise Exception(f"Blob {f[1]} not found")
      
  with zipfile.ZipFile(dst, "w") as zip:
    for f in files:
      zip.write(os.path.join(f[0], f[1]), arcname=f[1])

def get_model_dir():
  global ollama_models
  global ollama_models_user
  if platform.system() == "Windows":
    ollama_models = os.path.expanduser("~\\.ollama\\models")
  if platform.system() == "Darwin":  # platform.system() returns "Darwin" on macOS
    ollama_models = os.path.expanduser("~/.ollama/models")
  if platform.system() == "Linux":
    ollama_models = "/usr/share/ollama/.ollama/models"
    ollama_models_user = os.path.expanduser("~/.ollama/models")
  ollama_models = os.getenv("OLLAMA_MODELS", ollama_models).rstrip("/")

def main():
  parser = argparse.ArgumentParser()
  parser.add_argument("dst")
  parser.add_argument("model", nargs='+')
  parser.add_argument("--clobber", default=False, action="store_true")
  args = parser.parse_args()

  get_model_dir()
  dst = args.dst
  if not dst.endswith((".zip", ".ZIP")):
    dst += ".zip"
  if os.path.exists(dst) and not args.clobber:
    raise Exception(f"Destination {dst} exists")
  modelzip(dst, args.model)

if __name__ == "__main__":
  try:
    main()
  except Exception as e:
    print(e)
    sys.exit(1)
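On the destination machine, the zip can be unpacked with the standard library alone, which avoids needing cpio or unzip inside a restricted environment. A minimal sketch, assuming the default models path (set OLLAMA_MODELS or pass models_dir to match your install):

```python
# Sketch: unpack a model zip produced by the script above into the
# ollama models directory, recreating manifests/ and blobs/.
import os
import zipfile

def install_model_zip(zip_path, models_dir=None):
    models_dir = models_dir or os.getenv(
        "OLLAMA_MODELS", os.path.expanduser("~/.ollama/models"))
    with zipfile.ZipFile(zip_path) as z:
        z.extractall(models_dir)
    return models_dir
```

The file should be extracted as (or chowned to) the user the ollama server runs as, so it can read the blobs.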
Author
Owner

@AlbertoSinigaglia commented on GitHub (Mar 31, 2025):

@rick-github I tried between two servers where I can install commands, though the goal is to run it on an HPC server, which has the gigabit connection that causes problems, and where I can't "decompress" the cpio file because ollama runs in a Singularity container. However, I'm not sure if this is still a "general enough" issue for you to spend time on it.

Maybe uploading ollama models to Hugging Face would fix the whole thing (I can download models from Hugging Face with no problem, since they don't come from that nasty Cloudflare)

Author
Owner

@rick-github commented on GitHub (Mar 31, 2025):

Not sure what a Singularity container is; if it's like a docker/podman container you can extract the cpio archive outside and copy it in:

```sh
$ cpio -dumi -D /tmp/models < /tmp/qwen2.5-72b.cpio
$ docker cp /tmp/models ollama:/root/.ollama
```

If downloading from HF works for you, you can just pull the model from bartowski, and then you can rename it to look like an ollama library model:

```console
$ ollama pull hf.co/bartowski/Qwen2.5-72B-Instruct-GGUF:Q4_K_M
$ ollama cp hf.co/bartowski/Qwen2.5-72B-Instruct-GGUF:Q4_K_M qwen2.5:72b-instruct-q4_K_M
```

I don't know how different it is from the ollama model but it's likely very close.

Author
Owner

@AlbertoSinigaglia commented on GitHub (Mar 31, 2025):

> Not sure what a Singularity container is; if it's like a docker/podman container you can extract the cpio archive outside and copy it in:
>
> ```sh
> $ cpio -dumi -D /tmp/models < /tmp/qwen2.5-72b.cpio
> $ docker cp /tmp/models ollama:/root/.ollama
> ```
>
> If downloading from HF works for you, you can just pull the model from bartowski, and then you can rename it to look like an ollama library model:
>
> ```console
> $ ollama pull hf.co/bartowski/Qwen2.5-72B-Instruct-GGUF:Q4_K_M
> $ ollama cp hf.co/bartowski/Qwen2.5-72B-Instruct-GGUF:Q4_K_M qwen2.5:72b-instruct-q4_K_M
> ```
>
> I don't know how different it is from the ollama model but it's likely very close.

It's a very bad copy of docker that is used on high-performance compute servers... I wish nobody would ever have to know what Singularity is.

Anyway, I'll try, though his Qwen2.5 72B GGUF was split into multiple parts, and ollama does not natively support them, so I went for the ollama registry's version

Author
Owner

@rick-github commented on GitHub (Mar 31, 2025):

`ollama pull hf.co/bartowski/Qwen2.5-72B-Instruct-GGUF:Q4_K_M` should download a single GGUF file (plus assorted supplementary files).

Author
Owner

@AlbertoSinigaglia commented on GitHub (Mar 31, 2025):

> `ollama pull hf.co/bartowski/Qwen2.5-72B-Instruct-GGUF:Q4_K_M` should download a single GGUF file (plus assorted supplementary files).

Heh, I'm moving to a 6x NVIDIA L40S server and wanted to spoil myself with a Q6 quantization, but too bad. I'll one day learn to merge the parts with llama.cpp; I saw one of your posts on how to do so

Author
Owner

@rick-github commented on GitHub (Mar 31, 2025):

```
ollama run hf.co/bartowski/Qwen2.5-72B-Instruct-GGUF:Q6_K
```
Author
Owner

@AlbertoSinigaglia commented on GitHub (Mar 31, 2025):

> ```
> ollama run hf.co/bartowski/Qwen2.5-72B-Instruct-GGUF:Q6_K
> ```

```
OLLAMA_HOST="localhost:6000" ollama pull hf.co/bartowski/Qwen2.5-72B-Instruct-GGUF:Q6_K
pulling manifest
Error: pull model manifest: 400: {"error":"The specified tag is a sharded GGUF. Ollama does not support this yet. Please use another tag or \"latest\". Follow this issue for more info: https://github.com/ollama/ollama/issues/5245"}
```

This is the "merging" I was referring to (#5245)

Author
Owner

@rick-github commented on GitHub (Mar 31, 2025):

Ah, OK. The q4_K_M quant downloads a single file, I thought the q6_K would be the same.

Author
Owner

@AlbertoSinigaglia commented on GitHub (Mar 31, 2025):

> Ah, OK. The q4_K_M quant downloads a single file, I thought the q6_K would be the same.

Yup, makes sense, but I'm seeing a trend of models over 40 GB being split across multiple files

Author
Owner

@Pablojosep commented on GitHub (Jul 30, 2025):

Same here, can't get any further than ~24%.

Author
Owner

@Clemeaux commented on GitHub (Sep 13, 2025):

Even now, 3 months later, this flaw persists. I've been using Ollama for quite a while (~1 year+) and in the early stages this never happened. But now it has become unreliable; you can more or less count on running into this problem when pulling a model. This isn't good. I wonder whether this is caused by changes in the Ollama software, or by something on the server side, where the Ollama models are stored?

Author
Owner

@laichiaheng commented on GitHub (Oct 17, 2025):

It still happens.😭

Author
Owner

@rick-github commented on GitHub (Oct 17, 2025):

Try reducing the bandwidth by setting `OLLAMA_EXPERIMENT=client2` and `OLLAMA_REGISTRY_MAXSTREAMS=1`.

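For anyone unsure how these variables are applied: they must be set in the environment of the `ollama serve` process, not the `ollama pull` client. A minimal sketch, assuming a manually launched server (systemd setups would instead use `Environment=` lines via `systemctl edit ollama`); the `echo` child here just stands in for the real server so the inheritance is visible:

```python
import os
import subprocess

# Build an environment carrying the suggested mitigation variables, then
# confirm a child process inherits them. Replace the echo command with
# ["ollama", "serve"] on a machine where ollama is installed.
env = dict(os.environ,
           OLLAMA_EXPERIMENT="client2",
           OLLAMA_REGISTRY_MAXSTREAMS="1")
out = subprocess.run(
    ["sh", "-c", 'echo "$OLLAMA_EXPERIMENT $OLLAMA_REGISTRY_MAXSTREAMS"'],
    env=env, capture_output=True, text=True)
print(out.stdout.strip())
# → client2 1
```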
Author
Owner

@cogentcoder commented on GitHub (Oct 23, 2025):

Same here even today, so it seems it has not been fixed in 10 months. This download stuff is really stupid: I tried the small model gemma3:1b and, after 745 MB out of 778 MB, it is dancing back and forth from 650 MB to 745 MB and then moving forward by 1 MB per minute, even though I have a very good connection. It is funny and annoying at the same time.

Author
Owner

@monolith-jaehoon commented on GitHub (Oct 23, 2025):

> Try reducing the bandwidth by setting `OLLAMA_EXPERIMENT=client2` and `OLLAMA_REGISTRY_MAXSTREAMS=1`.

@rick-github, in my case, it works! It also works without `OLLAMA_REGISTRY_MAXSTREAMS=1`, but in that case error logs are written on the server.

Author
Owner

@Johannett321 commented on GitHub (Oct 23, 2025):

> Same here even today, so it seems it has not been fixed in 10 months. This download stuff is really stupid: I tried the small model gemma3:1b and, after 745 MB out of 778 MB, it is dancing back and forth from 650 MB to 745 MB and then moving forward by 1 MB per minute, even though I have a very good connection. It is funny and annoying at the same time.

Same here!

Author
Owner

@yiwansk commented on GitHub (Nov 3, 2025):

This wasn't fixed and shouldn't be "closed"

Author
Owner

@svantiniho41 commented on GitHub (Jan 10, 2026):

I can also confirm this has not been fixed; I am continuously having the same problem despite trying the suggestions in this thread. The download can go from 80 percent down to 30 percent in extreme cases. Is there any alternative way to pull models, e.g. a browser download or something?

I don't mind the download cutting out, but the whole download reverting is what I find puzzling; on a metered Wi-Fi or data connection this can quickly get frustrating

Author
Owner

@Wytoo commented on GitHub (Jan 22, 2026):

This should not be closed at all. I've got a stable 1-gig landline and I'm going crazy with all the interruptions/reverts

Author
Owner

@rick-github commented on GitHub (Jan 22, 2026):

What model are you trying to pull, what errors are you getting, what mitigations have you tried, and whereabouts (approximately) are you in the world?

Author
Owner

@Wytoo commented on GitHub (Jan 22, 2026):

mistral-small3.1, and Western Europe. But I've fixed it with the two environment variables provided above:

```
OLLAMA_EXPERIMENT=client2
OLLAMA_REGISTRY_MAXSTREAMS=1
```

Author
Owner

@ChrisXtractyl commented on GitHub (Jan 31, 2026):

Still reproducible as of today (Jan 2026).

Large model downloads repeatedly stall or reset near completion.

This issue has been open for over a year without a mitigation.

Author
Owner

@rick-github commented on GitHub (Jan 31, 2026):

Mitigation shown [here](https://github.com/ollama/ollama/issues/8484#issuecomment-3785247820). Ways to help in investigating shown [here](https://github.com/ollama/ollama/issues/8484#issuecomment-3785196507).

Author
Owner

@ChrisXtractyl commented on GitHub (Jan 31, 2026):

I can confirm the mitigation `OLLAMA_EXPERIMENT=client2` makes the download complete for me.

Model: gemma3:12b
Region: Western Europe

Side effect: enabling `client2` breaks my browser-based `/api/pull` flow. The CORS preflight (OPTIONS) to `http://localhost:11434/api/pull` returns `405 Method Not Allowed`, and the frontend fails with “NetworkError when attempting to fetch resource”.

I can change my workflow, but this is not a trivial change for my setup, and it means the mitigation isn’t compatible with browser usage as-is.

Also, until today I could download the same model reliably with the same setup *without* `client2`, so this looks like a recent change/regression (at least on my side).

Given the above, I don’t understand why this issue was closed — the underlying reliability problem still exists, and the mitigation introduces a new blocker for browser-based clients.

Author
Owner

@bjf5201 commented on GitHub (Feb 3, 2026):

> Mitigation shown [here](https://github.com/ollama/ollama/issues/8484#issuecomment-3785247820). Ways to help in investigating shown [here](https://github.com/ollama/ollama/issues/8484#issuecomment-3785196507).

These do not mitigate the issue for me, at least. I was struggling to get a download to begin at all. The only difference that setting `OLLAMA_EXPERIMENT=client2` made was that the download was able to sort of get off the ground (above 100 MB), but then it showed the same behavior the others have described, where the download progress somehow reversed mid-download. Setting `OLLAMA_REGISTRY_MAXSTREAMS=1` didn't help either; I was just back to slow downloads that couldn't get over 100 MB before failing.

Environment Info:

I am using WSL2 on Windows 10 with Ubuntu 24.04. More details from running `wsl --version` below:

```
WSL version: 2.6.3.0
Kernel version: 6.6.87.2-1
WSLg version: 1.0.71
MSRDC version: 1.2.6353
Direct3D version: 1.611.1-81528511
DXCore version: 10.0.26100.1-240331-1435.ge-release
Windows version: 10.0.26220.7670
```

Ollama: version 0.15.4

I live in the mid-southeast of North America.

I got a few slightly different error messages whenever trying to run `ollama pull qwen3-coder`:

```
time=2026-02-03T13:10:40.654-05:00 level=INFO source=download.go:297 msg="1194192cf2a1 part 14 attempt 5 failed: max retries exceeded: Get \"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/11/1194192cf2a187eb02722edcc3f77b11d21f537048ce04b67ccf8ba78863006a/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20260203%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20260203T175247Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=c91417046c3d872cf069da8a9dae83f67047f61431c96a30ce817ea304eeaf9b\": net/http: TLS handshake timeout, retrying in 32s"
```

Or:

```
time=2026-02-03T13:24:00.639-05:00 level=INFO source=download.go:376 msg="1194192cf2a1 part 4 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
```

Or:

```
time=2026-02-03T13:24:18.470-05:00 level=INFO source=download.go:297 msg="1194192cf2a1 part 13 attempt 2 failed: read tcp 10.2.0.2:46050->172.64.66.1:443: read: connection reset by peer, retrying in 4s"
```

Could this be looked into further?

Let me know if there's any additional information that would be helpful!

Author
Owner

@rick-github commented on GitHub (Feb 3, 2026):

The first and third log lines look like connectivity issues: dd20bb891979d25aebc8bec07b2b3bbc has historically been a bit flaky for some users because ISPs seem to like to block the server. "connection reset by peer" could be a slow-connection timeout.

What happens if you run the following in WSL2:

```
curl -L -C - -o qwen-coder.gguf https://registry.ollama.ai/v2/library/qwen3-coder/blobs/sha256:1194192cf2a187eb02722edcc3f77b11d21f537048ce04b67ccf8ba78863006a
```

What's the "Average Dload" speed while the download is running? Does the "Current Speed" fluctuate? When it's finished, how long did it take?

Is it different if you run it in native Windows?

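As a side note on why an interrupted pull need not restart from zero: `curl -C -` resumes by sending an HTTP `Range` header computed from the bytes already on disk. A minimal sketch of that mechanism (the blob URL is illustrative, and no network request is made here):

```python
import os
import tempfile
import urllib.request

# Compute a Range header from the partial file's size, so a retry
# appends to it instead of restarting the transfer from byte zero.
def resume_request(url: str, path: str) -> urllib.request.Request:
    start = os.path.getsize(path) if os.path.exists(path) else 0
    return urllib.request.Request(url, headers={"Range": f"bytes={start}-"})

# Demo with a temp file standing in for a partially downloaded blob.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 1024)
req = resume_request("https://registry.ollama.ai/v2/library/x/blobs/sha256:y", f.name)
print(req.headers["Range"])
# → bytes=1024-
```

To actually fetch, one would pass `req` to `urllib.request.urlopen` and append the response body to the file in `"ab"` mode.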
Author
Owner

@shafenbadar commented on GitHub (Apr 13, 2026):

Smarter version: restart if the file stops growing for 2 minutes.
It's for gemma4:e2b (~7 GB).
Tweak according to your needs.

```powershell
$blobFile = "$env:USERPROFILE\.ollama\models\blobs\sha256-4e30e2665218745ef463f722c0bf86be0cab6ee676320f1cfadf91e989107448"

while ($true) {
    Write-Host "Starting ollama pull..."
    $process = Start-Process -FilePath "ollama" -ArgumentList "pull gemma4:e2b" -PassThru -NoNewWindow

    $lastSize = 0
    $stalledSeconds = 0

    while (-not $process.HasExited) {
        Start-Sleep -Seconds 10

        $currentSize = 0
        if (Test-Path $blobFile) {
            $currentSize = (Get-Item $blobFile).Length
        }

        if ($currentSize -gt $lastSize) {
            $lastSize = $currentSize
            $stalledSeconds = 0
            Write-Host "Progress: $([math]::Round($currentSize/1GB,2)) GB"
        } else {
            $stalledSeconds += 10
            Write-Host "No progress for $stalledSeconds seconds..."
        }

        # Restart if stalled for 2 minutes
        if ($stalledSeconds -ge 120) {
            Write-Host "Stalled! Restarting..."
            $process | Stop-Process -Force
            break
        }
    }

    if ($process.ExitCode -eq 0) {
        Write-Host "Done! gemma4:e2b downloaded successfully!"
        break
    }

    Start-Sleep -Seconds 5
}
```

Author
Owner

@shafenbadar commented on GitHub (Apr 13, 2026):

I found a simplified IDM workflow for downloading larger models.

Brief: download the big file with IDM, place it in the models/blobs folder, and run that model with ollama; ollama will complete the prerequisites automatically and run the model.

Details:

Step 1: Get the manifest to find the big blob hash (the biggest file in the model):

```powershell
Invoke-RestMethod "https://registry.ollama.ai/v2/library/<model>/manifests/<tag>" |
  Select-Object -ExpandProperty layers |
  Select-Object mediaType, digest,
    @{N='Size(MB)';E={[math]::Round($_.size/1MB,2)}},
    @{N='Size(GB)';E={[math]::Round($_.size/1GB,3)}}
```

Example (gemma4:e2b):

```powershell
Invoke-RestMethod "https://registry.ollama.ai/v2/library/gemma4/manifests/e2b" |
  Select-Object -ExpandProperty layers |
  Select-Object mediaType, digest,
    @{N='Size(MB)';E={[math]::Round($_.size/1MB,2)}},
    @{N='Size(GB)';E={[math]::Round($_.size/1GB,3)}}
```

Step 2: Download ONLY the big blob via IDM (or a browser):

URL: https://registry.ollama.ai/v2/library/gemma4/blobs/sha256:4e30e2665218745ef463f722c0bf86be0cab6ee676320f1cfadf91e989107448
Save folder: C:\Users\\&lt;YourName&gt;\\.ollama\models\blobs\
Filename: sha256-4e30e2665218745ef463f722c0bf86be0cab6ee676320f1cfadf91e989107448
Expected size: 6.67 GB

Step 3: Just run it with `ollama run <model>:<tag>`, for example:

```powershell
ollama run gemma4:e2b "What is 2+2? Reply in one sentence."
```

Ollama will automatically:

- Pull the manifest
- Download the small files (KB-sized, fast)
- Verify everything
- Write the manifest
- Run the model

Key insight discovered: the only reason to use IDM is the big model blob; everything else (manifest, license, params, config) is tiny and ollama handles it in seconds. Just IDM the big file → put it in the blobs folder → `ollama run <model>`.

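The manifest step in that workflow is just JSON over HTTPS, so the largest-layer lookup can be done in any language. A small sketch of the selection logic, run here against a hand-written stand-in manifest rather than a live registry response (a real one comes from `/v2/library/<model>/manifests/<tag>`, and the model name in the URL is only an example):

```python
import json

# A tiny hand-written manifest standing in for the registry's JSON reply.
manifest = json.loads(
    '{"layers": ['
    '{"digest": "sha256:aaa", "size": 100},'
    '{"digest": "sha256:bbb", "size": 700}]}'
)

# Pick the largest layer: that is the model-weights blob worth fetching
# with an external downloader; the small layers are left to ollama.
biggest = max(manifest["layers"], key=lambda layer: layer["size"])
url = f'https://registry.ollama.ai/v2/library/gemma4/blobs/{biggest["digest"]}'
print(url)
# → https://registry.ollama.ai/v2/library/gemma4/blobs/sha256:bbb
```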
Author
Owner

@ShivaMultiarmed commented on GitHub (Apr 30, 2026):

The issue occurred when I had no available space on a disk. It succeeded when I switched download folders.

Reference: github-starred/ollama#67520