[GH-ISSUE #12750] Trying to pull models does not work on multiple machines #8454

Closed
opened 2026-04-12 21:08:21 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @simonaden on GitHub (Oct 23, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12750

What is the issue?

I am from Germany and own a MacBook Pro with the M1 Pro chip and a Windows machine (Windows 10, i7-4770, 32 GB RAM, GTX 1070) running Docker. On Windows I run ollama inside Docker with the latest image and Docker version 4.48.0.

When I try to run "ollama pull model", the download starts, but after a little while the download speed drops to zero and the following error occurs:

"...part 2 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."

Even after pressing ctrl+c multiple times I can eventually get the download to 100%, but the model is never installed, so it is not usable.

This issue exists on both machines, in totally different locations (home, university, work), and for all models I tried to download (qwen3:4b, llama3.1:8b, gemma3:4b, ...). I would guess it's the same for all models.

It previously worked on both machines without any issues, as recently as 3 days ago. So I am wondering whether this is some issue on my end or an issue with ollama in Germany.
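One way to narrow down the "100% downloaded but not usable" symptom: ollama stores each model layer as a blob file named after its SHA-256 digest, so a truncated or corrupted download can be detected by re-hashing the blobs. This is a diagnostic sketch, not something from the thread; the path below is the default location (`~/.ollama/models/blobs`), and on macOS `sha256sum` may need to be replaced with `shasum -a 256`:

```shell
# Sketch of a blob sanity check, assuming the default ollama blob layout,
# where each layer is stored as a file named sha256-<digest>. If a pull
# reports 100% but the model is unusable, re-hashing the blobs can reveal
# a corrupted partial file. On macOS, use `shasum -a 256` instead.
check_blobs() {
  local dir=$1 blob want got
  for blob in "$dir"/sha256-*; do
    [ -e "$blob" ] || continue          # glob matched nothing
    want="${blob##*sha256-}"            # expected digest from the filename
    got=$(sha256sum "$blob" | awk '{print $1}')
    [ "$want" = "$got" ] || echo "corrupt: $blob"
  done
}

# Example: check_blobs "$HOME/.ollama/models/blobs"
```

Deleting a corrupt blob and re-pulling forces ollama to fetch that layer again.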

Relevant log output

..part 2 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection.
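The error message itself suggests the workaround of interrupting and re-running `ollama pull`; since pulls resume partially downloaded layers, retries do not start from scratch. A minimal sketch of scripting that manual loop (the attempt count and delay are arbitrary choices, not from the thread):

```shell
# Retry a flaky command until it succeeds or the attempt budget runs out.
# Each retry re-establishes the connection, which is what the stall
# message recommends doing by hand with ctrl-c.
retry_pull() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    echo "attempt $i failed; retrying..." >&2
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example: retry_pull 5 ollama pull qwen3:4b
```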

OS

No response

GPU

No response

CPU

No response

Ollama version

ollama version is 0.12.6 on both machines

GiteaMirror added the bug label 2026-04-12 21:08:21 -05:00

@simonaden commented on GitHub (Oct 23, 2025):

I also tried using the GUI on Mac for the download; the issue also appears there for all models. The download starts, works for a while, and then stops.


@rick-github commented on GitHub (Oct 23, 2025):

Seems to be a server-side problem. I have tried downloading the model GGUF from multiple sites on different networks and they all stall. I'm doing the downloads with curl and wget, so it's not an ollama client issue.


@jakeogrady commented on GitHub (Oct 23, 2025):

I am having an issue attempting to download with `ollama pull llama3.1:latest`. My speed drops to KB/s.


@rick-github commented on GitHub (Oct 23, 2025):

Seems to be a Cloudflare CDN issue. My downloads eventually completed, but very, very slowly.


@jakeogrady commented on GitHub (Oct 23, 2025):

Yes, I noticed that the first 80-90% of the download is performant, but the last 10% takes a long time and the speed changes drastically.


@sparkxdev commented on GitHub (Oct 23, 2025):

I can confirm. The last 10% trickles in at KB/s and takes several times longer than the first 90%.


@simonaden commented on GitHub (Oct 23, 2025):

Yeah, on my machines the downloads eventually finish as well; they just take a while once the speed drops from MB/s to KB/s.

Reference: github-starred/ollama#8454