[GH-ISSUE #2850] ollama push and ollama pull are slow or hang on windows #48248

Closed
opened 2026-04-28 07:19:40 -05:00 by GiteaMirror · 14 comments
Owner

Originally created by @ewebgh33 on GitHub (Mar 1, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2850

Originally assigned to: @mxyng on GitHub.

Can't download ANY models.

What is happening? It's not my internet; a speed test shows full bandwidth.
Are your servers OK?

Is the Windows version still buggy? Using the latest, 0.1.27 (Win11).

As per docs, set Windows environment variable to:
OLLAMA_MODELS = D:\AI\text\ollama-models
I am familiar with environment variables and this worked with llama2 a few days ago.
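For reference, the same variable can be set persistently from a terminal; a minimal sketch (the path is the one from this report, adjust to your own drive):

```shell
# Windows (PowerShell or cmd): persist OLLAMA_MODELS for the current user
setx OLLAMA_MODELS "D:\AI\text\ollama-models"
# restart the Ollama server/tray app afterwards so the new value is picked up
```

Note that `setx` only affects newly started processes, which is why the restart matters.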

Now in PowerShell:
`ollama pull phind-codellama`
It says the download will take 99 hrs, downloads 82 kB, then quits with:
`Error: context canceled`

Just tried codellama:70b, same thing: 99 hrs, then it cancels with the same error.

This is why people ask, "why can't we just use a GGUF" or AWQ or whatever. Multiple sources host these models, but we need hashed files and blobs that only Ollama has. Centralised models are a single point of failure; this ticket is a case in point.
Some models I already have as I run them in Oobabooga. Is it too much to ask I could use these already downloaded (and large) models? And not need two copies of the same thing, albeit in different formats.

Rebooted - no change. Can't download ANY models.

GiteaMirror added the networking and bug labels 2026-04-28 07:19:44 -05:00

@ewebgh33 commented on GitHub (Mar 2, 2024):

This is still happening today. I tried again and it got to 75 kB (total downloaded, not speed) before it got so slow that it errored out and killed itself.


@AstralTomate commented on GitHub (Mar 2, 2024):

Turn off the windows firewall or make a rule exception. That worked for me.
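For anyone preferring a rule exception over disabling the firewall entirely, a hedged PowerShell sketch (run as Administrator; the `ollama.exe` path below is an assumed default install location, adjust to your system):

```shell
# PowerShell, elevated: allow outbound traffic for ollama.exe
# NOTE: the program path is an assumption, not a confirmed install location
New-NetFirewallRule -DisplayName "Ollama" -Direction Outbound `
  -Program "$env:LOCALAPPDATA\Programs\Ollama\ollama.exe" -Action Allow
```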


@ewebgh33 commented on GitHub (Mar 2, 2024):

I managed to download codellama:34b a short time back, but downloading is hella buggy; it seems to fail most of the time and then luckily just succeeds at random.

I would not have thought the firewall was an issue, as previously I was able to download anything and everything via Ollama running in WSL. It's the Windows native version that is giving me trouble. I've never had problems downloading anything, really; just this, now.


@dhiltgen commented on GitHub (Mar 7, 2024):

I'm sorry you're having trouble downloading models. We had a few glitches last week on the hub although I don't think those would explain what you're seeing. Are you still experiencing slow downloads? Did changing firewall settings have any impact? Do you have any 3rd party AV software on your system that might be doing packet inspection?

If you're still having problems, could you run the server with OLLAMA_DEBUG="1" set and share the logs when you're trying to download and seeing the extremely slow throughput?
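Enabling that debug flag before starting the server might look like this in PowerShell (a sketch; the log file location is what the Windows build typically uses, not something stated in this thread):

```shell
# PowerShell: enable debug logging for this session only, then start the server
$env:OLLAMA_DEBUG = "1"
ollama serve
# on Windows the server log is typically at $env:LOCALAPPDATA\Ollama\server.log
```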

We're working on some improvements to throttling the download to try to optimize for the available bandwidth in https://github.com/ollama/ollama/pull/2221 which may help.


@SarkarKurdish commented on GitHub (Mar 15, 2024):

> Turn off the windows firewall or make a rule exception. That worked for me.

Worked for me too, thanks


@hermitbernard commented on GitHub (Mar 24, 2024):

Turned off the firewall. Worked to pull 1 model, then stopped working if I try to download other models.


@TurningTide commented on GitHub (Mar 26, 2024):

I encountered an issue where the download speed drops from 100 kB/s to around 1 B/s for the first 1-2 minutes of a pull. After some time, though, the speed picks back up to the available bandwidth.
Windows Defender is completely disabled.


@MarsThunder commented on GitHub (May 10, 2024):

First time I experienced this; it happened today, pulling llava and solar. At first the speed is 7 B/s (molasses). So I Ctrl+C and try again; usually it will then download at normal speed. If I don't get the ASCII white progress bar within a second, it will fail; once I pull and see that progress bar within a second, it seems to be normal. I'm chalking it up to a busy download server and AI madness where everyone is downloading!
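The Ctrl+C-and-retry workaround described above can be automated; a minimal POSIX-shell sketch (the `retry` helper and the model name in the usage line are illustrative, not part of Ollama):

```shell
#!/bin/sh
# retry: run a command until it succeeds, giving up after N attempts
retry() {
  attempts=$1; shift
  i=1
  until "$@"; do
    [ "$i" -ge "$attempts" ] && return 1
    i=$((i + 1))
    sleep 1   # brief pause before retrying
  done
}

# usage (hypothetical): retry 5 ollama pull llava
```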


@bmizerany commented on GitHub (May 13, 2024):

Hi everyone,

We're actively working on slow downloads for all. Please help us test a new experimental solution. Instructions are here: https://github.com/ollama/ollama/issues/1736#issuecomment-2102983113

Please, please, please give us feedback in that issue if you can!


@dhiltgen commented on GitHub (Aug 6, 2024):

Between the improvements we've made on ollama.com and PR #6207, I think this should be resolved.

If folks are still experiencing problems on Windows after upgrading to v0.3.4, please share an updated server log and describe your network topology (firewall, proxy, etc.), and I'll reopen.


@supasympa commented on GitHub (Jun 27, 2025):

I am experiencing this. It seems to trash my local network somehow


@abshkd commented on GitHub (Jun 27, 2025):

I am not able to push either. It gets to 98%, then slows to modem speed; it shows as done but is just stuck at
`pushing...100%....` As a backup I am just dumping to Hugging Face.


@thedarkfalcon commented on GitHub (Aug 13, 2025):

I am experiencing random slowdowns to a crawl in the application. It was downloading at around 10 MB/s, then randomly slowed to a few kilobytes per second.

![Image](https://github.com/user-attachments/assets/01b77bd5-2578-455e-8772-9a7e4425bd13)

If I start downloading the file directly through my web browser I again get the expected speed:

![Image](https://github.com/user-attachments/assets/5bf935ab-da12-499f-87a9-120a2fb3f2f6)

FYI, I got the direct download link from [this website](https://ollama-direct-downloader.vercel.app/).

Stopping and starting the download seemed to fix the problem:

![Image](https://github.com/user-attachments/assets/5ff3056f-e069-41dd-b4f0-9b21c6c3d58c)

@MarsThunder commented on GitHub (Aug 13, 2025):

I just checked this morning to test. I did a `pull` instead of `run` and got a consistent 27-33 MB/s.

![Image](https://github.com/user-attachments/assets/bf193f03-905b-4225-8525-e824ff25bc78)

I wonder if it's the installed version? I am using version 0.9.6, but on Ubuntu 24.04. I just deleted the model and tried `run`, and it averaged 35 MB/s. Maybe it was the time of day, when the server was busy?

Reference: github-starred/ollama#48248