[GH-ISSUE #330] ollama pull llama2:70b stuck #25905

Closed
opened 2026-04-22 01:45:22 -05:00 by GiteaMirror · 13 comments

Originally created by @sarvagnan on GitHub (Aug 11, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/330

Originally assigned to: @BruceMacD on GitHub.

I have tried to pull llama2:70b but ollama appears to be stuck in the "pulling manifest" stage. This repeats after cancelling as well. I tried pulling orca and that downloaded without any issues. I have appended the server log from the logs folder. These logs are repeated with almost identical times each run.

Thank you in advance for any help that you can provide.

[GIN] 2023/08/11 - 17:30:26 | 200 |       2.584µs |       127.0.0.1 | HEAD     "/"
2023/08/11 17:30:28 images.go:1164: redirected to: https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/8c/8c17c2ebb0ea011be9981cc3922db8ca8fa61e828c5d3f44cb6ae342bf80460b/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20230811%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20230811T120028Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=a5aa71ee7e1eb700ed450dfb3a31a31a27c13d86617fd8a08b17860894055c13
2023/08/11 17:30:31 download.go:213: success getting sha256:8c17c2ebb0ea011be9981cc3922db8ca8fa61e828c5d3f44cb6ae342bf80460b
2023/08/11 17:30:32 images.go:1164: redirected to: https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/7c/7c23fb36d80141c4ab8cdbb61ee4790102ebd2bf7aeff414453177d4f2110e5d/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20230811%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20230811T120032Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=65d955ee08e83d4b875cce6c584ce45c08ebe74d102161ffa0c26c325b027795
2023/08/11 17:30:33 download.go:213: success getting sha256:7c23fb36d80141c4ab8cdbb61ee4790102ebd2bf7aeff414453177d4f2110e5d
2023/08/11 17:30:34 images.go:1164: redirected to: https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/57/578a2e81f7064c5118b93336dbe53dff6049bbeb4a8cee6c32a87579022e1aba/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20230811%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20230811T120034Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=bfa4befa8b20e0c3a6f68b7af4764ad9a1485735da82c5d1c54a9336b107a76d
2023/08/11 17:30:35 download.go:213: success getting sha256:578a2e81f7064c5118b93336dbe53dff6049bbeb4a8cee6c32a87579022e1aba
2023/08/11 17:30:36 images.go:1164: redirected to: https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/e3/e35ab70a78c78ebbbc4d2e2eaec8259938a6a60c34ebd9fd2e0c8b20f2cdcfc5/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20230811%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20230811T120036Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=87e970f689aabcb7f6e8473b80d7dd67509b177a91df1991e67ae71387fdbf4a
2023/08/11 17:30:36 download.go:213: success getting sha256:e35ab70a78c78ebbbc4d2e2eaec8259938a6a60c34ebd9fd2e0c8b20f2cdcfc5
2023/08/11 17:30:38 images.go:1164: redirected to: https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/96/96862bb35d7760e607f893b81ddef58a0288de62aaf66200b3a0e99c3e4956e5/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20230811%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20230811T120037Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=f2022dfb695b9c4c3273a119aea47def6ffaa2e4198de415c86765df8c53729d
2023/08/11 17:30:39 download.go:213: success getting sha256:96862bb35d7760e607f893b81ddef58a0288de62aaf66200b3a0e99c3e4956e5
[GIN] 2023/08/11 - 17:30:41 | 200 | 14.927754625s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2023/08/11 - 17:30:56 | 200 |       2.208µs |       127.0.0.1 | HEAD     "/"
[GIN] 2023/08/11 - 17:34:50 | 200 |       2.709µs |       127.0.0.1 | HEAD     "/"
[GIN] 2023/08/11 - 17:34:50 | 404 |     185.542µs |       127.0.0.1 | DELETE   "/api/delete"
[GIN] 2023/08/11 - 17:35:03 | 200 |       3.083µs |       127.0.0.1 | HEAD     "/"
[GIN] 2023/08/11 - 17:35:04 | 200 |  843.496584ms |       127.0.0.1 | POST     "/api/pull"
[GIN] 2023/08/11 - 17:36:28 | 200 |       2.458µs |       127.0.0.1 | HEAD     "/"
GiteaMirror added the bug label 2026-04-22 01:45:22 -05:00

@zzn01 commented on GitHub (Aug 12, 2023):

Same issue here.
Workaround:

  1. Restart the server.
  2. Then try to pull again.
  3. It should work; if not, go back to step 1.

Hope it works for you too.
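The restart-and-pull loop above can be sketched as a small shell helper (a sketch only, not part of ollama; the restart command assumes systemd and may differ on your setup):

```shell
#!/bin/sh
# retry CMD MAX: run shell command CMD until it succeeds, at most MAX times,
# pausing briefly between attempts. Returns 0 on success, 1 if MAX is reached.
retry() {
    cmd="$1"; max="$2"; n=0
    while [ "$n" -lt "$max" ]; do
        if sh -c "$cmd"; then
            return 0
        fi
        n=$((n + 1))
        sleep 1
    done
    return 1
}

# Steps 1-3 from the comment would then be, for example:
#   retry 'sudo systemctl restart ollama && ollama pull llama2:70b' 5
```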


@sarvagnan commented on GitHub (Aug 12, 2023):

This does seem to work but essentially requires restarting and pulling multiple times if something happens in between.
What's happened for me after the bug report:

  1. Restart the server.
  2. Pull again. The model starts to download.
  3. The download stops with an unexpected EOF error (I'd originally attributed this to system sleep, but it seems to happen even when the system is awake).
  4. Start the download again, but it stalls (displays the progress bar at the same percentage it stopped at but does not download).
  5. Ctrl-C and restart the download. Now it stalls at the pulling manifest stage.
  6. Restart Ollama and then pull again. Now it downloads again.

This is probably not a problem for the smaller models, but for the large models it requires multiple restarts.


@parampavar commented on GitHub (Aug 15, 2023):

I can confirm that this happens while pulling a small model like orca, on both Linux (Ubuntu) and macOS.


@BruceMacD commented on GitHub (Aug 15, 2023):

Hi all, this was a bug in the last release where downloads got stuck on error. The fix will be in the next release soon.

You can fix it by restarting the ollama app (or restart the server if you're running from source). It should continue the download without issue once restarted.

Fix is here: https://github.com/jmorganca/ollama/pull/344


@jmorganca commented on GitHub (Aug 23, 2023):

Closing for #344 but please do re-open this if this error keeps happening.


@ashryanbeats commented on GitHub (Oct 8, 2023):

I'm hitting this issue when attempting to pull llama2:70b.

For anyone who finds themselves here, it's worth having a look at #695. My takeaway from that—happy to be corrected—is that it's better to run the pull command again instead of restarting the ollama server, which, at time of writing, seems to jettison incomplete pulls. Re-running the command seems to pick up where it left off (it's still running, so I can't say for sure yet).

System:

  • MacOS Sonoma 14.0
  • ollama 0.0.0
  • Command: ollama pull llama2:70b

@zioalex commented on GitHub (Mar 20, 2024):

Hi there,
this happened to me just now with ollama version 0.1.29.
Are we sure it was solved?


@zioalex commented on GitHub (Apr 8, 2024):

In my case I found that my proxy was delaying everything. I worked around the problem by downloading the model with huggingface-cli.


@combat007 commented on GitHub (Apr 19, 2024):

How do I resume a partially downloaded model?


@metamec commented on GitHub (Jun 12, 2024):

Downloading models using pull or run should be one of the major conveniences of the app, but it seems every time I try to download a new model lately, I'm spending half an hour trying to get past the stall.

It doesn't matter whether I Ctrl+C to quit, close the app via the systray, or straight up stop-process -name "ollama", "ollama app"; I've got to repeatedly issue pull and run requests for 30-60 minutes to convince it to start downloading.


@crazy2be commented on GitHub (Jun 19, 2024):

FWIW, this just happened to me, and the fix was exactly as described in the OP - restart the ollama server, re-run ollama pull, and voila, it works the second time!

For the time it didn't work:

Logs from ollama serve:
ollama_serve_logs.txt (https://github.com/user-attachments/files/15895004/ollama_serve_logs.txt)

Logs from ollama pull:

ollama pull llama3
pulling manifest 
pulling manifest 
pulling manifest 
pulling manifest 
pulling 6a0746a1ec1a...   4% ▕█                                          ▏ 199 MB/4.7 GB      

(Even after leaving it here for some time, the progress bar never moves. Also note that no MB/s rate or time estimate is displayed.)


@zioalex commented on GitHub (Jun 19, 2024):

Hi there, for me it was actually something different. Our proxy was blocking the download, most probably because the antimalware was unable to scan the big chunk of data. I solved it by downloading the model from HuggingFace and loading it into Ollama.
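Loading a locally downloaded GGUF file into Ollama is done with a Modelfile (a minimal sketch; the filename below is hypothetical and should match whatever you downloaded from HuggingFace):

```
# Modelfile
FROM ./llama-2-70b.Q4_K_M.gguf
```

Then `ollama create llama2:70b-local -f Modelfile` registers the model, and `ollama run llama2:70b-local` uses it as usual.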



@mcDandy commented on GitHub (Aug 28, 2024):

Just happened for me...

Cannot kill ollama. Running qwen2:72B, which is beyond what my PC can handle, and waiting for it to finish.

time=2024-08-28T22:34:59.266+02:00 level=INFO source=download.go:178 msg="648f108f6a1e part 43 attempt 0 failed: unexpected EOF, retrying in 1s"
