[GH-ISSUE #3939] Connection is reset when pulling model #28202

Closed
opened 2026-04-22 06:06:07 -05:00 by GiteaMirror · 1 comment

Originally created by @abcde-a on GitHub (Apr 26, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3939

What is the issue?

When I pull the model, the connection is reset:

```
(base) mac@MacBook-Pro .ollama % ollama run llama3
pulling manifest 
Error: pull model manifest: Get "https://ollama.com/token?nonce=_cUgY2K3FP-svtBREKy4-w&scope=repository%!A(MISSING)library%!F(MISSING)llama3%!A(MISSING)pull&service=ollama.com&ts=1714127334": read tcp 172.17.12.43:52571->34.120.132.20:443: read: connection reset by peer
```
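Not part of the original report, but a minimal Go sketch of this error class: "read: connection reset by peer" is what `net/http` reports when the remote peer (or a middlebox in between) tears the TCP connection down with an RST instead of a clean close. The local test server below forces an RST via `SetLinger(0)`; the listener address and `/token` path are illustrative only.

```go
package main

import (
	"fmt"
	"net"
	"net/http"
)

// resetErr starts a TCP listener that aborts every accepted connection
// with an RST (SO_LINGER=0), then returns the error an HTTP client
// observes when talking to it.
func resetErr() error {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return err
	}
	defer ln.Close()
	go func() {
		for {
			c, err := ln.Accept()
			if err != nil {
				return
			}
			c.(*net.TCPConn).SetLinger(0) // close with RST instead of FIN
			c.Close()
		}
	}()
	_, err = http.Get("http://" + ln.Addr().String() + "/token")
	return err
}

func main() {
	fmt.Println("client error:", resetErr())
}
```

On Linux this typically surfaces as `read: connection reset by peer`, matching the error in the report; the client-side code is not at fault, something on the path reset the stream.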

The following logs are available:

```
time=2024-04-26T17:12:11.605+08:00 level=INFO source=images.go:817 msg="total blobs: 0"
time=2024-04-26T17:12:11.607+08:00 level=INFO source=images.go:824 msg="total unused blobs removed: 0"
time=2024-04-26T17:12:11.607+08:00 level=INFO source=routes.go:1143 msg="Listening on 127.0.0.1:11434 (version 0.1.32)"
time=2024-04-26T17:12:11.610+08:00 level=INFO source=payload.go:28 msg="extracting embedded files" dir=/var/folders/bs/txf2fqrs30x4b_g4rrbs2lg80000gn/T/ollama2788695153/runners
time=2024-04-26T17:12:11.680+08:00 level=INFO source=payload.go:41 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2]"
[GIN] 2024/04/26 - 17:12:11 | 200 |    5.881057ms |       127.0.0.1 | HEAD     "/"
[GIN] 2024/04/26 - 17:12:11 | 404 |     929.513µs |       127.0.0.1 | POST     "/api/show"
update check failed - TypeError: fetch failed
[GIN] 2024/04/26 - 17:12:13 | 200 |  1.505523002s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2024/04/26 - 17:15:00 | 404 |    1.826652ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2024/04/26 - 17:19:00 | 200 |      22.809µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/04/26 - 17:19:00 | 404 |     219.809µs |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/04/26 - 17:19:04 | 200 |  4.030929373s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2024/04/26 - 17:19:29 | 200 |      20.244µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/04/26 - 17:19:29 | 404 |      74.281µs |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/04/26 - 17:19:30 | 200 |   839.98559ms |       127.0.0.1 | POST     "/api/pull"
[GIN] 2024/04/26 - 18:24:08 | 200 |      94.468µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/04/26 - 18:24:08 | 404 |     267.403µs |       127.0.0.1 | POST     "/api/show"
```

OS

macOS

GPU

Other

CPU

Intel

Ollama version

0.1.32
GiteaMirror added the bug label 2026-04-22 06:06:07 -05:00

@dhiltgen commented on GitHub (May 1, 2024):

Looks like a dup of #3504
Reference: github-starred/ollama#28202