[GH-ISSUE #9621] Error: pull model manifest on MacOS #6278

Closed
opened 2026-04-12 17:42:06 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @georgemac-labs on GitHub (Mar 10, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9621

What is the issue?

Since today, `ollama run` does not work for models that I haven't already downloaded.

Examples:

```
$ ollama run qwq:32b
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/qwq/manifests/32b": dial tcp 104.21.75.227:443: connect: bad file descriptor
```

```
$ ollama run phi4-mini
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/phi4-mini/manifests/latest": dial tcp 104.21.75.227:443: connect: bad file descriptor
```

However, `curl` is able to fetch these URLs just fine:

```
$ curl https://registry.ollama.ai/v2/library/qwq/manifests/32b
{"schemaVersion":2,"mediaType":"application/vnd.docker.distribution.manifest.v2+json","config":{"mediaType":"application/vnd.docker.container.image.v1+json","digest":"sha256:6a8faa2fb8b028e782399922ea8eef06b55ec45e2dea1e46642b4326af2020f8","size":488},"layers":[{"mediaType":"application/vnd.ollama.image.model","digest":"sha256:c62ccde5630c20c8a9cf601861d31977d07450cad6dfdf1c661aab307107bddb","size":19851336256},{"mediaType":"application/vnd.ollama.image.template","digest":"sha256:41190096a061d4e37207f28e5306f56f55b451127a23df8e38b82dca8947cb98","size":1231},{"mediaType":"application/vnd.ollama.image.license","digest":"sha256:d18a5cc71b84bc4af394a31116bd3932b42241de70c77d2b76d69a314ec8aa12","size":11338},{"mediaType":"application/vnd.ollama.image.params","digest":"sha256:4afe5edfdb51c6d15bdb8124bd38ea5642bb6732ae3fd09abe0fefa1ace25caa","size":77}]}%
```

What I'm seeing is the same behaviour described in https://github.com/ollama/ollama/issues/7495 – if I close the menu bar app and run `ollama serve`, the error does not appear. However, in this case, it wants to redownload the model from scratch (I already had 90%).
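
For reference, the workaround from that issue amounts to something like the following (a sketch; the `osascript` line assumes the menu bar app is named "Ollama", and quitting it from the menu works just as well):

```
# Quit the menu bar app (assumes the app is named "Ollama";
# quitting it manually from the menu bar is equivalent).
osascript -e 'tell application "Ollama" to quit'

# Run the server directly from a terminal...
ollama serve

# ...then, in a second terminal, retry the pull:
ollama run qwq:32b
```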

Models that are already downloaded work as normal.

This is on macOS 13.7.2 and I have hundreds of GB of storage free.

Relevant log output

```
[GIN] 2025/03/10 - 14:05:07 | 200 |      60.291µs |       127.0.0.1 | GET      "/api/version"
2025/03/10 14:05:39 routes.go:1215: INFO server config env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/john/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false http_proxy: https_proxy: no_proxy:]"
time=2025-03-10T14:05:39.601+07:00 level=INFO source=images.go:432 msg="total blobs: 84"
time=2025-03-10T14:05:39.602+07:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-10T14:05:39.603+07:00 level=INFO source=routes.go:1277 msg="Listening on 127.0.0.1:11434 (version 0.5.13)"
time=2025-03-10T14:05:39.671+07:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=metal variant="" compute="" driver=0.0 name="" total="32.0 GiB" available="32.0 GiB"
update check failed - TypeError: fetch failed
[GIN] 2025/03/10 - 14:05:39 | 200 |      29.584µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/10 - 14:05:39 | 404 |    1.881209ms |       127.0.0.1 | POST     "/api/show"
time=2025-03-10T14:05:39.957+07:00 level=INFO source=images.go:669 msg="request failed: Get \"https://registry.ollama.ai/v2/library/qwq/manifests/32b\": dial tcp 104.21.75.227:443: connect: bad file descriptor"
[GIN] 2025/03/10 - 14:05:39 | 200 |  133.137209ms |       127.0.0.1 | POST     "/api/pull"
[GIN] 2025/03/10 - 14:05:41 | 200 |      44.125µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/10 - 14:05:41 | 404 |    7.534792ms |       127.0.0.1 | POST     "/api/show"
time=2025-03-10T14:05:41.734+07:00 level=INFO source=images.go:669 msg="request failed: Get \"https://registry.ollama.ai/v2/library/qwq/manifests/32b\": dial tcp 104.21.75.227:443: connect: bad file descriptor"
[GIN] 2025/03/10 - 14:05:41 | 200 |    10.53075ms |       127.0.0.1 | POST     "/api/pull"
```

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.5.13

GiteaMirror added the bug label 2026-04-12 17:42:06 -05:00
Author
Owner

@georgemac-labs commented on GitHub (Mar 10, 2025):

Apologies – cause was a local application-level firewall blocking the connection. That must be why the GUI app was blocked, but terminal was not.
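
For anyone who hits the same symptom: to rule out the built-in macOS application firewall specifically (third-party firewalls have their own UIs), the stock `socketfilterfw` tool can show its state. A quick sketch:

```
# Is the macOS application firewall enabled at all?
/usr/libexec/ApplicationFirewall/socketfilterfw --getglobalstate

# List apps with explicit allow/block rules; look for Ollama here.
/usr/libexec/ApplicationFirewall/socketfilterfw --listapps
```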

Reference: github-starred/ollama#6278