[GH-ISSUE #187] Error: stream: registry responded with code 416: #76

Closed
opened 2026-04-12 09:37:07 -05:00 by GiteaMirror · 3 comments

Originally created by @codazoda on GitHub (Jul 23, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/187

Originally assigned to: @BruceMacD on GitHub.

I'm getting the following error when I try to run llama2 with ollama. I'm on an M1 Max with 64 GB of RAM running Ventura 13.4.1.

```
$ ollama run llama2
pulling manifest
Error: stream: registry responded with code 416:
```

I have a feeling something happened with the internet connection when I originally tried to fetch the model, and it seems to be forever broken. I've tried deleting `ollama` from Applications, deleting the `~/.ollama` directory, and re-installing, but that doesn't seem to work.
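For context, HTTP 416 is "Range Not Satisfiable": it is what a server returns when a client resumes a download with a `Range` byte offset that falls outside the file being served, which fits the interrupted-download theory above. The sketch below is a general HTTP illustration of that failure mode, not ollama's actual registry code; the handler and blob contents are invented for the demo.

```python
# Minimal sketch: a resume request whose Range offset is past the end of
# the blob gets a 416, the status code seen in the error above.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

BLOB = b"x" * 100  # stand-in for a model layer blob (hypothetical contents)

class RangeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        rng = self.headers.get("Range")  # e.g. "bytes=150-"
        start = int(rng.split("=")[1].split("-")[0]) if rng else 0
        if start >= len(BLOB):
            # Resume offset beyond the blob -> 416 Range Not Satisfiable
            self.send_response(416)
            self.end_headers()
            return
        self.send_response(206 if rng else 200)
        self.end_headers()
        self.wfile.write(BLOB[start:])

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), RangeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Pretend we already have 150 local bytes of a 100-byte blob and try to resume.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/blob", headers={"Range": "bytes=150-"})
status = conn.getresponse().status  # 416
conn.close()
server.shutdown()
```

A stale or corrupted partial file that is larger than the remote blob would produce exactly this exchange on resume, which is why clearing the cached model (see the workaround below the issue) is a reasonable first step.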

GiteaMirror added the bug label 2026-04-12 09:37:07 -05:00

@BruceMacD commented on GitHub (Jul 24, 2023):

As a workaround, try running `ollama rm llama2` and then `ollama pull llama2` again.


@BruceMacD commented on GitHub (Jul 24, 2023):

Are you running the client in a different location than the server by any chance? Trying to track this one down.


@codazoda commented on GitHub (Jul 24, 2023):

Sorry, I thought I had responded to this. I've resolved the problem: my VPN decrypts, inspects, and re-encrypts my traffic, effectively a MITM for network inspection. Disconnecting from the VPN resolved the issue. A clearer error about the failed SSL handshake would have been enough to help me solve this myself sooner. I suspect something is using SSL pinning to prevent the MITM.
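The pinning the commenter suspects works by comparing a hash of the certificate the server actually presents against a value recorded ahead of time; a TLS-inspecting VPN substitutes its own re-signed certificate, so the hash no longer matches and the client refuses the connection. This is a generic sketch of that idea, not ollama's verified behavior; `matches_pin` and the certificate bytes are hypothetical.

```python
import hashlib

def matches_pin(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    """Accept the connection only if the presented certificate
    hashes to the value pinned at build time."""
    return hashlib.sha256(der_cert).hexdigest() == pinned_sha256_hex

genuine = b"-----genuine-registry-cert-----"   # cert the real registry serves
pin = hashlib.sha256(genuine).hexdigest()      # fingerprint pinned in the client

mitm = b"-----vpn-proxy-cert-----"             # cert re-signed by the VPN proxy

ok_direct = matches_pin(genuine, pin)   # True: direct connection passes
ok_via_vpn = matches_pin(mitm, pin)     # False: interception is detected
```

Under interception the failure surfaces wherever the client checks the pin, which is why a "TLS verification failed" style message (rather than a bare 416) would have pointed at the VPN sooner.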
