[GH-ISSUE #143] error loading model: unexpectedly reached end of file #53

Closed
opened 2026-04-12 09:35:02 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @bkruger99 on GitHub (Jul 20, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/143

On a couple of models I am receiving this error:

```
llama.cpp: loading model from /Users/REDACTED/.ollama/models/blobs/sha256:d1735b93e1dc503f1045ccd6c8bd73277b18ba892befd1dc29e9b9a7822ed998
error loading model: unexpectedly reached end of file
llama_load_model_from_file: failed to load model
```

This happens with a couple of the larger models:

- nous-hermes:latest
- llama2:13b

If I run `ollama pull` against them, the manifests match up and nothing is re-pulled. Since this looks like Docker under the hood, are the models corrupt? Or something else?

Any thoughts? FWIW, llama2:latest and wizard-vicuna:latest work fine.

M2 MacBook Pro, 32 GB of RAM.

GiteaMirror added the bug label 2026-04-12 09:35:02 -05:00

@jmorganca commented on GitHub (Jul 20, 2023):

Thanks @bkruger99, will check out why this is happening


@bkruger99 commented on GitHub (Jul 20, 2023):

> Thanks @bkruger99, will check out why this is happening

Let me know if you need any additional debugging data from my side. You'll have to tell me how to enable it, other than running the server via the CLI :)


@jmorganca commented on GitHub (Jul 20, 2023):

Great! @bkruger99 is this on Mac? Thanks!


@bkruger99 commented on GitHub (Jul 20, 2023):

Yes!

Hardware:
Model Name: MacBook Pro
Model Identifier: Mac14,10
Model Number: Z174000EBLL/A
Chip: Apple M2 Pro
Total Number of Cores: 12 (8 performance and 4 efficiency)
Memory: 32 GB

OS: Ventura 13.4.1 (c)


@pdevine commented on GitHub (Jul 20, 2023):

@bkruger99 can you run:

```
shasum -a 256 ~/.ollama/models/blobs/sha256:d1735b93e1dc503f1045ccd6c8bd73277b18ba892befd1dc29e9b9a7822ed998
```

Check that the SHA sum matches, and if it doesn't you can run

```
rm ~/.ollama/models/blobs/sha256:d1735b93e1dc503f1045ccd6c8bd73277b18ba892befd1dc29e9b9a7822ed998
```

and then re-pull the image. There's a fix that I think was just merged that will make certain the sha sum is verified correctly when you're pulling the layers.
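The check above can be scripted: Ollama names each blob file after its own digest (`sha256:<hex>`), so the expected hash is embedded in the filename. A minimal sketch; the `verify_blob` helper is hypothetical, not part of the ollama CLI:

```shell
#!/bin/sh
# Blob files are named "sha256:<hex>", so the filename itself carries
# the expected digest. Compare it to the actual hash of the contents.
verify_blob() {
  blob="$1"
  expected="${blob##*sha256:}"                       # digest taken from the filename
  actual=$(shasum -a 256 "$blob" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "ok: $blob"
  else
    echo "corrupt: expected $expected, got $actual"
    return 1
  fi
}
```

If it reports a corrupt blob, remove the file and re-pull the model as described above.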


@bkruger99 commented on GitHub (Jul 21, 2023):

Yeah, there's something with the manifest not verifying the sha256 when pulling. These two models did have a network interruption when the laptop went to sleep.

```
❯ shasum -a 256 sha256:d1735b93e1dc503f1045ccd6c8bd73277b18ba892befd1dc29e9b9a7822ed998
f2a1788633ddf3edef0ee4d9d4e93c399bfeeeb7363015d7c1b630ff268cdcf5  sha256:d1735b93e1dc503f1045ccd6c8bd73277b18ba892befd1dc29e9b9a7822ed998
```

I re-pulled llama2:13b and it's happy; I'll do the same with the rest of 'em.


@pdevine commented on GitHub (Jul 21, 2023):

The next version will check the SHAs. The pull is pretty tolerant of network interruptions, but I'm wondering if the buffer somehow wrote garbage onto the end of the partial file. I haven't tested with the machine sleeping yet, though, so that could have been the reason.

I'm going to go ahead and close the issue. Feel free to re-open it though.


Reference: github-starred/ollama#53