[GH-ISSUE #8088] pull error EOF with gemma2:27b-instruct-q8_0 #5169

Closed
opened 2026-04-12 16:17:15 -05:00 by GiteaMirror · 15 comments
Owner

Originally created by @rcanand on GitHub (Dec 13, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/8088

What is the issue?

When I call `ollama pull gemma2:27b-instruct-q8_0`, I get error `EOF`.

I have pulled other models successfully (including other gemma2 models) from the same system, and I have plenty of disk space; I'm running into this issue with just this model.
Based on web search, I suspect the file on the server itself is invalid or corrupted.

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.5.1

GiteaMirror added the bug label 2026-04-12 16:17:15 -05:00

@rick-github commented on GitHub (Dec 13, 2024):

```console
$ ollama pull gemma2:27b-instruct-q8_0
pulling manifest
pulling 1b971f02fb8f... 100% ▕██████████████████████████████████████████████████████████████████▏  28 GB
pulling 109037bec39c... 100% ▕██████████████████████████████████████████████████████████████████▏  136 B
pulling 097a36493f71... 100% ▕██████████████████████████████████████████████████████████████████▏ 8.4 KB
pulling 2490e7468436... 100% ▕██████████████████████████████████████████████████████████████████▏   65 B
pulling d55baf7df634... 100% ▕██████████████████████████████████████████████████████████████████▏  488 B
verifying sha256 digest
writing manifest
success
$ ollama run gemma2:27b-instruct-q8_0 hello
Hello! 👋

How can I help you today? 😊
```

@rcanand commented on GitHub (Dec 13, 2024):

Thanks, @rick-github - how can I debug this? I am consistently getting this EOF error for just this model - other models are working fine.


@rick-github commented on GitHub (Dec 13, 2024):

What does the following return:

```
curl -D - https://registry.ollama.ai/v2/library/gemma2/manifests/27b-instruct-q8_0
```

@rick-github commented on GitHub (Dec 13, 2024):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) might help.


@rcanand commented on GitHub (Dec 13, 2024):

Here is the curl output:

```
> curl -D - https://registry.ollama.ai/v2/library/gemma2/manifests/27b-instruct-q8_0
HTTP/2 200
date: Fri, 13 Dec 2024 15:45:00 GMT
content-type: text/plain; charset=utf-8
content-length: 857
via: 1.1 google
alt-svc: h3=":443"; ma=86400
cf-cache-status: DYNAMIC
report-to: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=mUx6k6nHkDhdKJfxlivkc8FJ4Nn6ds8TDWtZO9rQ%2B1neUiKovyn7OG%2BYl24EDQ23HEoDwXuEiYZbkYClZ5sl9MxELwOm6PsKIzaeC2ocDKgg5PAJG6EK%2Bs1WbnuPnQ6BeijOxyQ%3D"}],"group":"cf-nel","max_age":604800}
nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
server: cloudflare
cf-ray: 8f17142a9e86c399-SEA
server-timing: cfL4;desc="?proto=TCP&rtt=16096&min_rtt=15650&rtt_var=5079&sent=6&recv=10&lost=0&retrans=0&sent_bytes=2881&recv_bytes=607&delivery_rate=151781&cwnd=214&unsent_bytes=0&cid=d22e2a69e67adb02&ts=224&x=0"

{"schemaVersion":2,"mediaType":"application/vnd.docker.distribution.manifest.v2+json","config":{"mediaType":"application/vnd.docker.container.image.v1+json","digest":"sha256:d55baf7df634fde80fa35eb04c670762bc6e7c335c3e0787bd2b199a4f526c2b","size":488},"layers":[{"mediaType":"application/vnd.ollama.image.model","digest":"sha256:1b971f02fb8f2cb34d37f83dcd1d9cc982cb1b3701d0a169e45fcd8fed0750d6","size":28937388256},{"mediaType":"application/vnd.ollama.image.template","digest":"sha256:109037bec39c0becc8221222ae23557559bc594290945a2c4221ab4f303b8871","size":136},{"mediaType":"application/vnd.ollama.image.license","digest":"sha256:097a36493f718248845233af1d3fefe7a303f864fae13bc31a3a9704229378ca","size":8433},{"mediaType":"application/vnd.ollama.image.params","digest":"sha256:2490e7468436707d5156d7959cf3c6341cc46ee323084cfa3fcf30fe76e397dc","size":65}]}%
```

Server logs don't have anything related to this model:

`cat ~/.ollama/logs/server.log | grep 27b-instruct-q8_0` gives no entries.


@rcanand commented on GitHub (Dec 13, 2024):

Here is the error I get:

```
> ollama pull gemma2:27b-instruct-q8_0
pulling manifest
Error: EOF
```

@rick-github commented on GitHub (Dec 13, 2024):

Manifest looks OK. The log won't contain the model name, the server uses blob hashes. If you add the server logs there may be relevant context.
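Since the log keys on blob hashes rather than model names, a small loop like the one below can search the log for each layer of this model. It is a sketch, not an official tool: the digest prefixes are copied from the manifest shown earlier, and the log path assumes the default macOS location (override `LOG` if yours differs).

```shell
#!/bin/sh
# Search the server log for each blob digest of gemma2:27b-instruct-q8_0.
# Digest prefixes are taken from the manifest above.
LOG="${LOG:-$HOME/.ollama/logs/server.log}"
for digest in 1b971f02fb8f 109037bec39c 097a36493f71 2490e7468436 d55baf7df634; do
    printf '== %s ==\n' "$digest"
    grep "$digest" "$LOG" || echo "(no entries)"
done
```

Any digest that never appears in the log was never requested by the server, which narrows down where the pull is failing.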


@rcanand commented on GitHub (Dec 13, 2024):

[gemma2_debug.server.log](https://github.com/user-attachments/files/18128498/gemma2_debug.server.log)

I have attached the server log - the bad manifest log entries are due to macOS's tendency to create hidden "._xxx" files on some volumes (which seems unavoidable).


@rcanand commented on GitHub (Dec 13, 2024):

This may be relevant: I have overridden OLLAMA_MODELS to point to an external drive in my shell profile. If I run `ollama serve` and call `ollama pull`, the models are saved on the external disk, while the logs stay in the default ~/.ollama/logs. This issue happened in that configuration.

While debugging this issue, I found that sometimes I had the ollama mac app running (due to a restart, and the ollama app being set to start at login) instead of the shell-based server, and the app did not have the external models location configured. This led to models being downloaded to the default location ~/.ollama/models. To clean it up, I merge-moved everything from ~/.ollama/models into the external drive's models folder.

Note:

  1. This issue happened before the merge-move of models, which I did only to debug this issue. So the merge itself is not the cause of the issue.
  2. This detail may help explain some of the stuff in the logs.
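A common way to avoid this split between the menu-bar app and a shell-launched server is to set OLLAMA_MODELS in both places. Per the Ollama FAQ, the macOS app picks up environment variables set via `launchctl setenv`; the `/Volumes/External/...` path below is a placeholder, not the reporter's actual path.

```shell
# Keep the macOS app and a shell-launched server pointed at the same
# models directory. The path is a placeholder for your external drive.
launchctl setenv OLLAMA_MODELS "/Volumes/External/ollama/models"  # read by the menu-bar app
export OLLAMA_MODELS="/Volumes/External/ollama/models"            # read by `ollama serve` in this shell
# Restart the Ollama app after running launchctl setenv so it picks up the change.
```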

@rick-github commented on GitHub (Dec 13, 2024):

The server has never tried to pull 1b971f02fb8f (model weights) or d55baf7df634 (config) for 27b-instruct-q8_0.

When you merged into the external folder, did you set the ownership of the moved files? What's the result of:

```
ls -ld $OLLAMA_MODELS/blobs/{,sha256-{1b971f02fb8f,109037bec39c,097a36493f71,2490e7468436,d55baf7df634}*}
```

@rick-github commented on GitHub (Dec 13, 2024):

Also

```
ls -ld $OLLAMA_MODELS/manifests/registry.ollama.ai/library/gemma2/{,27b{,-instruct-q8_0}}
cat $OLLAMA_MODELS/manifests/registry.ollama.ai/library/gemma2/27b{,-instruct-q8_0}
```

@rcanand commented on GitHub (Dec 13, 2024):

[gemma2_debug_ls_cat_dumps.server.log](https://github.com/user-attachments/files/18129044/gemma2_debug_ls_cat_dumps.server.log)


@rcanand commented on GitHub (Dec 13, 2024):

I tried removing the gemma2/27b manifest file referenced in that dump, restarted ollama server, reran pull, but it still gave me the EOF error (so I put it back).


@rick-github commented on GitHub (Dec 13, 2024):

So it looks like the pull was started but the server was taken down halfway through. What would normally happen is that when the server restarted, it would go through the failed pulls and delete the orphaned `partial` files. However, because of all the `._xx` files, the server is abandoning any cleanup attempt, and I think the incomplete state causes a new pull to fail. I would suggest deleting all the `._xx` files, restarting the server, and checking whether the partial files are gone (`ls -l $OLLAMA_MODELS/blobs/sha256-*partial*`). If they are, try re-pulling 27b-instruct-q8_0. If not, add the recent logs and see what the new blocker is.
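The suggested cleanup can be sketched as follows (a sketch only: it assumes OLLAMA_MODELS points at the external models directory, falling back to the default location, and that the server is stopped before deleting):

```shell
#!/bin/sh
# Remove the AppleDouble "._*" sidecar files that block the server's
# startup cleanup, then (after restarting the server) check whether the
# orphaned partial downloads were reaped.
MODELS="${OLLAMA_MODELS:-$HOME/.ollama/models}"
find "$MODELS" -name '._*' -type f -delete 2>/dev/null
# ...restart the ollama server here, then:
ls -l "$MODELS"/blobs/sha256-*partial* 2>/dev/null || echo "no partial files left"
```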


@rcanand commented on GitHub (Dec 13, 2024):

Yeah, that worked, thanks! After deleting the `._` files, the pull started working (I didn't check for the partial files, just ran the pull). I need to look deeper for a way to stop macOS from creating these `._` files.
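For the record: macOS writes these AppleDouble `._*` files whenever a volume can't store extended attributes natively (common on exFAT/FAT external drives), and there is no supported way to disable that on such filesystems. macOS does ship `dot_clean(1)` to merge and remove them, so a periodic cleanup is the usual workaround. A sketch (the volume path is a placeholder):

```shell
#!/bin/sh
# Merge/remove AppleDouble "._*" sidecar files on an external volume.
# dot_clean is macOS-only; the find fallback works on any POSIX system.
VOLUME="${VOLUME:-/Volumes/External/ollama/models}"   # placeholder path
if [ -d "$VOLUME" ]; then
    if command -v dot_clean >/dev/null 2>&1; then
        dot_clean -m "$VOLUME"    # -m: always delete the ._* files after merging
    else
        find "$VOLUME" -name '._*' -type f -delete
    fi
fi
```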


Reference: github-starred/ollama#5169