[GH-ISSUE #1731] pulling manifest Error: EOF when pulling after disk is full #26747

Open
opened 2026-04-22 03:16:07 -05:00 by GiteaMirror · 10 comments
Owner

Originally created by @jmorganca on GitHub (Dec 28, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1731

Originally assigned to: @mxyng on GitHub.

To reproduce, pull with little disk space left:

```
$ ollama run deepseek-coder:33b
pulling manifest
Error: write /usr/share/ollama/.ollama/models/blobs/sha256:065b9a7416ba28634cd4efc2cd3024d4755731c1275dc0286b81b01793185fbb-partial-0: no space left on device
```

Even with more space freed up, future `ollama pull` commands fail until Ollama is restarted:

```
$ ollama run deepseek-coder:33b
pulling manifest
Error: EOF
```
GiteaMirror added the bug label 2026-04-22 03:16:07 -05:00

@mxyng commented on GitHub (Jan 16, 2024):

I haven't been able to reproduce this. Ollama behaves as expected. Here is the test I ran:

  1. Create a VM with multiple, small physical volumes
  2. Create a logical volume by attaching one of the physical volumes
  3. Format, mount, and configure as OLLAMA_MODELS
  4. Pull a large model, e.g. llama2:70b
  5. Step 4 should fail once the logical volume is full
  6. Expand the disk by attaching another physical volume to the logical volume
  7. Repeat step 4 which should resume the download where it left off

@jmorganca do you have any more details on how to reproduce this?


@Pytness commented on GitHub (Jan 31, 2024):

I just managed to unintentionally reproduce this bug.
The steps I followed:

  • Pull multiple models until the disk is full.
  • Get a disk-full error when pulling another model.
  • `ollama rm` a few models.
  • Try to `ollama pull` a model and get an `EOF` error.

Restarting ollama fixed the issue.


@wg1k commented on GitHub (May 14, 2024):

```shell-session
> ollama --version
ollama version is 0.1.34
```

Restarting didn't work for me.

After having a look at https://github.com/ollama/ollama/pull/344, I ended up with the command below:

```shell
sudo rm /usr/share/ollama/.ollama/models/blobs/*-partial*
```

@candera commented on GitHub (Jul 23, 2024):

Deleting only the zero-size blobs in `~/.ollama/models/blobs/` worked for me, and I didn't have to restart the entire download.
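The cleanup described above can be sketched as a small script. It keeps non-empty partial blobs so a later `ollama pull` can resume them; the blobs directory and the `-partial` filename suffix are taken from earlier comments and may differ across installs:

```python
from pathlib import Path

def remove_empty_partial_blobs(blobs_dir: str) -> list[str]:
    """Delete zero-byte partial download files and return their names.

    Non-empty partials are left alone so a later pull can resume
    them instead of restarting the whole download.
    """
    removed = []
    for path in sorted(Path(blobs_dir).glob("*-partial*")):
        if path.is_file() and path.stat().st_size == 0:
            path.unlink()
            removed.append(path.name)
    return removed

# Example (default blob location on a Linux user install; adjust for macOS
# or a system-wide install under /usr/share/ollama):
# remove_empty_partial_blobs(str(Path.home() / ".ollama" / "models" / "blobs"))
```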


@sezerogras commented on GitHub (Jan 28, 2025):

I had the same problem. On macOS:

  • Check disk space: `df -h`
  • Clear memory: `sudo purge`
  • `ollama rm <model name>`
  • `ollama run <new model name>`

Restarting ollama fixes the issue.


@DarkTyger commented on GitHub (Apr 7, 2025):

Confirmed.

```bash
$ ollama --version
ollama version is 0.6.4
$ ./outline.sh
pulling manifest
Error: EOF
```

The fix was to restart `ollama serve`. It seems Ollama is incorrectly caching the state of the disk, even after issuing `ollama rm` to clear up space.
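The "restart fixes it" symptom is consistent with an error being remembered in the server's long-lived download state. This is a minimal illustration of that failure mode, not Ollama's actual implementation (which is in Go); `DownloadTracker` and its fields are hypothetical names:

```python
class DownloadTracker:
    """Illustrates how an in-memory registry of downloads can keep
    serving a stale 'disk full' failure after space has been freed."""

    def __init__(self):
        self._inflight = {}  # digest -> first error seen for that blob

    def pull(self, digest: str, disk_full: bool):
        # A prior failure is remembered under the digest...
        if digest in self._inflight:
            return self._inflight[digest]  # ...and replayed on every retry
        if disk_full:
            err = OSError("no space left on device")
            self._inflight[digest] = err
            return err
        return "ok"

tracker = DownloadTracker()
tracker.pull("sha256-abc", disk_full=True)            # fails: disk full
result = tracker.pull("sha256-abc", disk_full=False)  # space freed, still fails
# Restarting the server (a fresh DownloadTracker) discards the stale entry,
# which would explain why only a restart clears the EOF error.
```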


@yoliverasPozo commented on GitHub (Apr 9, 2025):

Restarting ollama worked for me, thanks. Now I only have to work out why ollama keeps hanging while pulling models, so I don't have to use the timeout shell script.


@Nyx1197 commented on GitHub (May 15, 2025):

> Deleting only the ones of zero size in `~/.ollama/models/blobs/` worked for me, and I didn't have to restart the entire download.

Thanks, that helps a lot.


@debuglevel commented on GitHub (Nov 24, 2025):

It would be nice to see this fixed. This is an issue that is rather hard to handle automatically.


@bysiber commented on GitHub (Mar 5, 2026):

This is a painful one: once Ollama hits a disk-full state during a pull, the partial download can leave things in a broken state where even retrying with more space fails until you restart.

A few things that help recover and prevent this:

```bash
# 1. Check current disk space
df -h /

# 2. See how much space Ollama models are using
du -sh ~/.ollama/models/

# 3. Remove partial/stuck downloads
rm -rf ~/.ollama/models/blobs/*-partial-*

# 4. Restart Ollama after cleanup
pkill ollama && ollama serve &

# 5. Retry the pull
ollama pull deepseek-coder:33b
```
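The free-space check in step 1 can also be done programmatically before kicking off a large pull. A minimal sketch using Python's standard library; the 20 GB threshold is an arbitrary example, not a figure from this thread:

```python
import shutil

def has_headroom(path: str, needed_bytes: int) -> bool:
    """Return True if the filesystem holding `path` has at least
    `needed_bytes` free, e.g. before pulling a multi-GB model."""
    return shutil.disk_usage(path).free >= needed_bytes

# Example: require ~20 GB free before pulling a large model (rough estimate):
# from pathlib import Path
# models = Path.home() / ".ollama" / "models"
# if not has_headroom(str(models), 20 * 1024**3):
#     raise SystemExit("not enough disk space; clean up before pulling")
```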

To avoid running out of space in the first place, keeping dev caches clean helps a lot. Tools like Docker, Xcode DerivedData, node_modules, and Homebrew can silently eat 20-50GB on macOS.

I built [ClearDisk](https://github.com/bysiber/cleardisk) — a free, open-source macOS menu bar utility that monitors 44+ cache paths and shows which ones are eating your disk. Helps keep enough headroom for large model downloads.

Reference: github-starred/ollama#26747