[GH-ISSUE #4122] Delete models installed from Ollama in my Mac to free the space #2560

Closed
opened 2026-04-12 12:53:07 -05:00 by GiteaMirror · 11 comments
Owner

Originally created by @ISK-VAGR on GitHub (May 3, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4122

Hi,

I installed two Llama models using `ollama run` in the terminal. They occupy significant disk space, and I need to free space to install a different model.

I tried the `ollama rm` command, but it only deletes the file in the manifests folder, which is KBs in size. I also tried to delete those files manually, but again those are KBs, not GBs like the real models.

I need a way to delete the big files from my system.

Any clues?

Any help will be appreciated.

GiteaMirror added the model label 2026-04-12 12:53:07 -05:00

@ISK-VAGR commented on GitHub (May 3, 2024):

Hi,

Already solved. The model weights are saved in blob files, which on a Mac seem to be accessible when you run in the terminal:

open ~/.ollama/models/blobs

Those are the bloody files occupying space on your computer. ;-)
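To see what is actually eating the disk before deleting anything, here is a minimal Python sketch that lists each blob with its size, largest first. It assumes the default macOS location `~/.ollama/models/blobs`; adjust the path if your install differs.

```python
from pathlib import Path

def blob_sizes(blob_dir: Path) -> list[tuple[str, int]]:
    """Return (filename, size-in-bytes) pairs for every blob, largest first."""
    sizes = [(p.name, p.stat().st_size) for p in blob_dir.iterdir() if p.is_file()]
    return sorted(sizes, key=lambda t: t[1], reverse=True)

if __name__ == "__main__":
    blobs = Path.home() / ".ollama" / "models" / "blobs"
    if blobs.is_dir():
        for name, size in blob_sizes(blobs):
            print(f"{size / 1e9:7.2f} GB  {name}")
```

This only reads the directory; it deletes nothing, so it is safe to run before deciding what `ollama rm` should remove.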


@sealad886 commented on GitHub (May 3, 2024):

Several utilities exist to help manage your cache. It is **not** generally recommended to manually delete files from your cache, as you could irreversibly corrupt it.

From the command line, you can remove a model one at a time using:

ollama rm <model name>

For example:

> ollama list
NAME                           	ID          	SIZE  	MODIFIED     
codegemma:7b-code-fp16         	211627025485	17 GB 	2 days ago  	
codegemma:7b-instruct-fp16     	27f776c137a0	17 GB 	2 days ago  	
codellama:70b-code-q2_K        	a971fcfd33e2	25 GB 	2 days ago  	
codellama:latest               	8fdf8f752f6e	3.8 GB	10 days ago 	
command-r:latest               	b8cdfff0263c	20 GB 	4 weeks ago 

> ollama rm codellama
deleted 'codellama'
> ollama rm codellama:70b-code-q2_K 
deleted 'codellama:70b-code-q2_K'

In Python, if you have installed ollama-python:

import ollama
status = ollama.delete('codellama')   # status is either 'success' or 'error'

I've written a utility to help manage the cache, called ollamautil. Feel free to use that to manage your cache, if you find it useful!


@ISK-VAGR commented on GitHub (May 3, 2024):

@sealad886

Thanks a lot for the feedback. I really had no option but to delete the files from the cache. The problem, fundamentally, was that the `ollama rm` command didn't work. I will test your solution. Thanks a lot.


@sealad886 commented on GitHub (May 3, 2024):

The cache tries to intelligently reduce disk space by storing a single blob file that is then shared among two or more models. If the blob file wasn't deleted with `ollama rm <model>`, then it's probable that it was being used by one or more other models.

The way Ollama has implemented this pseudo-symlinking is essentially agnostic to the OS (i.e. I'm assuming their method allows the pseudo-symlink to work on Windows): each `<quant>` file in `/models/manifests/registry.ollama.ai/library/<model>/<quant>` is actually a text file that just stores the blob's `sha256` hash, which is also the name of the blob file itself.
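For the curious, the manifest-to-blob link can be inspected directly. The sketch below assumes the OCI-style layout recent Ollama versions use, where a manifest is a small JSON document with a `config` entry and a `layers` array, each carrying a `digest` field naming a blob; treat the field names as an assumption, not a guarantee for every version.

```python
import json
from pathlib import Path

def referenced_blobs(manifest_path: Path) -> set[str]:
    """Return the set of blob digests a manifest file references.

    Assumes an OCI-style JSON manifest: a 'config' entry plus a
    'layers' array, each element holding a 'digest' string.
    """
    doc = json.loads(manifest_path.read_text())
    digests = {layer["digest"] for layer in doc.get("layers", [])}
    config = doc.get("config", {})
    if "digest" in config:
        digests.add(config["digest"])
    return digests
```

Running this over every file under `models/manifests/` and diffing against the contents of `models/blobs/` would reveal any orphaned blobs, which is essentially what cache-management utilities do.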


@ISK-VAGR commented on GitHub (May 3, 2024):

> The cache tries to intelligently reduce disk space by storing a single blob file that is then shared among two or more models. If the blob file wasn't deleted with `ollama rm <model>` then it's probable that it was being used by one or more other models.
>
> The way Ollama has implemented symlinking is actually essentially agnostic to the OS (i.e. I'm assuming their method allows this pseudo-symlink to work on Windows): each `<quant>` file in `/models/manifests/registry.ollama.ai/library/<model>/<quant>` is actually a text file that just has the blob's `sha256` hash stored, which is also the name of the blob file itself.

I am not a programmer, so I have no idea about this. However, in your repo, ollamautil.py at lines 578 and 580 asks for the paths to the external and internal directories. How do I know where to find those?


@mxyng commented on GitHub (May 13, 2024):

> I tried Ollama rm command, but it only deletes the file in the manifests folder which is KBs. I also tried to delete those files manually, but again those are KBs in size not GB as the real models.

Different models can share files. These files are not removed by `ollama rm` if other models still use the same files. For example, if model A uses blobs A and B, and model B uses blobs A and C, removing model A will only remove blob B. This is likely the main source of the behaviour you're seeing.


@sanspa commented on GitHub (Jun 6, 2024):

In my experience, you can just restart Ollama after doing `ollama rm <model>`. The related blobs and cache will be deleted and you have the free space again.


@tdiprima commented on GitHub (Dec 6, 2024):

> ...in your repo the ollamautil.py in line 578 and 580 is asking for the path to external and internal DIR. How do I know where to find those?

@ISK-VAGR I realize this question was posed a while ago, but for what it's worth, you have to create those directories, delete the real directory, and create a symlink. And the readme says "Make sure you're okay with that." See: https://github.com/sealad886/ollamautil/blob/master/README.md#before-you-start


@codevalve commented on GitHub (Jan 29, 2025):

> In my experience, we can just restart ollama after doing "ollama rm model". The related blobs and cache will be deleted and we have the free space again.

@sanspa - I can confirm your solution worked for me. (MacStudio)


@greenteaismyjam commented on GitHub (Feb 21, 2025):

One-liner to delete all models from the terminal:

ollama list | awk 'NR>1 {print $1}' | xargs -I {} ollama rm {}


@ZeFifi commented on GitHub (Feb 4, 2026):

Or if you want to delete a specific model, open your terminal, then:

ollama list           # check the name of the model you want to delete
ollama rm modelName   # delete it
Reference: github-starred/ollama#2560