[GH-ISSUE #1560] OLLAMA_MODELS environment variable ignored by Mac app #853

Closed
opened 2026-04-12 10:31:05 -05:00 by GiteaMirror · 15 comments
Owner

Originally created by @Crypto69 on GitHub (Dec 16, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1560

Documentation FAQ says the following:

How can I change where Ollama stores models?

To modify where models are stored, you can use the OLLAMA_MODELS environment variable. Note that on Linux this means defining OLLAMA_MODELS in a drop-in /etc/systemd/system/ollama.service.d service file, reloading systemd, and restarting the ollama service.

I have made the changes, but it doesn't seem to work when using the Ollama Mac app:

```
~ ollama list
NAME                              ID            SIZE    MODIFIED
deepseek-coder:33b                2941d6ab92f3  18 GB   3 weeks ago
deepseek-coder:33b-instruct-q2_K  92b1e8ffe46e  14 GB   3 weeks ago
deepseek-coder:6.7b               72be2442d736  3.8 GB  3 weeks ago
deepseek-coder:latest             140a485970a6  776 MB  3 weeks ago
llama2:latest                     fe938a131f40  3.8 GB  3 weeks ago
llama2-uncensored:latest          44040b922233  3.8 GB  3 weeks ago
mistral:latest                    1ab49bc0b6a8  4.1 GB  14 minutes ago
wizard-vicuna-uncensored:13b      6887722b6618  7.4 GB  3 weeks ago
wizardlm-uncensored:13b-llama2    886a369d74fc  7.4 GB  3 weeks ago
~ echo $OLLAMA_MODELS
/Volumes/ExternalHD/ollama-models
~ ollama run codellama
pulling manifest
pulling manifest
pulling manifest
pulling manifest
pulling manifest
```
However the model is still getting downloaded to ~/.ollama/models
GiteaMirror added the bug label 2026-04-12 10:31:05 -05:00

@easp commented on GitHub (Dec 17, 2023):

As a work-around, try this. Quit the menu bar app. From a Terminal window, run `OLLAMA_MODELS=THE_PATH_YOU_WANT open /Applications/Ollama.app`. I tried it on my computer and Ollama seems to pick up the environment variables.

You'll want to keep the Ollama.app from starting automatically.
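Spelled out as a terminal transcript (macOS only; the model path is just an example, and the app is assumed to be installed at `/Applications/Ollama.app`), the workaround above looks like:

```shell
# Quit the menu bar app first (or use its Quit menu item).
osascript -e 'tell application "Ollama" to quit'

# Relaunch with the variable set; per the comment above, the app
# picks up the environment the shell passes through `open`.
OLLAMA_MODELS=/Volumes/ExternalHD/ollama-models open /Applications/Ollama.app
```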


@Crypto69 commented on GitHub (Dec 18, 2023):

Thanks, that worked. Are there any plans to update the app so that you can still have it auto-start?

> As a work-around, try this. Quit the menu bar app. From a Terminal window, run `OLLAMA_MODELS=THE_PATH_YOU_WANT open /Applications/Ollama.app`. I tried it on my computer and Ollama seems to pick up the environment variables.
>
> You'll want to keep the Ollama.app from starting automatically.


@sandorvasas commented on GitHub (Dec 18, 2023):

This is a major issue for me too, I need the models to be stored on another partition on my Ubuntu.

@easp commented on GitHub (Dec 18, 2023):

@sandorvass https://github.com/jmorganca/ollama/blob/main/docs/faq.md#how-can-i-change-where-ollama-stores-models

@sandorvasas commented on GitHub (Dec 18, 2023):

@easp it doesn't work, that's the issue.


@sandorvasas commented on GitHub (Dec 18, 2023):

@easp
/etc/systemd/system/ollama.service.d

```
[Service]
Environment="OLLAMA_MODELS=/w/ollama/models/"
```

then

```
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

still downloads to ~/.ollama/models
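A common pitfall with the drop-in approach above: systemd only reads `Environment=` lines from a `*.conf` file *inside* the `ollama.service.d` directory, not from the directory path itself. A minimal drop-in would look like this (the filename `override.conf` is just the convention `systemctl edit ollama` uses):

```
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_MODELS=/w/ollama/models/"
```

After `sudo systemctl daemon-reload`, running `systemctl show ollama --property=Environment` should echo the variable back; if it doesn't, the drop-in was not picked up.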


@technovangelist commented on GitHub (Dec 19, 2023):

What OS are you on? You seem to be following the instructions for Linux, but the models are being downloaded to the path used on macOS. If you are using macOS, you will need to quit Ollama from the menu bar, then, in a new terminal window, run `OLLAMA_MODELS=/w/ollama/models/ ollama serve`. We are looking to update our Mac app to make this better.
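As an aside on why the prefixed form works: `VAR=value command` scopes the variable to that single process, which is exactly what the server needs. A portable sketch (any POSIX shell; the path is an example):

```shell
# The prefix sets the variable only for the one command it precedes;
# the surrounding shell is untouched. That is why prefixing
# `ollama serve` redirects storage without editing any config.
OLLAMA_MODELS=/tmp/models sh -c 'echo "models at: $OLLAMA_MODELS"'
# → models at: /tmp/models

echo "in parent shell: ${OLLAMA_MODELS:-unset}"
# → in parent shell: unset  (assuming the variable is not otherwise exported)
```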


@sandorvasas commented on GitHub (Dec 19, 2023):

@technovangelist Thanks, that works flawlessly. I'm on Ubuntu; the OP is on Mac. Could you update the docs? The docs only explain how to add an env var when Ollama runs as a service, and the service doesn't recognize the OLLAMA_MODELS env var.
I stopped the service and just ran `OLLAMA_MODELS=/w/ollama/models/ ollama serve`, and that does work. 💯


@Crypto69 commented on GitHub (Dec 20, 2023):

> What OS are you on? You seem to be following the instructions for Linux, but the models are being downloaded to the path used on macOS. If you are using macOS, you will need to quit Ollama from the menu bar, then, in a new terminal window, run `OLLAMA_MODELS=/w/ollama/models/ ollama serve`. We are looking to update our Mac app to make this better.

I am on macOS. No, I'm not following the instructions for Linux. On macOS you need to stop the service from the menu bar and then run `OLLAMA_MODELS=THE_PATH_YOU_WANT open /Applications/Ollama.app` in order to get the models downloaded to your directory. This works, but it would be great if the menu bar app took the environment variable into account when it is started by the system.


@zboyles commented on GitHub (Jan 4, 2024):

@Crypto69 as a clean workaround you can add an alias to your shell config, i.e. .zshrc, .bashrc, etc.

I wanted the models stored in /Volumes/T9/models, so these are the steps I took to make the environment variable work as expected. Note: obviously change the destination to your preferred path.

1. Quit Ollama from the menu bar.
2. Add the following lines to your shell configuration; I changed mine to zsh so ~/.zshrc, but your Mac might be using ~/.bashrc:

```shell
# .zshrc
export OLLAMA_MODELS="/Volumes/T9/models"
alias ollama="OLLAMA_MODELS=/Volumes/T9/models ollama $@"
```

3. Run commands normally, e.g. ollama pull dolphin-mixtral or ollama run dolphin-mistral.
4. Reload the terminal with zsh -l, or just close and reopen it.

Revert Configuration to Ollama Defaults

In order to change it back, I had to quit Ollama from the menu bar again and specify the original location for OLLAMA_MODELS. This might have resolved itself automatically after a reboot.

```shell
# .zshrc
# revert configuration back to Ollama defaults:

# alias ollama="OLLAMA_MODELS=/Volumes/T9/models ollama $@"
export OLLAMA_MODELS="~/.ollama/models"
```

Edit:
Added step 4 (restart terminal).
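One caveat with the revert snippet above: a tilde inside double quotes is not expanded by the shell, so the variable ends up holding a literal `~`. Using `$HOME` avoids that:

```shell
export OLLAMA_MODELS="~/.ollama/models"
echo "$OLLAMA_MODELS"   # → ~/.ollama/models  (literal tilde, not expanded)

# $HOME expands inside double quotes, yielding an absolute path:
export OLLAMA_MODELS="$HOME/.ollama/models"
echo "$OLLAMA_MODELS"
```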


@mxyng commented on GitHub (Jan 22, 2024):

It's possible to set `OLLAMA_MODELS` and other environment variables with `launchctl`. See the [FAQ](https://github.com/jmorganca/ollama/blob/main/docs/faq.md#where-are-models-stored) for more details. Remember to restart the app afterwards for changes to take effect.
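Spelled out (macOS only; the path is an example):

```shell
# Set the variable for the user's launchd session, then restart
# the app so it picks the value up.
launchctl setenv OLLAMA_MODELS /Volumes/ExternalHD/ollama-models
osascript -e 'tell application "Ollama" to quit'
open /Applications/Ollama.app
```

Note that `launchctl setenv` does not persist across reboots; making it permanent requires a launch agent or similar.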


@JohnnyLeuthard commented on GitHub (Jan 11, 2025):

Why is this such a difficult thing to do? It is very buggy. I had it working with the environment variable on my Mac and it just stopped working. Would it be so hard to have a config file in Ollama to specify where to store models, rather than having to do workarounds? I have been fighting this for some time now because it keeps reverting to my local drive, which is very limited on space. I'm close to abandoning Ollama and going with other tools.


@easp commented on GitHub (Jan 17, 2025):

@aricnappi `ollama ps` shows models that are currently loaded into memory. If you want to see the models you've downloaded, use `ollama list`.


@JohnnyLeuthard commented on GitHub (Jan 17, 2025):

I understand that. I'm talking about pulling new models. If I run, pull, etc., they get downloaded locally, not to where the environment variable tells them to go. If I pull a model and do a list, of course it will list them, but that's because they are already downloaded. The problem is where it stores them. It was working and now doesn't; I have had this happen a couple of times now where it just stops working. I can have both locations open, pull a new model, watch both the local and the environment-variable location, and see it download to the local folder. I have full access to the redirected location, and again, it was working. So the whole point is that it is very inconsistent in its reliability, even though nothing changed on my end and all settings are verified unchanged. I can write to that location with other commands to verify it all works; the env variable is accurate, access is accurate. It's just Ollama that doesn't honor the env variable 100% of the time.



@michalliu commented on GitHub (Feb 13, 2025):

I just created a symlink to ~/.ollama.
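The symlink approach generalizes: move the existing store to the larger disk and leave a link at the old path. A sketch using throwaway temp directories standing in for the real paths (the Ollama paths are what you would substitute):

```shell
old=$(mktemp -d)          # stands in for ~/.ollama/models
newparent=$(mktemp -d)    # stands in for /Volumes/ExternalHD

mv "$old" "$newparent/models"      # relocate the store to the big disk
ln -s "$newparent/models" "$old"   # leave a symlink at the original path

echo blob > "$old/model.bin"       # writes through the old path...
ls "$newparent/models"             # ...land in the new location
# → model.bin
```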

Reference: github-starred/ollama#853