[GH-ISSUE #9877] OLLAMA_MODELS directive not respected #32226

Closed
opened 2026-04-22 13:17:05 -05:00 by GiteaMirror · 28 comments
Owner

Originally created by @vmajor on GitHub (Mar 19, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9877

What is the issue?

ollama insists on writing models to my home directory (Ubuntu), which is:

  1. too small
  2. a terrible idea to write to frequently, due to SSD wear on a system drive

I host my models on a dedicated SSD, and I have set this in ollama.service:

Environment="OLLAMA_MODELS=/media/user/AI2/models/ollama"

Since I read the other thread about the directory traversal issue, I added ollama to the same group as my user. I am assuming that would be sufficient, since it is able to traverse my user's home and tries to write there.

However, it does not work. ollama insists on writing to home instead of to the dedicated LLM drive, as directed by OLLAMA_MODELS.
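One way to check whether that traversal assumption actually holds is to test the execute (search) bit on each path component. A hedged sketch: `namei` is from util-linux, and the `sudo -u ollama` form assumes the installer created an `ollama` service user.

```shell
# Show owner/group/permissions of every component of the model path;
# traversal fails at the first component lacking the x (search) bit.
namei -l /media/user/AI2/models/ollama

# Test traversal directly as the service user (execs `test`, no shell needed,
# so it works even though the ollama user's shell is /bin/false):
sudo -u ollama test -x /media/user/AI2 && echo traversable || echo blocked
```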

Relevant log output


OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.6.2

GiteaMirror added the bug label 2026-04-22 13:17:05 -05:00

@vmajor commented on GitHub (Mar 19, 2025):

It also does not respect this in bash:

export OLLAMA_MODELS="/media/user/AI2/models/ollama"


@rick-github commented on GitHub (Mar 19, 2025):

What's the output of

systemctl cat ollama --no-pager

@vmajor commented on GitHub (Mar 19, 2025):

# /etc/systemd/system/ollama.service
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/home/usder/anaconda3/envs/smolagents/bin:/home/user/.local/bin:/home/user/.nvm/versions/node/v18.19.0/bin:/usr/bin"
Environment="OLLAMA_MODELS=/media/user/AI2/models/ollama"

[Install]
WantedBy=default.target

@rick-github commented on GitHub (Mar 19, 2025):

What's the output of

sudo journalctl -u ollama --no-pager --since=yesterday

@vmajor commented on GitHub (Mar 19, 2025):

ok... I am not sure you want it all... it is several pages worth of things like this:

Mar 19 19:04:15 KHH-Ubuntu ollama[964568]: Error: could not create directory mkdir /usr/share/ollama: permission denied
Mar 19 19:04:15 KHH-Ubuntu systemd[1]: ollama.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 19:04:15 KHH-Ubuntu systemd[1]: ollama.service: Failed with result 'exit-code'.
Mar 19 19:04:18 KHH-Ubuntu systemd[1]: ollama.service: Scheduled restart job, restart counter is at 24759.
Mar 19 19:04:18 KHH-Ubuntu systemd[1]: Started ollama.service - Ollama Service.
Mar 19 19:04:18 KHH-Ubuntu ollama[964626]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.

@rick-github commented on GitHub (Mar 19, 2025):

What's the output of

sudo journalctl -u ollama --no-pager | grep OLLAMA_MODELS | tail

@vmajor commented on GitHub (Mar 20, 2025):

There is no output, i.e. no entries.


@rick-github commented on GitHub (Mar 20, 2025):

sudo systemctl stop ollama
sudo systemctl start ollama
sudo journalctl -u ollama --no-pager

@vmajor commented on GitHub (Mar 20, 2025):

same as the previous attempt, pages of this:

Mar 20 08:59:52 KHH-Ubuntu ollama[1023264]: Error: could not create directory mkdir /usr/share/ollama: permission denied
Mar 20 08:59:52 KHH-Ubuntu systemd[1]: ollama.service: Main process exited, code=exited, status=1/FAILURE
Mar 20 08:59:52 KHH-Ubuntu systemd[1]: ollama.service: Failed with result 'exit-code'.
Mar 20 08:59:55 KHH-Ubuntu systemd[1]: ollama.service: Scheduled restart job, restart counter is at 25710.
Mar 20 08:59:55 KHH-Ubuntu systemd[1]: Started ollama.service - Ollama Service.
Mar 20 08:59:55 KHH-Ubuntu ollama[1023314]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.

@rick-github commented on GitHub (Mar 20, 2025):

ls -ld /usr/share/ollama

@vmajor commented on GitHub (Mar 20, 2025):

ls -ld /usr/share/ollama
ls: cannot access '/usr/share/ollama': No such file or directory

it is as if ollama were not even installed, but the installation method I followed was this (I did not get adventurous from the outset):

curl -fsSL https://ollama.com/install.sh | sh


@rick-github commented on GitHub (Mar 20, 2025):

grep ollama /etc/passwd
ls -l /usr/share

@vmajor commented on GitHub (Mar 20, 2025):

the only system paths for ollama are these:

/usr/lib/ollama
/usr/bin/ollama
grep ollama /etc/passwd
ollama:x:999:992::/usr/share/ollama:/bin/false

there is no ollama in /usr/share


@rick-github commented on GitHub (Mar 20, 2025):

sudo systemctl stop ollama
sudo mkdir /usr/share/ollama
sudo chown ollama:ollama /usr/share/ollama
sudo systemctl start ollama

This should allow ollama to start. It will create the private key under that directory but models will be stored in /media/user/AI2/models/ollama provided it can traverse the path.


@vmajor commented on GitHub (Mar 20, 2025):

great, thank you for this. I had to manually kill ollama serve; it was not obeying systemctl.

now the journal shows this, and there is zero chance that I will allow it to mkdir this path:

Mar 20 09:31:44 KHH-Ubuntu ollama[1059742]: Error: mkdir /media/user/AI2: permission denied

the models are in /media/user/AI2/models/ollama; it has read permissions on the path and ownership of ollama. AI2 is a mountpoint.

Sorry to vent, but this is one of the main reasons (the second is the inability to use arbitrary gguf models directly) why I am not part of the ollama ecosystem. It does not follow my wishes and configuration, which exist for a good reason.

Can ollama handle symbolic links? Maybe I will just make it think it is writing to my home directory.


@rick-github commented on GitHub (Mar 20, 2025):

It's not an ollama problem, it's a permission problem. If it can't traverse the path because of permissions, it can't access the models.

Perhaps BindPaths (https://github.com/ollama/ollama/issues/8512#issuecomment-2605177997) will suit your use case better.


@bkgoodman commented on GitHub (Mar 20, 2025):

I am having the same issue. Working through all of the above, I am seeing:

Mar 20 13:41:33 poweredge ollama[138533]: 2025/03/20 13:41:33 routes.go:1230: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/var/bkgdata/ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"

TL;DR: That included: OLLAMA_MODELS:/var/bkgdata/ollama
Yet I am seeing:

Mar 20 13:46:26 poweredge systemd[1]: Started Ollama Service.
Mar 20 13:46:26 poweredge ollama[139401]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
Mar 20 13:46:26 poweredge ollama[139401]: Error: could not create directory mkdir /usr/share/ollama: permission denied
Mar 20 13:46:26 poweredge systemd[1]: ollama.service: Main process exited, code=exited, status=1/FAILURE
Mar 20 13:46:26 poweredge systemd[1]: ollama.service: Failed with result 'exit-code'.

I've reloaded the daemon, restarted ollama, killed the process, etc., etc. It still always goes to the default /usr/share/ollama.


@rick-github commented on GitHub (Mar 20, 2025):

Ownership is wrong for /usr/share/ollama or the directory doesn't exist.


@bkgoodman commented on GitHub (Mar 20, 2025):

@rick-github Correct. It does not exist.

It is supposed to be /var/bkgdata/ollama as indicated in the OLLAMA_MODELS Environment variable.


@bkgoodman commented on GitHub (Mar 20, 2025):

...wait...

Are you saying that /usr/share/ollama must exist, and that the OLLAMA_MODELS variable should point specifically to the models directory - i.e. /var/bkgdata/ollama/.ollama/models or whatever??

And meaning that there is no way to change the /usr/share/ollama base directory?


@rick-github commented on GitHub (Mar 20, 2025):

/usr/share/ollama is the home directory of the ollama user. If you want to change that you can usermod -d /var/bkgdata/ollama ollama, or for a temporary change: HOME=/var/bkgdata/ollama ollama serve. In a service file:

[Service]
Environment="HOME=/var/bkgdata/ollama"

OLLAMA_MODELS is specifically for the directory that the models are stored in. If you don't set it, the models are stored in $HOME/.ollama/models.
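On systemd installs, the same setting can live in a drop-in so the installer-managed unit file stays untouched. A sketch, using the path from this thread: create it with sudo systemctl edit ollama, then run sudo systemctl daemon-reload and sudo systemctl restart ollama.

```
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="HOME=/var/bkgdata/ollama"
```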


@vmajor commented on GitHub (Mar 20, 2025):

> It's not an ollama problem, it's a permission problem. If it can't traverse the path because of permissions, it can't access the models.
>
> Perhaps BindPaths will suit your use case better.

BindPaths also does not work:

BindPaths=/media/user/AI2/models/ollama:/home/ollama/.ollama/models

it still insists on writing to /home/ollama/.ollama/models

Is there any way at all to make ollama store model files in a location of my choice? Please recall that attempting to give it access to the correct filepath resulted in:

Mar 20 09:31:44 KHH-Ubuntu ollama[1059742]: Error: mkdir /media/user/AI2: permission denied

Please read this again. It wants to mkdir a mountpoint. That is not an issue of read access, but a potentially catastrophic bug.

I explained that there is no way that I will give ollama permission to write anywhere outside of its target directory. ollama should not require write access to the entire path, but read access, which it has.

Saying that this is not an ollama problem but a permission problem is disingenuous. It is your product and your installer. I am not given any control over what it does, no choice of where to place its models directory, and so far it is proving impossible to have it obey any request to use a location that holds the models and has enough space. It is an architecture problem, so please advise whether it is even possible to have ollama use a dedicated directory that it CAN traverse to and has full ownership of.


@rick-github commented on GitHub (Mar 20, 2025):

> BindPaths also does not work:
>
> BindPaths=/media/user/AI2/models/ollama:/home/ollama/.ollama/models
>
> it still insists on writing to /home/ollama/.ollama/models

Well, yes, that's the point.

I make a path that I don't want anyone to be able to traverse, with a terminal directory that ollama can modify:

$ mkdir -p /tmp/user/AI2/models/ollama
$ chmod 700 /tmp/user
$ sudo chown ollama:ollama /tmp/user/AI2/models/ollama

Now I create an ollama service file with a BindPaths element:

$ systemctl cat ollama --no-pager
# /etc/systemd/system/ollama.service
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

[Install]
WantedBy=default.target

# /etc/systemd/system/ollama.service.d/override.conf
[Service]
BindPaths=/tmp/user/AI2/models/ollama:/usr/local/ollama/.ollama/models

Now start the service with the bind point:

$ sudo systemctl stop ollama
$ sudo systemctl start ollama

ollama has no models:

$ ollama list
NAME    ID    SIZE    MODIFIED 

The path we created now has manifests and blobs directories:

$ find /tmp/user
/tmp/user
/tmp/user/AI2
/tmp/user/AI2/models
/tmp/user/AI2/models/ollama
/tmp/user/AI2/models/ollama/manifests
/tmp/user/AI2/models/ollama/blobs

Pull a model:

$ ollama pull qwen2.5:0.5b
pulling manifest 
pulling c5396e06af29... 100% ▕███████████████████▏ 397 MB                         
pulling 66b9ea09bd5b... 100% ▕███████████████████▏   68 B                         
pulling eb4402837c78... 100% ▕███████████████████▏ 1.5 KB                         
pulling 832dd9e00a68... 100% ▕███████████████████▏  11 KB                         
pulling 005f95c74751... 100% ▕███████████████████▏  490 B                         
verifying sha256 digest 
writing manifest 
success 

ollama can see the new model:

$ ollama list
NAME            ID              SIZE      MODIFIED       
qwen2.5:0.5b    a8b0c5157701    397 MB    34 seconds ago    

The path we created now has data:

$ find /tmp/user
/tmp/user
/tmp/user/AI2
/tmp/user/AI2/models
/tmp/user/AI2/models/ollama
/tmp/user/AI2/models/ollama/manifests
/tmp/user/AI2/models/ollama/manifests/registry.ollama.ai
/tmp/user/AI2/models/ollama/manifests/registry.ollama.ai/library
/tmp/user/AI2/models/ollama/manifests/registry.ollama.ai/library/qwen2.5
/tmp/user/AI2/models/ollama/manifests/registry.ollama.ai/library/qwen2.5/0.5b
/tmp/user/AI2/models/ollama/blobs
/tmp/user/AI2/models/ollama/blobs/sha256-832dd9e00a68dd83b3c3fb9f5588dad7dcf337a0db50f7d9483f310cd292e92e
/tmp/user/AI2/models/ollama/blobs/sha256-eb4402837c7829a690fa845de4d7f3fd842c2adee476d5341da8a46ea9255175
/tmp/user/AI2/models/ollama/blobs/sha256-005f95c7475154a17e84b85cd497949d6dd2a4f9d77c096e3c66e4d9c32acaf5
/tmp/user/AI2/models/ollama/blobs/sha256-66b9ea09bd5b7099cbb4fc820f31b575c0366fa439b08245566692c6784e281e
/tmp/user/AI2/models/ollama/blobs/sha256-c5396e06af294bd101b30dce59131a76d2b773e76950acc870eda801d3ab0515

The files are owned by ollama:

$ ls -l /tmp/user/AI2/models/ollama/blobs
total 388516
-rw-r--r-- 1 ollama ollama       490 Mär 21 00:37 sha256-005f95c7475154a17e84b85cd497949d6dd2a4f9d77c096e3c66e4d9c32acaf5
-rw-r--r-- 1 ollama ollama        68 Mär 21 00:37 sha256-66b9ea09bd5b7099cbb4fc820f31b575c0366fa439b08245566692c6784e281e
-rw-r--r-- 1 ollama ollama     11343 Mär 21 00:37 sha256-832dd9e00a68dd83b3c3fb9f5588dad7dcf337a0db50f7d9483f310cd292e92e
-rw-r--r-- 1 ollama ollama 397807936 Mär 21 00:37 sha256-c5396e06af294bd101b30dce59131a76d2b773e76950acc870eda801d3ab0515
-rw-r--r-- 1 ollama ollama      1482 Mär 21 00:37 sha256-eb4402837c7829a690fa845de4d7f3fd842c2adee476d5341da8a46ea9255175

The mount point is still opaque to everybody except the owner:

$ ls -ld /tmp/user
drwx------ 3 rick rick 4096 Mär 21 00:03 /tmp/user

> Is there any way at all to make ollama store model files in a location of my choice? Please recall that attempting to give it access to the correct filepath resulted in `Mar 20 09:31:44 KHH-Ubuntu ollama[1059742]: Error: mkdir /media/user/AI2: permission denied` Please read this again. It wants to mkdir a mountpoint. That is not an issue of read access, but a potentially catastrophic bug.

It's not a bug. You've explicitly told ollama you want to store files in /media/user/AI2/models/ollama. In order to access the ollama terminal directory, ollama has to traverse the file path. In doing so, it found that it could not access an element of that path. This is because ollama has no access permission for the parent directory. Doing a chmod o+x on that directory would allow ollama to access it and continue with the path traversal. Since you don't want to do that, the alternative is to mount the terminal directory in a place where ollama can access it with BindPaths.
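The read vs. execute distinction described here can be demonstrated in isolation. A self-contained sketch on a throwaway directory; nothing in it touches ollama itself:

```shell
# Path traversal needs only the x (search) bit on intermediate directories;
# the r (read) bit governs listing a directory's contents.
d=$(mktemp -d)
mkdir -p "$d/parent/target"
chmod 0111 "$d/parent"   # execute-only: pass-through allowed, listing denied
                         # (for non-root users; root bypasses DAC checks)

ls "$d/parent/target"                              # works: traversed parent via x
ls "$d/parent" 2>/dev/null || echo "cannot list parent (no r bit)"

chmod 0700 "$d/parent"   # restore permissions so cleanup succeeds
rm -rf "$d"
```

This is why `chmod o+x` on each intermediate directory is enough: it grants traversal without exposing the directory's contents.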

> I explained that there is no way that I will give ollama permission to write anywhere outside of its target directory. ollama should not require write access to the entire path, but read access, which it has.

It does not have execute (search) access to part of the path, and hence the traversal fails.

> Saying that this is not an ollama problem but a permission problem is disingenuous. It is your product and your installer. I am not given any control over what it does, no choice of where to place its models directory, and so far it is proving impossible to have it obey any request to use a location that holds the models and has enough space. It is an architecture problem, so please advise whether it is even possible to have ollama use a dedicated directory that it CAN traverse to and has full ownership of.

You are given every control over what it does.

<!-- gh-comment-id:2741905469 --> @rick-github commented on GitHub (Mar 20, 2025): > Bindpaths also does not work: > > `BindPaths=/media/user/AI2/models/ollama:/home/ollama/.ollama/models` > > it still insists on writing to `/home/ollama/.ollama/models` Well, yes, that's the point. I make a path that I don't want anyone be able to traverse, with a terminal directory that ollama can modify: ```console $ mkdir -p /tmp/user/AI2/models/ollama $ chmod 700 /tmp/user $ sudo chown ollama:ollama /tmp/user/AI2/models/ollama ``` Now I create an ollama service file with a `Bindpaths` element: ```console $ systemctl cat ollama --no-pager # /etc/systemd/system/ollama.service [Unit] Description=Ollama Service After=network-online.target [Service] ExecStart=/usr/local/bin/ollama serve User=ollama Group=ollama Restart=always RestartSec=3 Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" [Install] WantedBy=default.target # /etc/systemd/system/ollama.service.d/override.conf [Service] BindPaths=/tmp/user/AI2/models/ollama:/usr/local/ollama/.ollama/models ``` Now start the service with the bind point: ```console $ sudo systemctl stop ollama $ sudo systemctl start ollama ``` ollama has no models: ```console $ ollama list NAME ID SIZE MODIFIED ``` The path we created now has `manifests` and `blobs` directories: ```console $ find /tmp/user /tmp/user /tmp/user/AI2 /tmp/user/AI2/models /tmp/user/AI2/models/ollama /tmp/user/AI2/models/ollama/manifests /tmp/user/AI2/models/ollama/blobs ``` Pull a model: ```console $ ollama pull qwen2.5:0.5b pulling manifest pulling c5396e06af29... 100% ▕███████████████████▏ 397 MB pulling 66b9ea09bd5b... 100% ▕███████████████████▏ 68 B pulling eb4402837c78... 100% ▕███████████████████▏ 1.5 KB pulling 832dd9e00a68... 100% ▕███████████████████▏ 11 KB pulling 005f95c74751... 
100% ▕███████████████████▏ 490 B verifying sha256 digest writing manifest success ``` ollama can see the new model: ```console $ ollama list NAME ID SIZE MODIFIED qwen2.5:0.5b a8b0c5157701 397 MB 34 seconds ago ``` The path we created now has data: ```console $ find /tmp/user /tmp/user /tmp/user/AI2 /tmp/user/AI2/models /tmp/user/AI2/models/ollama /tmp/user/AI2/models/ollama/manifests /tmp/user/AI2/models/ollama/manifests/registry.ollama.ai /tmp/user/AI2/models/ollama/manifests/registry.ollama.ai/library /tmp/user/AI2/models/ollama/manifests/registry.ollama.ai/library/qwen2.5 /tmp/user/AI2/models/ollama/manifests/registry.ollama.ai/library/qwen2.5/0.5b /tmp/user/AI2/models/ollama/blobs /tmp/user/AI2/models/ollama/blobs/sha256-832dd9e00a68dd83b3c3fb9f5588dad7dcf337a0db50f7d9483f310cd292e92e /tmp/user/AI2/models/ollama/blobs/sha256-eb4402837c7829a690fa845de4d7f3fd842c2adee476d5341da8a46ea9255175 /tmp/user/AI2/models/ollama/blobs/sha256-005f95c7475154a17e84b85cd497949d6dd2a4f9d77c096e3c66e4d9c32acaf5 /tmp/user/AI2/models/ollama/blobs/sha256-66b9ea09bd5b7099cbb4fc820f31b575c0366fa439b08245566692c6784e281e /tmp/user/AI2/models/ollama/blobs/sha256-c5396e06af294bd101b30dce59131a76d2b773e76950acc870eda801d3ab0515 ``` The files are owned by ollama: ```console $ ls -l /tmp/user/AI2/models/ollama/blobs total 388516 -rw-r--r-- 1 ollama ollama 490 Mär 21 00:37 sha256-005f95c7475154a17e84b85cd497949d6dd2a4f9d77c096e3c66e4d9c32acaf5 -rw-r--r-- 1 ollama ollama 68 Mär 21 00:37 sha256-66b9ea09bd5b7099cbb4fc820f31b575c0366fa439b08245566692c6784e281e -rw-r--r-- 1 ollama ollama 11343 Mär 21 00:37 sha256-832dd9e00a68dd83b3c3fb9f5588dad7dcf337a0db50f7d9483f310cd292e92e -rw-r--r-- 1 ollama ollama 397807936 Mär 21 00:37 sha256-c5396e06af294bd101b30dce59131a76d2b773e76950acc870eda801d3ab0515 -rw-r--r-- 1 ollama ollama 1482 Mär 21 00:37 sha256-eb4402837c7829a690fa845de4d7f3fd842c2adee476d5341da8a46ea9255175 ``` The mount point is still opaque to everybody except the owner: ```console $ ls 
-ld /tmp/user drwx------ 3 rick rick 4096 Mär 21 00:03 /tmp/user ``` > Is there any way at all to make ollama store model files in a location of my choice? Please recall that attempting to give it access to the correct filepath resulted in `Mar 20 09:31:44 KHH-Ubuntu ollama[1059742]: Error: mkdir /media/user/AI2: permission denied` Please read this again. It wants to `mkdir` a mountpoint. That is not an issue of read access, but a potentially catastrophic bug. It's not a bug. You've explicitly told ollama you want to store files in `/media/user/AI2/models/ollama`. In order to access the `ollama` terminal directory, ollama has to traverse the file path. In doing so, it found that it could not access an element of that path. This is because ollama has no access permission for the parent directory. Doing a `chmod o+x` on that directory would allow ollama to access it and continue with the path traversal. Since you don't want to do that, the alternative is to mount the terminal directory in a place where ollama can access it with `BindPaths`. > I explained that there is no way that I will give ollama permission to write anywhere outside of its target directory. ollama should not require write access to the entire path, but read access which is has. It does not have read access to part of the path and hence the traversal fails. > Saying that this is not an ollama problem but permission problem is disingenuous. It is your product and your installer. I am not given any control over what it does, no choice where to place its models directory and so far it is proving impossible to have it obey any request to use a location that holds the models and has enough space. It is an architecture problem, so please advise if it is even possible to have ollama use a dedicated directory that it CAN traverse to and has full ownership of. You are given every control over what it does.

@HKMV commented on GitHub (Mar 26, 2025):

@rick-github Thank you, I encountered the same problem: the `/media/user` directory did not have the x permission. Adding it with `chmod o+x /media/user` made everything work normally.

<!-- gh-comment-id:2753406878 -->
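The execute-bit behaviour described above can be reproduced in isolation. This is a minimal sketch using a throwaway temp directory, not the real paths from this thread:

```shell
#!/bin/sh
# Demonstrate that a parent directory without the execute (x) bit blocks
# path traversal to everything beneath it, even world-readable files.
d=$(mktemp -d)
mkdir -p "$d/parent/child"
echo hello > "$d/parent/child/file"

chmod 0600 "$d/parent"   # remove x: path lookup through parent now fails
cat "$d/parent/child/file" 2>/dev/null || echo "blocked (no x on parent)"

chmod 0700 "$d/parent"   # restore x: lookup works again
cat "$d/parent/child/file"

rm -rf "$d"
```

Run as an unprivileged user (root bypasses permission checks), the first `cat` fails and the `blocked` message is printed; after restoring the x bit, the second `cat` prints `hello`.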

@pythonms commented on GitHub (Nov 12, 2025):

The problem lies somewhere in how Ollama is launched as a service (`ollama.service`). If we run Ollama with `OLLAMA_MODELS=/data2/models ollama serve`, the `OLLAMA_MODELS` variable is set and models are stored in `/data2/models`. We probably need to look into how variables are passed from the `ollama.service` configuration, or check the permissions for the service user.

<!-- gh-comment-id:3523270656 -->
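For reference, whether the variable reaches the service can be checked directly; a common pattern (path illustrative, matching the comment above) is to set it in a drop-in rather than editing the unit file in place:

```ini
# Hypothetical drop-in created with `sudo systemctl edit ollama`
# (written to /etc/systemd/system/ollama.service.d/override.conf):
[Service]
Environment="OLLAMA_MODELS=/data2/models"

# After saving: sudo systemctl daemon-reload && sudo systemctl restart ollama
# Verify the running service actually sees the variable with:
#   systemctl show ollama --property=Environment
```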

@rick-github commented on GitHub (Nov 12, 2025):

@pythonms Passing of the `OLLAMA_MODELS` variable is not the problem in this case; it's the ownership/permissions of the directory. If you run ollama from the command line then you are probably starting it as a different user, which may not be able to access `/data2/models`. What's the output of `ls -ld / /data2 /data2/models`, and which user is being used to manually start ollama?

<!-- gh-comment-id:3523306147 -->

@pythonms commented on GitHub (Nov 12, 2025):

@rick-github

```console
$ ls -ld / /data2 /data2/models
drwxr-xr-x 26 root root 4096 maj 26 07:25 /
drwx------  4 ms   ms   4096 lis 12 18:57 /data2
drwxrwxr-x  4 ms   ms   4096 lis 12 18:58 /data2/models
```

but:

You need to add execute permissions for the models directory: `sudo chmod g+x /data2/models/`. If I add my user (`ms`) to the `ollama` group and run the service as user `ms`, it works. This means the `ollama.service` has access to the `/data2/models/` directory.

For the service to run as the `ollama` user (which is probably more cool 😉), you need to change the owner of the parent directory, in this case `/data2`:

```console
$ sudo chown ollama:ollama /data2/
```

The output of `sudo ls -ld /data2 /data2/models/` shows the correct ownership and permissions:

```console
drwx------ 4 ollama ollama 4096 lis 12 18:57 /data2
drwx--x--- 4 ollama ollama 4096 lis 12 18:58 /data2/models/
```

And it seems to be working, as indicated by my ollama.service file:

```ini
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=5
Environment="PATH=$PATH" "OLLAMA_MODELS=/data2/models"

[Install]
WantedBy=multi-user.target
```

<!-- gh-comment-id:3524364767 -->

@nickovaras commented on GitHub (Mar 3, 2026):

This is still a problem with Ollama 0.17.4 at the time of writing this.

<!-- gh-comment-id:3988564025 -->
Reference: github-starred/ollama#32226