[GH-ISSUE #2147] permission denied when setting OLLAMA_MODELS in service file #63263

Closed
opened 2026-05-03 12:46:26 -05:00 by GiteaMirror · 24 comments
Owner

Originally created by @lasseedfast on GitHub (Jan 22, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2147

Originally assigned to: @dhiltgen on GitHub.

I'm trying to set the `OLLAMA_MODELS` env variable in /etc/systemd/system/ollama.service.d, but the logs show that the service fails when trying to create the directory:

```
Jan 22 21:25:41 airig systemd[1]: ollama.service: Scheduled restart job, restart counter is at 151.
Jan 22 21:25:41 airig systemd[1]: Stopped ollama.service - Ollama Service.
Jan 22 21:25:41 airig systemd[1]: Started ollama.service - Ollama Service.
Jan 22 21:25:41 airig sh[301002]: Error: mkdir /home/lasse/model_drive: permission denied
Jan 22 21:25:41 airig systemd[1]: ollama.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 21:25:41 airig systemd[1]: ollama.service: Failed with result 'exit-code'.
```

environment.conf:

```
~$ cat /etc/systemd/system/ollama.service.d/environment.conf
[Service]
Environment="OLLAMA_MODELS=/home/lasse/model_drive/ollama"
```

The `model_drive` folder is a mount point for an SSD, but when checking permissions for my user and the `ollama` user it looks fine:

```
drwxrwxrwx 5 lasse lasse 4096 Jan 21 19:18 model_drive
```

When starting the service manually with `OLLAMA_MODELS=~/model_drive/ollama ollama serve`, everything works fine; it only fails when using the conf file as proposed in the [FAQ](https://github.com/jmorganca/ollama/blob/main/docs/faq.md#where-are-models-stored).

This might be related to the bug in https://github.com/jmorganca/ollama/issues/1066
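A diagnostic worth running (my own sketch, not from the thread): for the service user to reach the models directory, every ancestor directory needs the execute (search) bit for that user, not just the leaf. A small helper that walks the ancestors and reports the first one the current user cannot search:

```shell
# Sketch: report the first ancestor of a target directory that the
# *current* user cannot search. Run it as the service user, e.g.
# (assuming the function is saved in a hypothetical check.sh):
#   sudo -u ollama /bin/sh -c '. ./check.sh; check_ancestors /home/lasse/model_drive/ollama'
check_ancestors() {
  dir=$1
  while [ "$dir" != "/" ] && [ -n "$dir" ]; do
    # -x on a directory tests the search permission for the current user
    if [ -e "$dir" ] && [ ! -x "$dir" ]; then
      echo "no search permission on: $dir"
      return 1
    fi
    dir=$(dirname "$dir")
  done
  echo "all ancestors traversable"
}
```

This makes the failure mode discussed below concrete: a wide-open leaf directory is not enough if any ancestor (e.g. a 750 `/home/lasse`) blocks traversal.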


@odeemi commented on GitHub (Jan 22, 2024):

Fighting with the same thing here. Tried giving permissions in every possible way and nothing works...
Perhaps some sleep and tomorrow will be brighter 🤞


@mxyng commented on GitHub (Jan 22, 2024):

Home directories (`/home/*`) sometimes have permissions 750, which prevent others from reading or accessing the directory. Ollama runs as user/group `ollama`, which won't have access to your home directory.

There are two options:

  1. Update `ollama.service` to run as your user, e.g. `User=lasse` and `Group=lasse`
  2. Update `OLLAMA_MODELS` to a directory with permissions 755, or one you're willing to chown to `ollama:ollama`
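For option 1, a minimal drop-in sketch (the user/group names are placeholders; create the file with `sudo systemctl edit ollama`):

```
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
User=lasse
Group=lasse
```

followed by `sudo systemctl daemon-reload && sudo systemctl restart ollama`.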

@godardt commented on GitHub (Jan 31, 2024):

I gave up on my side, I just ended up doing:

```
sudo ln -s /mnt/ext_datasets/ollama_models /usr/share/ollama/.ollama/models
sudo chown ollama:ollama /usr/share/ollama/.ollama/models
```

And it worked.


@BrianNormant commented on GitHub (Feb 8, 2024):

It's what I wanted to do, but because I mount my other drive in my home dir, a symlink won't fix the problem.
Updating `ollama.service` to run as my group doesn't work, because it gets permission denied when trying to access `/var/lib/ollama` (I think Arch Linux decided to put it there for... reasons).
So I tried updating `HOME` to `/home/me/mydisk/ollama`, but then I get

```
Error: mkdir /home/me: permission denied
```

which is beyond strange, as the directory exists and it runs as `me`.


@M1TR commented on GitHub (Feb 9, 2024):

I found a solution in the [archlinux forums](https://bbs.archlinux.org/viewtopic.php?pid=2148322#p2148322):

> I am still having troubles when setting $OLLAMA_MODELS, as it tries to create all the directory structure, and if it does not have permission to write even the top directory at $OLLAMA_MODELS, it fails. I reckon that is a bug.

The issue, as also described in the post, is that ollama tries to create the entire directory structure you specify in the `OLLAMA_MODELS` environment variable. So even if you do a `chown -R ollama:ollama /my/path/model_dir`, ollama tries to do a `mkdir /my/path` and errors out. The solution in the forum post is to do a bind mount:

```
sudo mount --bind /my/path/model_dir /usr/share/ollama/.ollama/models
```
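If you go the bind-mount route, note that a plain `mount --bind` does not survive a reboot; it can be made persistent with an `/etc/fstab` entry (paths here are the same placeholders as above):

```
# /etc/fstab
/my/path/model_dir  /usr/share/ollama/.ollama/models  none  bind  0  0
```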

@dhiltgen commented on GitHub (Mar 12, 2024):

@lasseedfast if you set write permissions for the `ollama` user (and group), or ideally make the directory owned by that user, things should work properly. If you're still having problems, please share `ls -lR /home/lasse/model_drive` as well as the server log.


@Nilabb commented on GitHub (Jul 5, 2024):

> @lasseedfast if you set write permissions for the `ollama` user (and group), or ideally make the directory owned by that user, things should work properly. If you're still having problems, please share `ls -lR /home/lasse/model_drive` as well as the server log.

Hey, curious if there should be some sort of documentation in the FAQ about what @mxyng mentioned, where the daemon can't create `/home/user/OLLAMA_MODELS/` because home directories have strict permissions. I imagine many home users, myself included, have their `/home` and `/` directories on separate drives, like an SSD and HDD, and faced trouble trying to set OLLAMA_MODELS to something like `/home/user/.ollamamodels`. Sorry if there already is documentation about this somewhere; I admittedly didn't look much.


@mihow commented on GitHub (Jul 30, 2024):

Is the issue that Ollama tries to do something in the parent of the custom models directory when running as a service? Because the issue persists if the custom models directory already exists and has the correct permissions.

Example:

```
# mount an SSD at /mnt/models
sudo mv /usr/share/ollama/.ollama/models /mnt/models/ollama
sudo chown -R ollama:ollama /mnt/models/ollama
```

This works:

```
OLLAMA_MODELS=/mnt/models/ollama ollama serve
```

But this does not:

```
# sudo vim /etc/systemd/system/ollama.service
[Service]
Environment="OLLAMA_MODELS=/mnt/models/ollama"
Environment="OLLAMA_DEBUG=true"
...

sudo systemctl daemon-reload
sudo systemctl restart ollama
sudo systemctl status ollama
```

**ollama[380307]: Error: mkdir /mnt/models: permission denied**

What is the service trying to do in the parent directory if the new models directory already exists?


@robertosw commented on GitHub (Aug 19, 2024):

@dhiltgen Why is this closed? This issue is clearly not resolved. The entire path has to belong to the user and group defined in the systemd service, and the error logging is bad:

```
(st-start)[26277]: ollama.service: Changing to the requested working directory failed: Permission denied
```

for WHICH directory?

I can reliably reproduce this issue. For me, Ollama's files are in `/mnt/860Evo/ollama/` and the entire `/mnt/860Evo` has to belong to `ollama` in order to not fail at the start of the systemd service.

I think ollama needs to own just one folder above. It worked before when I accidentally had `/mnt/860Evo/ollama/ollama`. I will check if that is the case when I find some time tomorrow.


Log from systemd service:

```shell
Aug 19 23:45:32 systemd[1]: Starting Server for local large language models...
Aug 19 23:45:32 (st-start)[26277]: ollama.service: Changing to the requested working directory failed: Permission denied
Aug 19 23:45:32 (st-start)[26277]: ollama.service: Failed at step CHDIR spawning /nix/store/4hcwnx0la4ij52bq62l5drfh3iaillbc-unit-script-ollama-post-start/bin/ollama-post-start: Permission denied
Aug 19 23:45:32 (ollama)[26276]: ollama.service: Changing to the requested working directory failed: Permission denied
Aug 19 23:45:32 (ollama)[26276]: ollama.service: Failed at step CHDIR spawning /nix/store/gskgnhqfv88blbydyb08n4iygsbv1bjr-ollama-0.3.4/bin/ollama: Permission denied
Aug 19 23:45:32 systemd[1]: ollama.service: Main process exited, code=exited, status=200/CHDIR
Aug 19 23:45:32 systemd[1]: ollama.service: Control process exited, code=exited, status=200/CHDIR
Aug 19 23:45:32 systemd[1]: ollama.service: Failed with result 'exit-code'.
Aug 19 23:45:32 systemd[1]: Failed to start Server for local large language models.
```

`sudo chown -R ollama:ollama /mnt/860Evo/` is the fix, but not usable, because I'd like to use my SSD for other things as well.

systemd service file (generated by NixOS):

```shell
[Service]
Environment="HIP_VISIBLE_DEVICES=0"
Environment="HOME=/mnt/860Evo/ollama/"
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
Environment="LOCALE_ARCHIVE=/nix/store/..."
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_KEEP_ALIVE=30m"
Environment="OLLAMA_LLM_LIBRARY=rocm"
Environment="OLLAMA_MAX_LOADED_MODELS=2"
Environment="OLLAMA_MAX_QUEUE=3"
Environment="OLLAMA_MODELS=/mnt/860Evo/ollama/models"
Environment="OLLAMA_NUM_PARALLEL=1"
Environment="PATH=/nix/store/..."
Environment="TZDIR=/nix/store/..."
CapabilityBoundingSet=
DeviceAllow=char-nvidiactl
DeviceAllow=char-nvidia-caps
DeviceAllow=char-nvidia-frontend
DeviceAllow=char-nvidia-uvm
DeviceAllow=char-drm
DeviceAllow=char-kfd
DevicePolicy=closed
DynamicUser=true
ExecStart=/nix/store/gskgnhqfv88blbydyb08n4iygsbv1bjr-ollama-0.3.4/bin/ollama serve
ExecStartPost=/nix/store/4hcwnx0la4ij52bq62l5drfh3iaillbc-unit-script-ollama-post-start/bin/ollama-post-start
Group=ollama
LockPersonality=true
MemoryDenyWriteExecute=true
NoNewPrivileges=true
PrivateDevices=false
PrivateTmp=true
PrivateUsers=true
ProcSubset=all
ProtectClock=true
ProtectControlGroups=true
ProtectHome=true
ProtectHostname=true
ProtectKernelLogs=true
ProtectKernelModules=true
ProtectKernelTunables=true
ProtectProc=invisible
ProtectSystem=strict
ReadWritePaths=/mnt/860Evo/ollama/
ReadWritePaths=/mnt/860Evo/ollama//models
RemoveIPC=true
RestrictAddressFamilies=AF_INET
RestrictAddressFamilies=AF_INET6
RestrictAddressFamilies=AF_UNIX
RestrictNamespaces=true
RestrictRealtime=true
RestrictSUIDSGID=true
StateDirectory=ollama
SupplementaryGroups=render
SystemCallArchitectures=native
SystemCallFilter=@system-service @resources
SystemCallFilter=~@privileged
UMask=0077
User=ollama
WorkingDirectory=/mnt/860Evo/ollama/
```

@vap0rtranz commented on GitHub (Sep 4, 2024):

Changing ownership/permissions at the parent directory of the mountpoint doesn't fix this issue, nor does changing them on the model directory or on the mountpoint of the external drive.

This is mind-boggling given how Linux services/daemons have worked for... well, a dang long time.

Why was the service developed to demand create permissions for a directory several levels above its configured working directories?!

Just silently move on if the directory structure exists. Or if the service cannot create a file in its working directories, or whatever directory, then there is a problem. A simple file-create test would be all that's needed, like to put a new model in.

Error:

```
Sep 04 10:06:29 justin-two-towers systemd[1]: Started Ollama Service.
Sep 04 10:06:29 justin-two-towers ollama[1025227]: 2024/09/04 10:06:29 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/media/justin/external/CodeReady/LLM-Models/ollama-models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
Sep 04 10:06:29 justin-two-towers ollama[1025227]: Error: mkdir /media/justin/external: permission denied
Sep 04 10:06:29 justin-two-towers systemd[1]: ollama.service: Main process exited, code=exited, status=1/FAILURE
Sep 04 10:06:29 justin-two-towers systemd[1]: ollama.service: Failed with result 'exit-code'.
```

Permissions

```
$ ls -l /media/justin/
total 10
drwxrwxrwx 15 justin ollama 4096 Aug 31 13:57 external

$ ls -l /media/justin/external/CodeReady/LLM-Models/ | grep ollama-models
drwxrwxrwx 4 ollama ollama        4096 Sep  4 09:01 ollama-models

$ ls -l .. | grep LLM
drwxrwxr-x   3 justin ollama  4096 Sep  4 09:44 LLM-Models
```

And the service config:

```
### Editing /etc/systemd/system/ollama.service.d/override.conf
### Anything between here and the comment below will become the new contents of the file

[Service]
Environment="OLLAMA_MODELS=/media/justin/external/CodeReady/LLM-Models/ollama-models"
```

@therealkenc commented on GitHub (Dec 9, 2024):

> Just silently move on if the directory structure exists. Or if the service cannot create a file in its working directories, or whatever directory, then there is a problem. A simple file-create test would be all that's needed, like to put a new model in.

User `ollama` doesn't even have permission to `stat` the directory (to see it isn't there), never mind `mkdir` it. A "simple file create test" will not work because `ollama` does not have perms to the tree. The `mkdir` failure is a red herring; it is just the first thing to fail.

Try it with:

```
sudo su - ollama
```

Then `touch` a file in the directory. You won't even be able to `cd` there, let alone `touch` it.

*You* (user `justin`) can start ollama on that directory because you have free rein over it. User `ollama` does not.

If you want to give user `ollama` the same free rein you have, then add the group `justin` to the `ollama` user:

```
$ sudo usermod -a -G justin ollama
```
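A caveat worth adding to the `usermod` suggestion (my note, not from the comment): supplementary groups are picked up only when a process starts, and the group still needs the search bit on each directory in the path, so you would typically follow up with something like:

```
sudo systemctl restart ollama   # new group membership applies only to newly started processes
ls -ld /home/justin             # the group needs at least r-x here (e.g. drwxr-x---)
```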

To people [bind mounting](https://github.com/ollama/ollama/issues/2147#issuecomment-1936422862), no, that's cray.


@M1TR commented on GitHub (Dec 9, 2024):

> To people [bind mounting](https://github.com/ollama/ollama/issues/2147#issuecomment-1936422862), no, that's cray.

Any particular reason why?


@metabinary-ltd commented on GitHub (Dec 18, 2024):

I just got this working after trying a bunch of different "sane" workarounds.

My error stemmed from my SSD mount itself. I (the `ollama` user) had full permissions to my models directory, but the root of the SSD had 750 permissions.

```
root@ollamabox:/mnt/data-a-01# ls -alh
total 16K
drwxr-x--- 3 root   root   4.0K Dec 13 14:43 .
drwxr-xr-x 6 root   root   4.0K Dec  9 14:49 ..
drwxrwxr-x 4 ollama ollama 4.0K Dec 16 21:35 ollama
```

I had to `chmod 755 /mnt/data-a-01/` to allow anyone (everyone) read permissions to the SSD. After that, it worked!
In the future, I'll probably change the group for the SSD to a group that ollama can join and chmod back to 750. But otherwise, no fancy links or binds here!

PS: I had to enable my `ollama` user's shell, as it defaults to `/bin/false`, so `sudo su - ollama` would not change to the `ollama` user. Once I did that, I was able to try navigating to the models directory directly, which failed. I then navigated to it from the root up and found the blockage.
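The group-based variant mentioned above could look like this (a sketch only; `modeldata` is a hypothetical group name):

```
sudo groupadd modeldata
sudo chgrp modeldata /mnt/data-a-01
sudo chmod 750 /mnt/data-a-01     # group gets read+search, others get nothing
sudo usermod -a -G modeldata ollama
sudo systemctl restart ollama     # so the new group membership takes effect
```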


@teto commented on GitHub (Dec 30, 2024):

I've just hit this as well. I am trying `BindPaths = "/home/teto/.ollama/models:/var/lib/ollama/models";` (not sure about the paths yet) in a systemd service with `DynamicUser`, and I would like to check which models ollama sees via the API with `curl -X GET http://localhost:11434/api/models`, but it's a 404. Is there any such endpoint?
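For what it's worth, the endpoint for listing local models is `/api/tags` rather than `/api/models`:

```
curl http://localhost:11434/api/tags
```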


@LiyaoTang commented on GitHub (Jan 31, 2025):

> Home directories (`/home/*`) sometimes have permissions 750 which prevent others from reading or accessing the directory. Ollama runs as user/group `ollama` which won't have access to your home directory.
>
> There are two options:
>
> 1. Update `ollama.service` to run as your user, e.g. `User=lasse` and `Group=lasse`
> 2. Update `OLLAMA_MODELS` to a directory with permissions 755, or one you're willing to chown to `ollama:ollama`

Solved by updating `ollama.service` to run as my current user.


@JamesCHub commented on GitHub (Feb 5, 2025):

This also fails if you are using a link in your home directory to secondary storage (which is not home, but the link is). This specific issue can be fixed by simply changing the reference to the actual mount point.
So instead of:
`Environment=OLLAMA_MODELS=/home/someuser/FriendlyDirLinkName/ollama/.ollama/models`
you do:
`Environment=OLLAMA_MODELS=/mnt/data02/ollama/.ollama/models` (where `/mnt/data02` is what you linked to)

My recollection is that Redis has a similar restriction, but in the service file you can just add `ProtectHome=no` and it will allow you to use a dir you create there.
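If `ProtectHome` is indeed the blocker in your unit (it is set to `true` in some generated units, as shown earlier in this thread), a drop-in sketch would be:

```
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
ProtectHome=no
```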


@zkzkzk2015 commented on GitHub (Feb 24, 2025):

I do not understand why I am forced to use the /usr/share/ollama directory. What is wrong with using my home directory?
The proposed solution is obviously not working for many people; the mkdir error keeps coming back. ... I need a break.


@dscarmo commented on GitHub (Feb 27, 2025):

> Home directories (`/home/*`) sometimes have permissions 750, which prevent others from reading or accessing the directory. Ollama runs as user/group ollama, which won't have access to your home directory.
>
> There are two options:
>
> 1. Update ollama.service to run as your user, e.g. `User=lasse` and `Group=lasse`
> 2. Update OLLAMA_MODELS to a directory with permissions 755, or one you're willing to chown to ollama:ollama
>
> Solved by updating `ollama.service` to run as my current user.

can confirm changing user to my user in the .service solves the issue


@ijaketak commented on GitHub (Mar 11, 2025):

I encountered the same problem. Setting `ProtectHome = false` solves this. Here is my NixOS `configuration.nix`:

```nix
services.ollama = {
  enable = true;
  home = "/home/ollama";
  host = "0.0.0.0";
  openFirewall = true;
  user = "ollama";
};
systemd.services.ollama.serviceConfig = {
  DynamicUser = lib.mkForce false;
  PrivateUsers = lib.mkForce false;
  ProtectHome = lib.mkForce false;
};
```

@fanlessfan commented on GitHub (Mar 27, 2025):

I have exactly the same error. I want to move the models from the local drive to an NFS share. On the local drive I can create a symbolic link, but once I move the models to the NFS share the symbolic link stops working: it always tries to mkdir but fails, even though it has permission. I also tried to use OLLAMA_MODELS and got the same error. As you can see, the directory has 777 permissions, is owned by ollama, and I can touch a file in it. (ollama.0 is the old ollama folder.)

```
mkdir /mnt/LLM/ollama: permission denied
```

`journalctl -r -u ollama`:

```
Mar 27 15:49:20 x11dph systemd[1]: ollama.service: Failed with result 'exit-code'.
Mar 27 15:49:20 x11dph systemd[1]: ollama.service: Main process exited, code=exited, status=1/FAILURE
Mar 27 15:49:20 x11dph ollama[18437]: Error: mkdir /mnt/LLM/ollama: permission denied
Mar 27 15:49:20 x11dph ollama[18437]: 2025/03/27 15:49:20 routes.go:1230: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_V>
Mar 27 15:49:20 x11dph systemd[1]: Started ollama.service - Ollama Service.
```

```
uxadm@x11dph:/mnt/LLM$ touch a
uxadm@x11dph:/mnt/LLM$ ls -al
total 4
drwxrwxrwx 1 ollama ollama   30 Mar 27 15:46 .
drwxr-xr-x 5 root   root   4096 Mar 27 13:14 ..
-rwxrwxrwx 1 uxadm  uxadm     0 Mar 27 15:46 a
drwxr-x--- 1 ollama ollama   68 Mar 24 11:20 ollama.0
```

`cat /etc/systemd/system/ollama.service`:

```
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
Environment="OLLAMA_KEEP_ALIVE=-1"
Environment="OLLAMA_MODELS=/mnt/LLM/ollama/.ollama/models"

[Install]
WantedBy=default.target
```
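
A recurring theme in this thread is that every parent component of the path needs execute (search) permission for the ollama user, not just the final directory. A diagnostic sketch for a case like this (the path and user are taken from this comment; substitute your own):

```shell
# namei -l prints the owner and mode of every path component,
# which makes a non-traversable parent directory easy to spot.
namei -l /mnt/LLM/ollama/.ollama/models

# Reproduce the mkdir the service performs, as the service user:
sudo -u ollama mkdir -p /mnt/LLM/ollama/.ollama/models
```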


@BRUrban commented on GitHub (Oct 10, 2025):

@dhiltgen any update on why this was marked completed? Has there been any investigation or work on the reported issues since you closed and self-assigned it a year ago?

There are continued reports of this issue being caused by permission requirements beyond the directory actually being written into, with workarounds that solved it for individuals backing that up as the cause. This is clearly not limited to somebody failing to set user or group permissions properly on the specific directory the write occurs in, which would be valid/correct behavior.

In the absence of any explanation or justification for why the permissions of higher directories are also needed, this seems like incorrect behavior on the part of ollama on Linux systems.


@igoiglesias commented on GitHub (Oct 21, 2025):

I had the same problem. To fix it, I just did this:

```
sudo mkdir /usr/share/ollama
sudo chown -R ollama:ollama /usr/share/ollama
sudo chmod -R 755 /usr/share/ollama
```

Now it works!


@AnsenIO commented on GitHub (Jan 27, 2026):

I temporarily assigned write permission on the upper folders to let it complete its attempt to mkdir, even though the folder already existed. After removing the write permission it keeps working.

Here are the steps I took (on a second terminal I was monitoring `sudo journalctl -xfu ollama`):

```
ollama list                                # my models were not present
sudo chown -R ollama:ollama /data/ollama
sudo chown ollama /data
sudo nano /etc/systemd/system/ollama.service
# added:
#   Environment="OLLAMA_MODELS=/data/ollama/models"
#   Environment="OLLAMA_HOST=0.0.0.0:11434"   # to listen on all interfaces
sudo systemctl restart ollama
sudo chown myuser /data
sudo systemctl restart ollama
ollama list                                # my models were finally present; it took the right path
```
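
Temporary ownership of `/data` may be more than is strictly needed here: plausibly what the service user lacked was only search (execute) permission on the parent directory, since it needs to traverse `/data` but not write into it. A hedged alternative sketch (paths are the ones from this comment):

```shell
# Grant only traversal on the parent instead of changing its owner;
# the service user still owns /data/ollama itself.
sudo chmod o+x /data
sudo systemctl restart ollama
```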


@kittcarr commented on GitHub (Apr 14, 2026):

I was trying to save models in /home/user/.ollama, and it took me an hour to fix it by setting `ProtectHome=no` in the systemd unit. Hope this saves someone some time.
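
Before editing the unit, you can check whether this sandboxing is actually in effect. A sketch, assuming the unit is named `ollama`:

```shell
# Show whether ProtectHome is set for the unit:
systemctl show ollama -p ProtectHome

# If it is, override it with a drop-in:
sudo systemctl edit ollama     # add: [Service] / ProtectHome=no
sudo systemctl daemon-reload
sudo systemctl restart ollama
```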


Reference: github-starred/ollama#63263