[GH-ISSUE #2676] OLLAMA-MODELS does not work for system ollama.service #63630

Closed
opened 2026-05-03 14:31:26 -05:00 by GiteaMirror · 9 comments
Owner

Originally created by @QIN2DIM on GitHub (Feb 22, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2676

Originally assigned to: @dhiltgen on GitHub.

## ollama show --modelfile

The default model path is `/usr/share/ollama/.ollama/models`, as mentioned in the [FAQ](https://github.com/ollama/ollama/blob/bdc0ea1ba5346161c386f39a2414af810ba955e6/docs/faq.md#where-are-models-stored).

```bash
(base) root@x:~# ollama ls
NAME                    ID              SIZE    MODIFIED
deepseek-coder:33b      acec7c0b0fd9    18 GB   3 weeks ago
deepseek-coder:6.7b     ce298d984115    3.8 GB  3 weeks ago
gemma:latest            cb9e0badc99d    4.8 GB  19 hours ago
llava:34b-v1.6          3d2d24f46674    20 GB   3 weeks ago
yi:34b-chat             5f8365d57cb8    19 GB   3 weeks ago

(base) root@x:~# ollama show gemma --modelfile
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM gemma:latest

FROM /usr/share/ollama/.ollama/models/blobs/sha256:2c5f288be750bf8ee4c7d6e9afc9563f9685f570a8c7924d829c773c8401d584
TEMPLATE """<start_of_turn>user
{{ if .System }}{{ .System }} {{ end }}{{ .Prompt }}<end_of_turn>
<start_of_turn>model
{{ .Response }}<end_of_turn>
"""
PARAMETER stop "<start_of_turn>"
PARAMETER stop "<end_of_turn>"
```

### systemctl status ollama

When I did not add the `OLLAMA_MODELS` env var in the service configuration file, Ollama's system service ran normally.

```bash
(base) root@x:~# systemctl status ollama
● ollama.service - Ollama Service
     Loaded: loaded (/etc/systemd/system/ollama.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2024-02-22 17:21:06 CST; 3h 47min ago
   Main PID: 57912 (ollama)
      Tasks: 113 (limit: 629145)
     Memory: 2.6G
        CPU: 16min 43.111s
     CGroup: /system.slice/ollama.service
             └─57912 /usr/local/bin/ollama serve
```

### Add Environment

After adding it, the system service can no longer run normally.

```
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=..."
Environment="OLLAMA_HOST=0.0.0.0:11434"
# Environment="OLLAMA_MODELS=/path/to/models"

[Install]
WantedBy=default.target
```

```shell
systemctl daemon-reload
systemctl restart ollama
```
```shell
(base) root@x:~# systemctl status ollama
● ollama.service - Ollama Service
     Loaded: loaded (/etc/systemd/system/ollama.service; enabled; vendor preset: enabled)
     Active: activating (auto-restart) (Result: exit-code) since Thu 2024-02-22 21:15:39 CST; 79ms ago
    Process: 1002136 ExecStart=/usr/local/bin/ollama serve (code=exited, status=1/FAILURE)
   Main PID: 1002136 (code=exited, status=1/FAILURE)
        CPU: 31ms
```
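`systemctl status` shows only a short summary; the actual error behind `status=1/FAILURE` can be read from the journal (a generic check, not part of the original report):

```shell
# Show the last lines the failing service wrote before it exited;
# a misconfigured OLLAMA_MODELS typically surfaces here as a
# "permission denied" error on the models path.
journalctl -u ollama --no-pager -n 50
```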
GiteaMirror added the bug label 2026-05-03 14:31:26 -05:00

@mxyng commented on GitHub (Feb 22, 2024):

What are the permissions and ownership of the `OLLAMA_MODELS` path? The ollama process runs as `ollama/ollama`. If the models path does not allow `rwx` for `ollama`, the process will fail to start.
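A minimal sketch of that check (a scratch directory stands in for the real path; on the actual system, point `MODELS` at your `OLLAMA_MODELS` value and run the tests via `sudo -u ollama`):

```shell
# Stand-in directory for the demonstration; on a real system set
# MODELS to your OLLAMA_MODELS path and run the tests as the ollama
# user, e.g.  sudo -u ollama test -w "$MODELS"
MODELS=$(mktemp -d)
chmod 700 "$MODELS"

ls -ld "$MODELS"              # inspect owner and mode
test -r "$MODELS" && echo readable
test -w "$MODELS" && echo writable
test -x "$MODELS" && echo traversable
```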


@dhiltgen commented on GitHub (Mar 12, 2024):

@QIN2DIM please make sure `/path/to/models` is owned by user `ollama` (or at least writable by that user/group) if you want to continue running it as a system service.


@winglet0996 commented on GitHub (Apr 6, 2024):

Same issue. I've already run `chmod 777` on /path/to/models. It's also in #3438.


@QIN2DIM commented on GitHub (Apr 6, 2024):

@winglet0996 I have solved this issue.

```
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
# The two lines below are the important change (systemd does not
# support inline comments, so they must go on their own lines):
User=root
Group=root
Restart=always
RestartSec=3
Environment="PATH=/root/.cargo/bin:/usr/local/cuda-11.7/bin:/eam/gsfctl:/eam/conda/bin:/eam/conda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_MODELS=/path_to/ollama/.ollama/models"
Environment="NO_PROXY=localhost,127.0.0.1,.example.com"

[Install]
WantedBy=default.target
```

The main changes were to `User`, `Group`, and `Environment`.

After modifying the configuration file, I used `rsync -ah --progress /usr/share/ollama/.ollama/models/ /path-to/models/ollama/.ollama/models` to move all the Ollama caches (models and configuration files) to another physical hard disk. Then restart the service and the migration takes effect.

You need to create the `/path-to/` directory in advance; the migration command will not automatically create a destination path that does not already exist.

You may also need to reinstall Ollama (`curl -fsSL https://ollama.com/install.sh | sh`) in the meantime. After reinstallation the service configuration file will be overwritten; just modify it again.


@ColKernel0x8E commented on GitHub (Apr 13, 2024):

@winglet0996 I also struggled with this. Thanks to the advice in #680, I finally got it working after many futile attempts.
For me, the missing piece was passing `OLLAMA_MODELS` in the shell to `ollama serve`:

```shell
$ OLLAMA_MODELS=/path_to/ollama_models/ ollama serve
$ ollama run llava
```


@briankelley commented on GitHub (Jun 4, 2024):

It's embarrassing how long I struggled to just set this dumb little flag to redirect the models root. I was certain this was a bug none of the smart people writing code here were willing to acknowledge. PEBCAK!

@QIN2DIM I'm sure your stuff is running fine now as root, but give this a shot...

Make sure the user `ollama` has read and execute permissions **on the folders** from the root of where you're storing models all the way down. You can `chown` and `chmod` the models folder all day, but if the user doesn't have read and execute permissions on each of the parent folders, it will never work. This was my scenario, and the command below fixed it:

```shell
sudo chmod 755 /path /path/to /path/to/relocated
```

Then, after that, as long as you've already done this:

```shell
sudo chown -R ollama:ollama /path/to/relocated/models
sudo chmod -R 775 /path/to/relocated/models
```

The tell for me was that `sudo -u ollama ls /path/to/relocated/models` still produced permission errors even though I had set ownership and mode correctly on the models folder. Thanks for the tip about adding the additional `Environment` setting in the service file. Hopefully this works for you; it did for me.
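A quick way to audit that whole chain of parent directories at once is `namei` from util-linux, which prints the mode and owner of every component of a path (shown on `/tmp` here; substitute the relocated models path):

```shell
# One line per path component: a parent directory missing the
# execute (traverse) bit for the ollama user stands out immediately.
namei -l /tmp
```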


@konoDioDA253 commented on GitHub (Jun 17, 2024):

What did the trick for me was to put a space before the variable, like so:
`Environment=" OLLAMA_MODELS=path/to/models"`
instead of:
`Environment="OLLAMA_MODELS=path/to/models"`

I was using `systemctl edit ollama`; adding a variable that way just appends it to the existing list without a space, so I needed to add the space inside the `Environment` value.
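For context, `systemctl edit ollama` does not modify the unit file itself; it writes a drop-in override. A minimal override of just the models path (paths hypothetical) would look like:

```
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_MODELS=/path/to/models"
```

followed by `sudo systemctl daemon-reload && sudo systemctl restart ollama` to apply it.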


@Danne980 commented on GitHub (Oct 17, 2024):

I am a rookie with Ubuntu and Ollama, but I got this to work with the help of an AI service. Maybe it can help someone.


## Guide to Configure Ollama Service with Custom Model Directory

This guide details the steps taken to configure the Ollama service on Ubuntu to use a custom directory for storing models and temporary files. The setup uses a mounted external drive.

### Step 1: Create the Directory Structure

First, create the necessary directories on the mounted drive where the models and temporary files will be stored.

1. **Create directories for models and temporary files:**

   ```bash
   sudo mkdir -p /media/dan/185497C35497A254/ollama/models
   sudo mkdir -p /media/dan/185497C35497A254/ollama/tmp
   ```

2. **Set initial ownership and permissions:**

   ```bash
   sudo chown -R ollama:users /media/dan/185497C35497A254/ollama
   sudo chmod -R 775 /media/dan/185497C35497A254/ollama
   ```

   These commands ensure that the directories are owned by the `ollama` user, with group write permissions for the `users` group.

### Step 2: Set Correct Permissions on Parent Directories

To allow the `ollama` user to access the custom directories, the permissions for all parent directories must be correctly set.

1. **Change permissions on `/media`, `/media/dan`, and `/media/dan/185497C35497A254`:**

   ```bash
   sudo chmod 755 /media
   sudo chmod 755 /media/dan
   sudo chmod 755 /media/dan/185497C35497A254
   ```

   This ensures that the `ollama` user can traverse the directories leading up to the target location.

### Step 3: Update Permissions and Ownership for the Ollama Directory

Further configure the permissions and ownership for the directory used by Ollama.

1. **Set ownership of the `ollama` directory to the `ollama` user:**

   ```bash
   sudo chown -R ollama:ollama /media/dan/185497C35497A254/ollama
   ```

   This makes the `ollama` user the owner of the directory structure.

2. **Set permissions to ensure the directory is accessible:**

   ```bash
   sudo chmod -R 775 /media/dan/185497C35497A254/ollama
   ```

   This grants read, write, and execute permissions to the owner and group, and read and execute permissions to others.

### Step 4: Configure the Ollama Service

Modify the `ollama.service` file to set the necessary environment variables for the custom directories.

1. **Edit the `/etc/systemd/system/ollama.service` file** and add the environment variables:

   ```plaintext
   [Unit]
   Description=Ollama Service
   After=network-online.target

   [Service]
   ExecStart=/usr/local/bin/ollama serve
   User=ollama
   Group=ollama
   Restart=always
   RestartSec=3
   Environment="PATH=/home/dan/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
   Environment="OLLAMA_MODELS=/media/dan/185497C35497A254/ollama/models"
   Environment="OLLAMA_TMPDIR=/media/dan/185497C35497A254/ollama/tmp"
   WorkingDirectory=/home/dan

   [Install]
   WantedBy=default.target
   ```

2. **Save the file and exit.**

### Step 5: Reload and Restart the Service

1. **Reload the systemd configuration to apply the changes:**

   ```bash
   sudo systemctl daemon-reload
   ```

2. **Restart the Ollama service:**

   ```bash
   sudo systemctl restart ollama
   ```

3. **Verify the service status:**

   ```bash
   sudo systemctl status ollama
   ```

   The service should now be active and running without errors.

### Conclusion

By creating the directories, setting correct permissions, and configuring the service file, you can successfully configure the Ollama service to use a custom directory for models and temporary files.



@ebaudrez commented on GitHub (Apr 1, 2025):

I just wanted to point out that, in addition to the steps written down above, I had to check and edit `/etc/passwd` to make the home directory of the `ollama` user point to the right place!
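Rather than editing `/etc/passwd` by hand, the same change can be made with `usermod` (the new home path here is hypothetical):

```shell
# Point the ollama user's home directory at the new location,
# then confirm the updated passwd entry.
sudo usermod -d /path/to/new-ollama-home ollama
getent passwd ollama
```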
