[GH-ISSUE #3195] Modified /systemd/system/ollama.service but it didn't take effect #1970

Closed
opened 2026-04-12 12:08:51 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @michelle-chou25 on GitHub (Mar 17, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3195

What is the issue?

Modifications to the Ollama systemd service file didn't take effect

What did you expect to see?

I expected the modification to the service file to take effect.

Steps to reproduce

  1. I tried to start the ollama service but it failed. I used "sudo journalctl -u ollama --reverse --lines=100" to check the log, which showed:
    Failed at step EXEC spawning /usr/bin/ollama: No such file or directory
    Started ollama.service.
    Stopped ollama.service.
    ollama.service holdoff time over, scheduling restart.
    ollama.service failed.
    Unit ollama.service entered failed state.
    ollama.service: main process exited, code=exited, status=203/EXEC

  2. Then I found that my ollama binary is actually here: /usr/local/bin/ollama
    I modified my ollama service file to use that path; I didn't change anything else except modifying "ExecStart=/usr/bin/ollama serve" to "ExecStart=/usr/local/bin/ollama serve".
    cat /etc/systemd/system/ollama.service shows the following:
    [Unit]
    Description=Ollama Service
    After=network-online.target

[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_MODELS=/data/ollama/.ollama/models"
ExecStart=/usr/local/bin/ollama serve
User=root
Group=root
Restart=always
RestartSec=3

[Install]
WantedBy=default.target

  3. Then start the service:
    sudo systemctl daemon-reload
    sudo systemctl enable ollama
    but I still get the same error.
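For reference, the steps above re-read the unit files and enable the service, but `enable` on its own only creates the boot-time symlink; it does not restart an already-failed unit. A full sequence might look like this (a sketch; the commands require root):

```shell
sudo systemctl daemon-reload    # re-read edited unit files from disk
sudo systemctl enable ollama    # create the boot-time symlink (does not start anything)
sudo systemctl restart ollama   # actually (re)start with the new ExecStart
systemctl status ollama         # confirm "Active: active (running)"
```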

It looks like the service is still looking for /usr/bin/ollama, so it fails.
I tried running /usr/local/bin/ollama serve directly, and it succeeded.
The output of systemctl status ollama is:
● ollama.service
Loaded: loaded (/etc/systemd/system/ollama.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2024-03-17 22:38:42 HKT; 2min 38s ago
Main PID: 75147 (ollama)
Tasks: 23
Memory: 454.5M
CGroup: /system.slice/ollama.service
└─75147 /usr/local/bin/ollama serve

My environment: CentOS 7

I tried reinstalling Ollama, but that also failed.

Are there any recent changes that introduced the issue?

No.

OS

Linux

Architecture

x86

Platform

WSL

Ollama version

0.1.29

GPU

Nvidia

GPU info

No response

CPU

No response

Other software

No response

GiteaMirror added the bug label 2026-04-12 12:08:51 -05:00

@mxyng commented on GitHub (Mar 18, 2024):

How did you install Ollama? The install script should set up the systemd service to use the right path.

`systemctl daemon-reload` must be run before `systemctl start` in order to pick up service updates.
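After editing the unit file and running daemon-reload, there is a direct way to confirm whether systemd actually picked up the change, rather than inferring it from the error. A sketch:

```shell
# Show the unit file(s) systemd has loaded, including any drop-in overrides:
systemctl cat ollama.service
# Show just the parsed ExecStart value systemd will execute:
systemctl show ollama -p ExecStart
```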


@michelle-chou25 commented on GitHub (Apr 21, 2024):

I ran the following commands:
sudo systemctl daemon-reload
sudo systemctl enable ollama
and then sudo systemctl start ollama

Somehow it works correctly this time.


@michelle-chou25 commented on GitHub (May 10, 2024):

It occurred again. I installed Ollama on another Linux machine (CentOS 7).
I modified the configuration file and set OLLAMA_HOST="0.0.0.0:80".
Then I ran:
systemctl daemon-reload
systemctl restart ollama
Then I ran:
ollama serve
time=2024-05-10T21:50:14.255+08:00 level=INFO source=images.go:828 msg="total blobs: 10"
time=2024-05-10T21:50:14.255+08:00 level=INFO source=images.go:835 msg="total unused blobs removed: 0"
time=2024-05-10T21:50:14.256+08:00 level=INFO source=routes.go:1071 msg="Listening on 127.0.0.1:11434 (version 0.1.33)"
time=2024-05-10T21:50:14.326+08:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama584540587/runners
time=2024-05-10T21:50:18.708+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11 rocm_v60002 cpu]"
time=2024-05-10T21:50:18.708+08:00 level=INFO source=gpu.go:96 msg="Detecting GPUs"
time=2024-05-10T21:50:18.724+08:00 level=INFO source=gpu.go:101 msg="detected GPUs" library=/tmp/ollama584540587/runners/cuda_v11/libcudart.so.11.0 count=2
time=2024-05-10T21:50:18.724+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[GIN] 2024/05/10 - 21:56:42 | 200 | 146.377µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/05/10 - 21:56:44 | 200 | 1.827386919s | 127.0.0.1 | POST "/api/pull"

Ollama is still listening on the local address with the default port 11434, not on the new address set in the ollama.service file.
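Note that the log above comes from a second, manually launched copy of the server. One way to check what the systemd-managed instance is actually bound to, without starting a second copy by hand, is a sketch like this (assuming iproute2's `ss` is installed; the port matches the OLLAMA_HOST setting above):

```shell
# List the listening sockets owned by the ollama process:
sudo ss -tlnp | grep ollama
# Or query the API on the configured address directly:
curl -s http://127.0.0.1:80/api/version
```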


@mxyng commented on GitHub (May 10, 2024):

The changes you're making are applied only to the systemd service. By running `ollama serve` explicitly, you're bypassing the updated configurations.

Since it's already running as a service, there's no reason to run `ollama serve`; it's already serving on your requested port (0.0.0.0:80).
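The distinction can be demonstrated generically: a process started from the shell inherits only the shell's environment, while the unit file's Environment= lines apply only to the systemd-managed process. A minimal sketch, with no Ollama involved:

```shell
# A child process sees the variable only when the invoking shell passes it:
OLLAMA_HOST=0.0.0.0:80 sh -c 'echo "child sees: ${OLLAMA_HOST:-unset}"'
# -> child sees: 0.0.0.0:80
sh -c 'echo "child sees: ${OLLAMA_HOST:-unset}"'
# -> child sees: unset   (assuming OLLAMA_HOST is not exported in your shell)
```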


Reference: github-starred/ollama#1970