[GH-ISSUE #7562] ollama update fails to restart systemd service #66871

Closed
opened 2026-05-04 08:31:37 -05:00 by GiteaMirror · 18 comments

Originally created by @remco-pc on GitHub (Nov 7, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7562

What is the issue?

I tried updating Ollama with the curl | sh command and then tried to run the Llama vision model, but got this error:

root@f456206f9006:/mnt/Vps3/Mount# curl -fsSL https://ollama.com/install.sh | sh
>>> Installing ollama to /usr/local
>>> Downloading Linux amd64 bundle
######################################################################## 100.0%
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
WARNING: Unable to detect NVIDIA/AMD GPU. Install lspci or lshw to automatically detect and install GPU dependencies.
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.
root@f456206f9006:/mnt/Vps3/Mount# ollama pull llama3.2-vision
pulling manifest
Error: pull model manifest: 412:

The model you are attempting to pull requires a newer version of Ollama.

Please download the latest version at:

    https://ollama.com/download

root@f456206f9006:/mnt/Vps3/Mount#

OS

Linux

GPU

No response

CPU

Intel

Ollama version

ollama version is 0.3.12
Warning: client version is 0.4.0
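
That version string captures the problem: the systemd-managed server was still running 0.3.12 while the newly installed client was 0.4.0. A minimal way to compare the two directly, assuming the default endpoint of 127.0.0.1:11434:

# server version, as reported by the running API
curl -s http://127.0.0.1:11434/api/version
# client version; warns when it differs from the server it talks to
ollama --version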

GiteaMirror added the question, linux, docker labels 2026-05-04 08:31:42 -05:00

@remco-pc commented on GitHub (Nov 7, 2024):

also: https://www.youtube.com/watch?v=a8gd892WKLM for anyone interested in seeing it in action...


@etherblitzdev commented on GitHub (Nov 7, 2024):

sudo tar -C /usr -xzf ollama-linux-amd64-rocm.tgz
sudo useradd -r -s /bin/false -U -m -d /usr/share/ollama ollama
sudo usermod -a -G ollama $(whoami)
sudo nano /etc/systemd/system/ollama.service
sudo systemctl daemon-reload
sudo systemctl enable ollama
sudo systemctl restart ollama

sudo systemctl status ollama

# Yes, the below resolved it on this machine
# (yes, configured as a service: /etc/systemd/system/ollama.service)
# Worked like lucky charms! The above alone was insufficient; a reboot was also needed.

sudo reboot

ollama --version
ollama version is 0.4.0

#Downloaded :-D
ollama pull llama3.2-vision

#Downloading :-D
ollama pull llama3.2-vision:90b


@rick-github commented on GitHub (Nov 7, 2024):

What's the output of

systemctl --no-pager cat ollama

@remco-pc commented on GitHub (Nov 7, 2024):

kill 540... and the process is guarded... and restarts automatically


@remco-pc commented on GitHub (Nov 7, 2024):

Screenshot ("afbeelding"): https://github.com/user-attachments/assets/b559083f-dadf-4138-b699-65658f8429fe


@dhiltgen commented on GitHub (Nov 8, 2024):

@remco-pc what does systemctl is-system-running report on your system? It looks like that didn't return what we were expecting, so we didn't automatically restart the service and the old version was still running. A quick workaround is to just bounce the service with sudo systemctl restart ollama, but it looks like you already got it running from your latest update.
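
For reference, a sketch of the workaround described above, on a host where ollama actually runs under systemd:

# check whether systemd reports itself fully up; the installer's auto-restart
# keys off this, and inside a container it typically fails or reports "offline"
systemctl is-system-running
# bounce the service so the newly installed binary is the one running
sudo systemctl restart ollama
# should now report the updated version
ollama --version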


@remco-pc commented on GitHub (Nov 15, 2024):

@dhiltgen it's in Docker without systemd, and after an update it should kill and restart the service.


@rick-github commented on GitHub (Nov 15, 2024):

If you update ollama inside a container with curl|sh, you lose the update when the container is restarted. If you want to update ollama, use docker pull ollama/ollama to fetch the latest image.
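
For example, a minimal update flow with plain docker (the container and volume names here are illustrative, not from the original thread):

# fetch the latest image
docker pull ollama/ollama
# recreate the container from the new image
docker rm -f ollama
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama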


@remco-pc commented on GitHub (Nov 17, 2024):

@rick-github if you reinstall with Docker, you can lose your models; I've symlinked them to a mount point, so you only lose the symlink on reinstallation.


@rick-github commented on GitHub (Nov 17, 2024):

Mount the model storage location.

docker run -d -v /path/to/model/on/host/system:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

@remco-pc commented on GitHub (Nov 18, 2024):

volumes:
  mount:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /mnt/Vps3/Mount
  mount2:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /mnt/Disk2

Disk2 is www-data (a full 2 TB disk), or smaller/bigger
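
As an aside, a quick way to confirm a bind-backed named volume like the ones above resolves to the intended host path (using the volume name mount from the snippet):

# prints the device the named volume is bound to, e.g. /mnt/Vps3/Mount
docker volume inspect mount --format '{{ .Options.device }}'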


@rick-github commented on GitHub (Nov 18, 2024):

This is how to update:

docker pull ollama/ollama
docker compose up -d ollama
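
Equivalently, compose can handle the pull itself, assuming the service is named ollama in the compose file:

docker compose pull ollama
docker compose up -d ollama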

@remco-pc commented on GitHub (Nov 18, 2024):

I do the sh command and restart it.


@rick-github commented on GitHub (Nov 18, 2024):

If you do curl|sh inside the container and restart it, the update will be deleted. That's the point of a container: you can make changes in it, restart, and it's back to the original state. To update a container, you pull the new version.
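
A quick way to see this in practice, assuming a compose service named ollama:

# reports the curl|sh-updated version, for now
docker compose exec ollama ollama --version
# recreate the container from its image; the writable layer, and the update with it, is discarded
docker compose up -d --force-recreate ollama
# back to the version bundled in the image
docker compose exec ollama ollama --version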


@remco-pc commented on GitHub (Nov 18, 2024):

@rick-github Inside docker compose I run RUN curl -fsSL https://ollama.com/install.sh | sh and make a symlink to my model directory; works like a charm. For an update, I need to kill the old process and my guards will restart the ollama process (I have hard & soft locks on CPU).


@rick-github commented on GitHub (Nov 18, 2024):

That sounds more like a Dockerfile (used to build images) than a docker compose file, which you could avoid by just pulling the updated ollama image. It sounds like you are making life hard for yourself, building bespoke images that are already available and having to kill processes, all of which would just be handled as part of a docker compose up. The upshot is that if you really want to do it this way, that's up to you, but it's not an ollama problem.


@remco-pc commented on GitHub (Nov 18, 2024):

It runs on a different port inside my docker-compose container with a custom backend/frontend. I'm fine, thank you for pointing it out; ollama got added quickly, and I'm not using anything other than a local connection with server-sent events and JSON, without systemd, to have a more secure environment.


@remco-pc commented on GitHub (Nov 19, 2024):

@rick-github this one I had to duplicate a line to get the end result:

tetris: https://www.youtube.com/watch?v=mU_2mnw_wwQ


Reference: github-starred/ollama#66871