The unit ollama.service has entered the 'failed' state with result 'exit-code'. #7635

Closed
opened 2025-11-12 14:12:59 -06:00 by GiteaMirror · 8 comments
Owner

Originally created by @rrevi on GitHub (Jul 28, 2025).

What is the issue?

After installing Ollama using: curl -fsSL https://ollama.com/install.sh | sh

the systemd ollama service fails to start.

Relevant log output

Jul 28 08:28:30 omnibook-ultra-14 systemd[1]: Started Ollama Service.
░░ Subject: A start job for unit ollama.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit ollama.service has finished successfully.
░░
░░ The job identifier is 119545.
Jul 28 08:28:30 omnibook-ultra-14 ollama[74021]: Couldn't find '/var/lib/ollama/.ollama/id_ed25519'. Generating new private key.
Jul 28 08:28:30 omnibook-ultra-14 ollama[74021]: Error: could not create directory mkdir /var/lib/ollama: permission denied
Jul 28 08:28:30 omnibook-ultra-14 systemd[1]: ollama.service: Main process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ An ExecStart= process belonging to unit ollama.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 1.
Jul 28 08:28:30 omnibook-ultra-14 systemd[1]: ollama.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ The unit ollama.service has entered the 'failed' state with result 'exit-code'.

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

No response

GiteaMirror added the bug label 2025-11-12 14:12:59 -06:00

@rick-github commented on GitHub (Jul 28, 2025):

Output of

systemctl cat ollama

@rrevi commented on GitHub (Jul 28, 2025):

Output of

systemctl cat ollama
# /etc/systemd/system/ollama.service
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=no
RestartSec=3
Environment="PATH=/home/rrevi/.local/share/mise/installs/java/21.0.2/bin:/home/rrevi/.local/share/mise/installs/kotlin/2.1.21/kotlinc/bin:/home/rrevi/.local/share/mise/installs/kotlin/2>

[Install]
WantedBy=default.target

@rick-github


@rick-github commented on GitHub (Jul 28, 2025):

grep ollama /etc/passwd

@rrevi commented on GitHub (Jul 28, 2025):

grep ollama /etc/passwd
ollama:x:947:947:ollama user:/var/lib/ollama:/usr/bin/nologin
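The passwd entry shows the service user's home directory is /var/lib/ollama, which is exactly the path the mkdir error complains about. A minimal sketch of the underlying check, using a temp path so it runs unprivileged (on the real system the interesting path is /var/lib/ollama, checked as the service user, e.g. sudo -u ollama test -w /var/lib/ollama):

```shell
# Hedged diagnostic sketch: check whether a directory exists and is
# writable by the current user. /var/lib/ollama is stood in for by a
# temp directory here so the snippet can run without root.
dir=$(mktemp -d)                         # stands in for /var/lib/ollama
if [ -d "$dir" ] && [ -w "$dir" ]; then
    echo "writable: $dir"
else
    echo "not writable or missing: $dir"
fi
```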

@sohochaser commented on GitHub (Jul 28, 2025):

I also hit this problem. It works fine when run with ollama serve, but exits after a while (about 10 minutes, without any log).

Jul 28 22:46:48 pm-78bf systemd[1]: Started Ollama Service.
Jul 28 22:46:48 pm-78bf systemd[1]: ollama.service: Main process exited, code=exited, status=1/FAILURE
Jul 28 22:46:48 pm-78bf systemd[1]: ollama.service: Failed with result 'exit-code'.
Jul 28 22:46:51 pm-78bf systemd[1]: ollama.service: Service RestartSec=3s expired, scheduling restart.
Jul 28 22:46:51 pm-78bf systemd[1]: ollama.service: Scheduled restart job, restart counter is at 455.
Jul 28 22:46:51 pm-78bf systemd[1]: Stopped Ollama Service.
Jul 28 22:46:51 pm-78bf systemd[1]: Started Ollama Service.
Jul 28 22:46:51 pm-78bf systemd[1]: ollama.service: Main process exited, code=exited, status=1/FAILURE
Jul 28 22:46:51 pm-78bf systemd[1]: ollama.service: Failed with result 'exit-code'.

(base) [root@pm-78bf log]# ollama serve
time=2025-07-28T22:43:18.190+08:00 level=INFO source=routes.go:1235 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:7999 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/mnt/extdisk2/docker/volumes/llm/_data/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-07-28T22:43:18.194+08:00 level=INFO source=images.go:476 msg="total blobs: 35"
time=2025-07-28T22:43:18.194+08:00 level=INFO source=images.go:483 msg="total unused blobs removed: 0"
time=2025-07-28T22:43:18.194+08:00 level=INFO source=routes.go:1288 msg="Listening on [::]:7999 (version 0.9.6)"
time=2025-07-28T22:43:18.197+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-07-28T22:43:21.470+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-04e02cef-4819-94ba-92c0-00178c9bc952 library=cuda variant=v12 compute=8.0 driver=12.4 name="NVIDIA A800-SXM4-40GB" total="39.6 GiB" available="5.8 GiB"

Run directly this way, it works well.

ollama -v

ollama version is 0.9.6


@rick-github commented on GitHub (Jul 28, 2025):

@rrevi

sudo mkdir -p /var/lib/ollama
sudo chown ollama:ollama /var/lib/ollama

@sohochaser
Open a new ticket. Include server log and output of systemctl cat ollama.
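The fix above works because ollama serve, running as the ollama user, needs to write its key under $HOME/.ollama but cannot create its own home directory inside root-owned /var/lib. A sketch of the same pattern using a temp path so it runs unprivileged; on the real system the equivalent is the sudo mkdir/chown pair above followed by sudo systemctl restart ollama:

```shell
# Sketch of the fix pattern (temp path stands in for /var/lib/ollama).
# Real-system equivalent:
#   sudo mkdir -p /var/lib/ollama
#   sudo chown ollama:ollama /var/lib/ollama
#   sudo systemctl restart ollama
state_dir=$(mktemp -d)/ollama            # stands in for /var/lib/ollama
mkdir -p "$state_dir/.ollama"            # pre-create what the service needs
touch "$state_dir/.ollama/id_ed25519"    # key generation can now write here
ls "$state_dir/.ollama"
```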


@rrevi commented on GitHub (Jul 29, 2025):

@rrevi

sudo mkdir -p /var/lib/ollama
sudo chown ollama:ollama /var/lib/ollama

@sohochaser Open a new ticket. Include server log and output of systemctl cat ollama.

@rick-github , that did it! Thank you.

Unfortunately, despite the successful service start, my ollama install can't seem to find my iGPU :(

Jul 28 20:52:04 omnibook-ultra-14 systemd[1]: Started Ollama Service.
░░ Subject: A start job for unit ollama.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit ollama.service has finished successfully.
░░
░░ The job identifier is 26095.
Jul 28 20:52:04 omnibook-ultra-14 ollama[103703]: Couldn't find '/var/lib/ollama/.ollama/id_ed25519'. Generating new private key.
Jul 28 20:52:04 omnibook-ultra-14 ollama[103703]: Your new public key is:
Jul 28 20:52:04 omnibook-ultra-14 ollama[103703]: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHantvXTIoDqpGd7Ahu3Ccxc6FyyYYf63AA3X2sScq5h
Jul 28 20:52:04 omnibook-ultra-14 ollama[103703]: time=2025-07-28T20:52:04.657-04:00 level=INFO source=routes.go:1235 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDIN>
Jul 28 20:52:04 omnibook-ultra-14 ollama[103703]: time=2025-07-28T20:52:04.667-04:00 level=INFO source=images.go:476 msg="total blobs: 0"
Jul 28 20:52:04 omnibook-ultra-14 ollama[103703]: time=2025-07-28T20:52:04.667-04:00 level=INFO source=images.go:483 msg="total unused blobs removed: 0"
Jul 28 20:52:04 omnibook-ultra-14 ollama[103703]: time=2025-07-28T20:52:04.667-04:00 level=INFO source=routes.go:1288 msg="Listening on 127.0.0.1:11434 (version 0.9.6)"
Jul 28 20:52:04 omnibook-ultra-14 ollama[103703]: time=2025-07-28T20:52:04.667-04:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Jul 28 20:52:04 omnibook-ultra-14 ollama[103703]: time=2025-07-28T20:52:04.697-04:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/>
Jul 28 20:52:04 omnibook-ultra-14 ollama[103703]: time=2025-07-28T20:52:04.698-04:00 level=INFO source=amd_linux.go:296 msg="unsupported Radeon iGPU detected skipping" id=0 total="512.0>
Jul 28 20:52:04 omnibook-ultra-14 ollama[103703]: time=2025-07-28T20:52:04.698-04:00 level=INFO source=amd_linux.go:402 msg="no compatible amdgpu devices detected"
Jul 28 20:52:04 omnibook-ultra-14 ollama[103703]: time=2025-07-28T20:52:04.698-04:00 level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
Jul 28 20:52:04 omnibook-ultra-14 ollama[103703]: time=2025-07-28T20:52:04.698-04:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver>

I can use LM Studio with Vulkan just fine. Does Ollama support Vulkan? How can I get Ollama to recognize my AMD iGPU (gfx1150)?


@rick-github commented on GitHub (Jul 29, 2025):

Ollama doesn't currently support Vulkan or iGPUs.


Reference: github-starred/ollama-ollama#7635