[GH-ISSUE #4856] OLLAMA_MODELS is broken in 0.1.41 #49583

Closed
opened 2026-04-28 12:21:06 -05:00 by GiteaMirror · 6 comments

Originally created by @rcarmo on GitHub (Jun 6, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4856

What is the issue?

I am running Fedora Silverblue (with an immutable filesystem), so I have long set ollama to run with my own systemd unit:

```ini
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
Restart=always
RestartSec=3
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_MODELS=/var/mnt/models/ollama"
Environment="PATH=/var/home/me/.local/bin:/var/home/me/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin"
ExecStart=/usr/local/bin/ollama serve

[Install]
WantedBy=default.target
```

Upgrading to `0.1.41` broke this spectacularly, because it seems to have stopped using environment variables at all. The config above used to force ollama to look for its data under `/var/mnt/models`, but it no longer does. What I get is this:

```bash
make restart
systemctl --user restart ollama
systemctl --user status ollama
● ollama.service - Ollama Service
     Loaded: loaded (/etc/systemd/user/ollama.service; enabled; preset: disabled)
    Drop-In: /usr/lib/systemd/user/service.d
             └─10-timeout-abort.conf
     Active: active (running) since Thu 2024-06-06 14:41:28 WEST; 5ms ago
   Main PID: 8099 (ollama)
      Tasks: 5 (limit: 38341)
     Memory: 2.9M (peak: 3.1M)
        CPU: 5ms
     CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/ollama.service
             └─8099 /usr/local/bin/ollama serve

Jun 06 14:41:28 silverblue systemd[7552]: Started ollama.service - Ollama Service.
me@silverblue:~/.ollama$ make logs
journalctl -fu ollama
Jun 06 14:30:27 silverblue systemd[1]: Started ollama.service - Ollama Service.
Jun 06 14:30:27 silverblue ollama[7830]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
Jun 06 14:30:27 silverblue ollama[7830]: Error: could not create directory mkdir /usr/share/ollama: read-only file system
Jun 06 14:30:27 silverblue systemd[1]: ollama.service: Main process exited, code=exited, status=1/FAILURE
Jun 06 14:30:27 silverblue systemd[1]: ollama.service: Failed with result 'exit-code'.
Jun 06 14:30:27 silverblue systemd[1]: Stopped ollama.service - Ollama Service.
```
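One way to rule out systemd silently dropping the variables on reload is to inspect the unit's effective `Environment` property. A sketch in POSIX shell (the sample line below is hard-coded for illustration; in practice it would come from `systemctl --user show ollama -p Environment`):

```shell
# Inspect what systemd thinks the unit's environment is. The real command is:
#   systemctl --user show ollama -p Environment
# Here the output line is hard-coded (values copied from the unit file above)
# so the parsing can be shown on its own.
env_line='Environment=OLLAMA_HOST=0.0.0.0:11434 OLLAMA_MODELS=/var/mnt/models/ollama'

# Check that the variable we care about survived the daemon-reload.
case "$env_line" in
  *OLLAMA_MODELS=*) echo "OLLAMA_MODELS is present in the unit" ;;
  *)                echo "OLLAMA_MODELS is missing from the unit" ;;
esac
```

If the variable shows up here but the server still ignores it, the problem is on the ollama side rather than systemd's.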

I have a Makefile that handles upgrading and switching to my custom unit file above; it works like this:

```makefile
upgrade:
	curl -fsSL https://ollama.com/install.sh | sh
	sudo systemctl stop ollama
	sudo systemctl disable ollama
	sudo rm /etc/systemd/system/ollama.service
	sudo cp ollama.service /etc/systemd/user/
	systemctl --user daemon-reload
	systemctl --user start ollama
	systemctl --user status ollama

logs:
	journalctl -fu ollama

restart:
	systemctl --user restart ollama
	systemctl --user status ollama
```

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.41

GiteaMirror added the bug label 2026-04-28 12:21:06 -05:00

@pdevine commented on GitHub (Jun 8, 2024):

@rcarmo I'm assuming that the issue is w/ systemd, but can you set `OLLAMA_DEBUG=1` and post the logs from the ollama service? The first entry in the logs should look something like:

```
2024/06/06 18:15:07 routes.go:1007: INFO server config env="map[OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST: OLLAMA_KEEP_ALIVE:20m OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:2 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS: OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:true OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
```

That will show whether systemd is passing in the env variables correctly.
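For a user unit, one non-invasive way to turn that on is a systemd drop-in override rather than editing the unit file itself; a sketch (the override path is the conventional one `systemctl --user edit ollama` creates):

```ini
# ~/.config/systemd/user/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_DEBUG=1"
```

followed by `systemctl --user daemon-reload` and a restart before collecting the logs.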


@glibg10b commented on GitHub (Jun 12, 2024):

I'm not the original poster and don't have any issues, but just for interest's sake, here's mine:

```
INFO server config env="map[OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST: OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS: OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
```

```
Environment="OLLAMA_MODELS=/home/waldo/media/main/sysroot/usr/share/ollama/.ollama/models"
```

@pdevine commented on GitHub (Jun 12, 2024):

Ah crap, I realized the `OLLAMA_MODELS` variable doesn't actually print out in there (it still gets set, but just doesn't get displayed there).


@pdevine commented on GitHub (Jun 13, 2024):

Fixed the issue w/ `OLLAMA_MODELS` not being displayed correctly in #5029. `OLLAMA_HOST` also wasn't being displayed properly, so I fixed that w/ #5009.


@pdevine commented on GitHub (Jun 13, 2024):

@rcarmo Ok, I missed this in the original logs:

```
Jun 06 14:30:27 silverblue ollama[7830]: Error: could not create directory mkdir /usr/share/ollama: read-only file system
```

Just check that the permissions are set correctly.

I'm going to go ahead and close the issue. #5029 should make this easier to debug in the future.
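As a quick sanity check for the permission angle, the directories ollama needs can be probed from the shell; a sketch (the paths are taken from the reporter's unit file above and should be adjusted for other setups):

```shell
# Report whether a directory exists and is writable by the current user.
check_writable() {
  dir="$1"
  if [ -d "$dir" ] && [ -w "$dir" ]; then
    echo "ok: $dir is writable"
  else
    echo "problem: $dir is missing or not writable"
  fi
}

# Paths from the unit file in the original report (adjust as needed).
check_writable /var/mnt/models/ollama
check_writable "$HOME/.ollama"
```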


@rzippo commented on GitHub (Jan 22, 2026):

> Just check that the permissions are set correctly.

This is not a simple permissions issue. The `/usr/share` folder is read-only because the distribution is immutable.

This is what happens because of [#8297](https://github.com/ollama/ollama/issues/8297).
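The distinction matters because no `chmod`/`chown` will help on an image-based system; whether the tree actually rejects writes can be probed directly. A sketch (the test filename is arbitrary; on an immutable image the `touch` fails even as root):

```shell
# Probe whether /usr/share accepts writes at all.
probe_dir="/usr/share"
if touch "$probe_dir/.ollama-write-test" 2>/dev/null; then
  rm -f "$probe_dir/.ollama-write-test"
  result="$probe_dir is writable"
else
  result="$probe_dir is read-only"
fi
echo "$result"
```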

Reference: github-starred/ollama#49583