[GH-ISSUE #742] Changing the model when running as a background service on Linux #46860

Closed
opened 2026-04-28 01:10:07 -05:00 by GiteaMirror · 7 comments

Originally created by @wifiuk on GitHub (Oct 9, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/742

Originally assigned to: @BruceMacD on GitHub.

I still don't know how to change the model when it's running as a background service; it's stuck with the llama 7B model.

There is no documentation showing how to do this. Can't we just set something like Environment="OLLAMA_MODEL=llama2:13b"?

`cat /etc/systemd/system/ollama.service`

```
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="HOME=/usr/share/ollama"
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/snap/bin"
Environment="OLLAMA_HOST=0.0.0.0:11435"
Environment="OLLAMA_ORIGINS=http://192.168.x.x:*"

[Install]
WantedBy=default.target
```

Even when it's running in the background as a service, I can't connect to it locally from another terminal window, because the client thinks the server hasn't been started. Then it wants you to run it again on a different port.

Can we have some better documentation around this please?


@mxyng commented on GitHub (Oct 9, 2023):

The model is selected at runtime. A generate request has a [`model`](https://github.com/jmorganca/ollama/blob/main/docs/api.md#generate-a-completion) field that specifies which model to use. If one model (e.g. `llama2`) is already loaded and another (e.g. `falcon`) is requested, Ollama will swap it out.
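
For example, a minimal sketch of selecting a different model per request (assuming `llama2:13b` has already been pulled into the service's model store; the 11435 port is taken from the unit file above and the prompt is just a placeholder):

```
# Ask the running server to answer with llama2:13b instead of the default model
curl http://localhost:11435/api/generate -d '{
  "model": "llama2:13b",
  "prompt": "Why is the sky blue?"
}'
```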


@wifiuk commented on GitHub (Oct 9, 2023):

However, when I set the model I get:

Ollama call failed with status code 400: stat /usr/share/ollama/.ollama/models/manifests/registry.ollama.ai/library/llama2/13b: no such file or directory

even though I downloaded the 13b model before.


@BruceMacD commented on GitHub (Oct 9, 2023):

Sounds like #718, where the model needs a pull first. I'm not sure why it would need to be pulled again if you had used it previously, though.
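
One possible explanation, offered as an assumption rather than a confirmed diagnosis: models pulled while running a separate `ollama serve` under your own user end up in that user's `~/.ollama`, not in the service's store under `/usr/share/ollama/.ollama`. A minimal sketch of pulling the model against the running service (the 11435 port is taken from the unit file above):

```
# Point the CLI at the systemd-managed server, then pull the missing model into its store
OLLAMA_HOST=127.0.0.1:11435 ollama pull llama2:13b

# Confirm the service can now see it
OLLAMA_HOST=127.0.0.1:11435 ollama list
```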


@antonio-castellon commented on GitHub (Oct 10, 2023):

I have the same problem: if I run `ollama run llama2` it works well, and any pull indicates the model is already downloaded, but the service (`ollama serve`) is not able to find any model.


@byteconcepts commented on GitHub (Oct 23, 2023):

Strangely, I noticed yesterday that Ollama was suddenly listening on a different port (when run as a system service).

I then added...
`Environment="OLLAMA_HOST=0.0.0.0:4711"`
...to the ollama.service file, like wifiuk.

To make access a little easier, I then added the following to my user's ~/.bash_aliases file:

```
alias ollama-run='OLLAMA_HOST="192.168.0.15:4711" ollama run'
alias ollama-list='OLLAMA_HOST="192.168.0.15:4711" ollama list'
```

192.168.0.15 is the machine's external interface address.
(If the client and server are on the same machine, 127.0.0.1 is enough.)

Then, after a `source ~/.bash_aliases`, I could use the commands `ollama-run [model-name]` or `ollama-list` successfully.

I just noticed I should also add an alias for `ollama show`.
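
Following the same pattern, such an alias (hypothetical, not part of the original comment) could look like:

```
alias ollama-show='OLLAMA_HOST="192.168.0.15:4711" ollama show'
```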

Without the aliases, I enter this in the console:

```
$ OLLAMA_HOST="127.0.0.1:4711" ollama list
NAME                            ID              SIZE    MODIFIED
ellie:latest                    71f25ef48cab    3.8 GB  3 hours ago
everythinglm:latest             bb66cc8d6bfe    7.4 GB  7 hours ago
jolie:latest                    72c8b2005de1    7.4 GB  3 hours ago
llama2:latest                   7da22eda89ac    3.8 GB  8 days ago
llama2-uncensored:latest        ff4791cdfa68    3.8 GB  26 hours ago
mistral-openorca:latest         12dc6acc14d0    4.1 GB  8 days ago
starcoder:latest                18be557f0e69    1.8 GB  7 days ago
wizardlm-uncensored:latest      5c4b5d543c3b    7.4 GB  9 hours ago
```

@mxyng commented on GitHub (Oct 25, 2023):

On Linux, `ollama serve` should not be called directly; the systemd service should be used instead. The FAQ has been updated with more information: https://github.com/jmorganca/ollama/blob/main/docs/faq.md#how-can-i-expose-the-ollama-server
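
In that spirit, a minimal sketch of adjusting the service's environment without editing the unit file by hand, using standard systemd tooling (the OLLAMA_HOST value is just the one from the original post):

```
# Create a drop-in override for the ollama service
sudo systemctl edit ollama.service

# In the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0:11435"

# Apply the override
sudo systemctl daemon-reload
sudo systemctl restart ollama
```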


@technovangelist commented on GitHub (Dec 4, 2023):

I think Michael's comment above should address everything from the original issue, so I will go ahead and close it now. If you think there is anything we left out, reopen it and we can address it. Thanks for being part of this great community.
