[GH-ISSUE #1391] Totally stumped :-( #62773

Closed
opened 2026-05-03 10:17:01 -05:00 by GiteaMirror · 9 comments

Originally created by @itscvenk on GitHub (Dec 5, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1391

I have this in the config (and yes, it is below and above the respective sections, as I learnt the hard way, LOL)

```
Environment="OLLAMA_HOST=mysubdomain.domain.com:11434"
Environment="OLLAMA_ORIGINS='my.ip.in.v4'"
```

Actual values were used above, server was also rebooted (as restarting the service had no effect)

And with localhost it works fine, for example:

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt":"Why is the sky blue?"
}'
```

But when I use mysubdomain.domain.com, I get connection refused even when trying from a shell on the same host :-(
And it doesn't matter if I use http or https. I have installed Let's Encrypt certificates on the server

```
curl http://mysubdomain.domain.com:11434/api/generate -d '{
>  "model": "llama2",
>  "prompt":"Why is the sky blue?"
>  }'
curl: (7) Failed to connect to mysubdomain.mydomain.com port 11434 after 140 ms: Connection refused
```

This has me totally foxed! The http call should work, right? And I hope https will work remotely if it is allowed in `OLLAMA_ORIGINS` in the config

Please help

Thanks


@jmorganca commented on GitHub (Dec 5, 2023):

Sorry about this – we'll work on making it easier. In the meantime, I believe you want to set `OLLAMA_HOST` to either `localhost:11434` or `0.0.0.0:11434` (expose Ollama externally). Let me know if this helps!
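A minimal drop-in sketch of that suggestion (the drop-in path follows the systemd convention shown later in this thread; the bind address is the only assumption):

```
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
# Bind to all interfaces so remote hosts can reach the API
Environment="OLLAMA_HOST=0.0.0.0:11434"
```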


@mxyng commented on GitHub (Dec 5, 2023):

> Actual values were used above, server was also rebooted (as restarting the service had no effect)

Changing these settings requires reloading systemd with `systemctl daemon-reload`

To ensure configurations are set correctly, can you attach the outputs of `systemctl cat ollama`?

Detailed instructions for exposing the service are described in the [FAQ](https://github.com/jmorganca/ollama/blob/main/docs/faq.md#how-can-i-expose-ollama-on-my-network)
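In case it helps, the full apply-and-verify sequence can be sketched as (assuming the `ollama` systemd service installed by the Linux install script):

```shell
# Re-read unit files after editing the drop-in, then restart the service
sudo systemctl daemon-reload
sudo systemctl restart ollama

# Confirm the merged configuration systemd actually loaded
systemctl cat ollama

# Check the server answers locally ("Ollama is running")
curl http://localhost:11434/
```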


@itscvenk commented on GitHub (Dec 6, 2023):

Hi

Thanks for the prompt reply

I did reload the daemon

The hostname command shows the actual subdomain that points to this server (mysubdomain.domain.com). Actual domain names are removed.

And here's the output for `systemctl cat ollama`

```
root@mysubdomain:/home/username77# systemctl cat ollama
# /etc/systemd/system/ollama.service
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"

[Install]
WantedBy=default.target

# /etc/systemd/system/ollama.service.d/override.conf
Environment="OLLAMA_HOST=mysubdomain.mydomain.com:11434"
Environment="OLLAMA_ORIGINS='allowed.ip.address.here'"
```

Thanks again

> > Actual values were used above, server was also rebooted (as restarting the service had no effect)
>
> Changing these settings requires reloading systemd with `systemctl daemon-reload`
>
> To ensure configurations are set correctly, can you attach the outputs of `systemctl cat ollama`?
>
> Detailed instructions for exposing the service are described in the [FAQ](https://github.com/jmorganca/ollama/blob/main/docs/faq.md#how-can-i-expose-ollama-on-my-network)


@itscvenk commented on GitHub (Dec 6, 2023):

That didn't help. Neither 0.0.0.0 nor localhost helped.

I also tried to have the subdomain within single inverted commas

`Environment="OLLAMA_HOST='mysubdomain.mydomain.com:11434'"`

But that didn't help either (I did restart the daemon... and just for the heck of it rebooted my VM as well)

Note: hostname, IP, etc. are all set for mysubdomain and it resolves fine. When the real IP is 1.2.3.4, for example, the curl to that IP fails as well with the connection refused message. Very strange

When I do an nslookup the IP shows up fine

All this on multiple shells within the same host :-(

> Sorry about this – we'll work on making it easier. In the meantime, I believe you want to set `OLLAMA_HOST` to either `localhost:11434` or `0.0.0.0:11434` (expose Ollama externally). Let me know if this helps!


@lfoppiano commented on GitHub (Dec 6, 2023):

I have this configuration

```
luca@wanda:~/development/github/Ollama-Gui$ systemctl cat ollama
# /etc/systemd/system/ollama.service
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/opt/slurm/19.05.6/bin:/opt/cuda/10.1/open64/bin:/opt/cuda/10.1/samples/bin/x86_64/linux/release:/opt/cuda/10.1/bin:/opt/singularit

[Install]
WantedBy=default.target

# /etc/systemd/system/ollama.service.d/environment.conf
[Service]
Environment="HTTPS_PROXY=http://proxyout.nims.go.jp:8888"
Environment="OLLAMA_ORIGINS=*"
Environment="OLLAMA_HOST=0.0.0.0:11434"
```

but when I call

```
(base) Lucas-M1-MacBook-Pro:~ lfoppiano$ curl -I http://wanda.nims.go.jp:11434
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Date: Wed, 06 Dec 2023 07:28:12 GMT
Content-Length: 17

(base) Lucas-M1-MacBook-Pro:~ lfoppiano$ curl -i http://wanda.nims.go.jp:11434
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Date: Wed, 06 Dec 2023 07:28:16 GMT
Content-Length: 17
```

I don't seem to see the correct headers for allowing CORS

When I comment out `OLLAMA_HOST`, I see that the change is taken into account

```
Dec 06 16:33:53 wanda.nims.go.jp ollama[48424]: 2023/12/06 16:33:53 images.go:734: total blobs: 21
Dec 06 16:33:53 wanda.nims.go.jp ollama[48424]: 2023/12/06 16:33:53 images.go:741: total unused blobs removed: 0
Dec 06 16:33:53 wanda.nims.go.jp ollama[48424]: 2023/12/06 16:33:53 routes.go:787: Listening on 127.0.0.1:11434 (version 0.1.13)
Dec 06 16:34:16 wanda.nims.go.jp systemd[1]: Stopping Ollama Service...
Dec 06 16:34:16 wanda.nims.go.jp systemd[1]: Stopped Ollama Service.
Dec 06 16:34:16 wanda.nims.go.jp systemd[1]: Started Ollama Service.
Dec 06 16:34:16 wanda.nims.go.jp ollama[48522]: 2023/12/06 16:34:16 images.go:734: total blobs: 21
Dec 06 16:34:16 wanda.nims.go.jp ollama[48522]: 2023/12/06 16:34:16 images.go:741: total unused blobs removed: 0
Dec 06 16:34:16 wanda.nims.go.jp ollama[48522]: 2023/12/06 16:34:16 routes.go:787: Listening on [::]:11434 (version 0.1.13)
```

Any suggestion?
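For readers hitting the same symptom: a quick way to confirm which address the daemon actually bound to is to check the listening sockets or the journal (a sketch; assumes `ss` from iproute2 is available):

```shell
# Show which address/port the ollama process is listening on
sudo ss -tlnp | grep 11434

# Or read it straight from the service log
journalctl -u ollama | grep -i listening
```

If this shows 127.0.0.1:11434 rather than [::]:11434 or 0.0.0.0:11434, the `OLLAMA_HOST` setting was not picked up.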


@itscvenk commented on GitHub (Dec 6, 2023):

I just stumbled upon https://github.com/jmorganca/ollama/blob/main/docs/faq.md and shall follow instructions there.

In any case, I now have a GPU-based Linux box to play with, so I will have to install it again there

Thanks all


@mxyng commented on GitHub (Dec 6, 2023):

@itscvenk it seems you figured it out but for posterity, `Environment` needs to be under the `[Service]` section, e.g.

```
[Service]
Environment="OLLAMA_HOST=mysubdomain.mydomain.com:11434"
Environment="OLLAMA_ORIGINS='allowed.ip.address.here'"
```

@lfoppiano the configuration looks fine. `OLLAMA_ORIGINS` configures the origins that are allowed to communicate with Ollama. Here's an example which should clarify what I mean:

Without setting `OLLAMA_ORIGINS`:

```
$ curl -i -H 'Origin:ollama.example.com' 127.0.0.1:11434/
HTTP/1.1 403 Forbidden
Date: Wed, 06 Dec 2023 17:47:14 GMT
Content-Length: 0
```

Setting `OLLAMA_ORIGINS='*'`:

```
$ curl -i -H 'Origin:ollama.example.com' 127.0.0.1:11434/
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Type: text/plain; charset=utf-8
Date: Wed, 06 Dec 2023 17:48:02 GMT
Content-Length: 17

Ollama is running
```

> Note: `Access-Control-Allow-Origin` only appears when `Origin` is set

```
$ curl -i 127.0.0.1:11434/
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Date: Wed, 06 Dec 2023 17:48:29 GMT
Content-Length: 17

Ollama is running
```

@itscvenk commented on GitHub (Dec 6, 2023):

@mxyng : have several nice years ahead my friend

Stay happy and blessed


@lfoppiano commented on GitHub (Dec 7, 2023):

@mxyng Thanks!
