[GH-ISSUE #703] Allow listening on all local interfaces #62361

Closed
opened 2026-05-03 08:25:44 -05:00 by GiteaMirror · 64 comments

Originally created by @vRobM on GitHub (Oct 4, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/703

This means listening not just on loopback but on all other private networks as well.

The loopback-only default makes Ollama unusable in containers and in configurations with proxies in front.


@65a commented on GitHub (Oct 5, 2023):

This surprised me because it is not settable by flag (which is where I usually look for things like that), but setting `OLLAMA_HOST=0.0.0.0` in the environment works for me, and should be easy to include in container setups like k8s or Docker.
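
For the container case, a minimal sketch of passing the variable through Docker (the image name is the official `ollama/ollama`; whether recent images already default to `0.0.0.0` inside the container is an assumption worth checking):

```shell
# Run the server with an explicit bind address and publish the API port.
# (-e OLLAMA_HOST=0.0.0.0 may be redundant on images that already set it.)
docker run -d --name ollama \
  -e OLLAMA_HOST=0.0.0.0 \
  -p 11434:11434 \
  -v ollama:/root/.ollama \
  ollama/ollama
```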


@vRobM commented on GitHub (Oct 5, 2023):

It would be nice to have it be a command-line argument.

The port can be changed through the same variable, as there doesn't appear to be an OLLAMA_PORT:

`export OLLAMA_HOST=0.0.0.0:8080`


@jtoy commented on GitHub (Oct 6, 2023):

Agreed, it should be a CLI option.


@byteconcepts commented on GitHub (Oct 23, 2023):

In the /etc/systemd/system/ollama.service file, you may also add

`Environment="OLLAMA_HOST=0.0.0.0:8080"`

and the Ollama system service will listen on all interfaces/IPs, so you can reach it from any machine on the network.

In a console you can reach it, for example, like this:

`OLLAMA_HOST="127.0.0.1:8080" ollama list`


@jmorganca commented on GitHub (Oct 26, 2023):

Hi @vRobM, this should be configurable with `OLLAMA_HOST` now. I'll close this issue but please do re-open it if it's not solved.


@mattbisme commented on GitHub (Dec 21, 2023):

Where do you set `Environment` when using `Ollama.app` on macOS?


@NeuralEmpowerment commented on GitHub (Jan 5, 2024):

> Where do you set `Environment` when using `Ollama.app` on macOS?

I'm also curious, as I'm having trouble connecting to Ollama from another front-end on my network and I haven't been able to get it working with `export OLLAMA_HOST=0.0.0.0:8080` or `export OLLAMA_HOST=0.0.0.0:11434` 🤔


@Ectalite commented on GitHub (Jan 13, 2024):

> I'm also curious, as I'm having trouble connecting to Ollama from another front-end on my network and I haven't been able to get it working with `export OLLAMA_HOST=0.0.0.0:8080` or `export OLLAMA_HOST=0.0.0.0:11434` 🤔

You have to use `launchctl setenv OLLAMA_HOST 0.0.0.0:8080` and restart Ollama and the terminal.
https://stackoverflow.com/questions/603785/environment-variables-in-mac-os-x


@AnsenIO commented on GitHub (Feb 18, 2024):

To allow listening on all local interfaces, you can follow these steps:

1. If you're running Ollama directly from the command line, use `OLLAMA_HOST=0.0.0.0 ollama serve` to specify that it should listen on all local interfaces.

2. Alternatively, edit the service file: open /etc/systemd/system/ollama.service and add the following line inside the [Service] section:

   `Environment="OLLAMA_HOST=0.0.0.0"`

   Once you've made your changes, reload the daemons with `sudo systemctl daemon-reload`, and then restart the service with `sudo systemctl restart ollama`.

For a Docker container, add the following to your docker-compose.yml file:

```yaml
extra_hosts:
  - "host.docker.internal:host-gateway"
```

This will allow the Ollama instance to be accessible on any of the host's network interfaces. Once your container is running, you can check whether it's accessible from other containers or the host machine with `curl http://host.docker.internal:11434`.
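
Note that `extra_hosts` is for letting other containers reach an Ollama running on the host. If instead you want to run Ollama itself under Compose, a minimal sketch (service and volume names are illustrative, not from the thread):

```yaml
services:
  ollama:
    image: ollama/ollama
    environment:
      - OLLAMA_HOST=0.0.0.0   # bind inside the container on all interfaces
    ports:
      - "11434:11434"         # publish the API to the host
    volumes:
      - ollama:/root/.ollama  # persist downloaded models
volumes:
  ollama:
```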


@shamitv commented on GitHub (Feb 25, 2024):

Is there a way to do something similar for Windows?

EDIT: Setting OLLAMA_HOST works on the Windows command line:

```
set OLLAMA_HOST=0.0.0.0
ollama serve
```

Windows will prompt for firewall permission; allow that.

Setting this env var at the system level should work as well.


@mattbisme commented on GitHub (Feb 25, 2024):

> > I'm also curious, as I'm having trouble connecting to Ollama from another front-end on my network and I haven't been able to get it working with `export OLLAMA_HOST=0.0.0.0:8080` or `export OLLAMA_HOST=0.0.0.0:11434` 🤔
>
> You have to use `launchctl setenv OLLAMA_HOST 0.0.0.0:8080` and restart Ollama and the terminal. https://stackoverflow.com/questions/603785/environment-variables-in-mac-os-x

If I'm running `ollama serve`, this works fine. However, is there a way to get the Ollama.app to respect this env variable? The only way I can utilize this is with Terminal running.


@ghost commented on GitHub (Mar 4, 2024):

> Is there a way to do something similar for Windows?
>
> EDIT: Setting OLLAMA_HOST works on the Windows command line:
>
> ```
> set OLLAMA_HOST=0.0.0.0
> ollama serve
> ```
>
> Windows will prompt for firewall permission; allow that.
>
> Setting this env var at the system level should work as well.

amazing, thanks!!


@iliasch-dev commented on GitHub (Mar 6, 2024):

I did set `OLLAMA_HOST=0.0.0.0`, but now I cannot access it locally, only remotely.


@Gdesau commented on GitHub (Mar 12, 2024):

I had the same issue, but I'm working in Colab. How can I fix it? Below you can find the error:

```
ConnectionRefusedError                    Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/urllib3/connection.py in _new_conn(self)
    202         try:
--> 203             sock = connection.create_connection(
    204                 (self._dns_host, self.port),

27 frames
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

NewConnectionError                        Traceback (most recent call last)
NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7c3cec3b9060>: Failed to establish a new connection: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

MaxRetryError                             Traceback (most recent call last)
MaxRetryError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7c3cec3b9060>: Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

ConnectionError                           Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
    517                 raise SSLError(e, request=request)
    518 
--> 519             raise ConnectionError(e, request=request)
    520 
    521         except ClosedPoolError as e:

ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7c3cec3b9060>: Failed to establish a new connection: [Errno 111] Connection refused'))
```
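
A likely cause (an inference from the traceback, not something confirmed in the thread): nothing is listening on localhost:11434 because `ollama serve` was never started in the Colab VM. A minimal sketch, assuming the ollama binary is already installed in the runtime:

```python
import os
import subprocess
import time

# Start the server in the background from a notebook cell.
os.environ["OLLAMA_HOST"] = "0.0.0.0"
server = subprocess.Popen(["ollama", "serve"])
time.sleep(5)  # give the server a moment to come up before issuing requests
```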

@ksylvan commented on GitHub (Mar 18, 2024):

> export OLLAMA_HOST=0.0.0.0:8080

To do the same in Windows PowerShell, you can do:

```
$env:OLLAMA_HOST="0.0.0.0"
ollama serve
```

@nickian commented on GitHub (Mar 20, 2024):

FYI, `setx OLLAMA_HOST 0.0.0.0` will have Windows remember the variable, so you don't have to launch it from the command line. The ollama.exe app seems to remember the setting fine for the Windows user.


@ksylvan commented on GitHub (Mar 31, 2024):

When you set `OLLAMA_HOST=0.0.0.0` in the environment to ensure Ollama binds to all interfaces (including the internal WSL network), you need to make sure to reset `OLLAMA_HOST` appropriately before trying to use any ollama-python calls; otherwise they will fail (both in native Windows and in WSL):

```
Python 3.12.2 (tags/v3.12.2:6abddd9, Feb  6 2024, 21:26:36) [MSC v.1937 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import ollama
>>> c = ollama.Client()
>>> l = c.list()
Traceback (most recent call last):
  File "C:\Users\kayvan\AppData\Local\pipx\pipx\venvs\fabric\Lib\site-packages\httpx\_transports\default.py", line 66, in map_httpcore_exceptions
    yield
  File "C:\Users\kayvan\AppData\Local\pipx\pipx\venvs\fabric\Lib\site-packages\httpx\_transports\default.py", line 228, in handle_request
    resp = self._pool.handle_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\kayvan\AppData\Local\pipx\pipx\venvs\fabric\Lib\site-packages\httpcore\_sync\connection_pool.py", line 216, in handle_request
    raise exc from None
  File "C:\Users\kayvan\AppData\Local\pipx\pipx\venvs\fabric\Lib\site-packages\httpcore\_sync\connection_pool.py", line 196, in handle_request
    response = connection.handle_request(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\kayvan\AppData\Local\pipx\pipx\venvs\fabric\Lib\site-packages\httpcore\_sync\connection.py", line 99, in handle_request
    raise exc
  File "C:\Users\kayvan\AppData\Local\pipx\pipx\venvs\fabric\Lib\site-packages\httpcore\_sync\connection.py", line 76, in handle_request
    stream = self._connect(request)
             ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\kayvan\AppData\Local\pipx\pipx\venvs\fabric\Lib\site-packages\httpcore\_sync\connection.py", line 122, in _connect
    stream = self._network_backend.connect_tcp(**kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\kayvan\AppData\Local\pipx\pipx\venvs\fabric\Lib\site-packages\httpcore\_backends\sync.py", line 205, in connect_tcp
    with map_exceptions(exc_map):
  File "C:\Users\kayvan\scoop\apps\python\3.12.2\Lib\contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "C:\Users\kayvan\AppData\Local\pipx\pipx\venvs\fabric\Lib\site-packages\httpcore\_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ConnectError: [WinError 10049] The requested address is not valid in its context

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\kayvan\AppData\Local\pipx\pipx\venvs\fabric\Lib\site-packages\ollama\_client.py", line 328, in list
    return self._request('GET', '/api/tags').json()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\kayvan\AppData\Local\pipx\pipx\venvs\fabric\Lib\site-packages\ollama\_client.py", line 68, in _request
    response = self._client.request(method, url, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\kayvan\AppData\Local\pipx\pipx\venvs\fabric\Lib\site-packages\httpx\_client.py", line 814, in request
    return self.send(request, auth=auth, follow_redirects=follow_redirects)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\kayvan\AppData\Local\pipx\pipx\venvs\fabric\Lib\site-packages\httpx\_client.py", line 901, in send
    response = self._send_handling_auth(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\kayvan\AppData\Local\pipx\pipx\venvs\fabric\Lib\site-packages\httpx\_client.py", line 929, in _send_handling_auth
    response = self._send_handling_redirects(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\kayvan\AppData\Local\pipx\pipx\venvs\fabric\Lib\site-packages\httpx\_client.py", line 966, in _send_handling_redirects
    response = self._send_single_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\kayvan\AppData\Local\pipx\pipx\venvs\fabric\Lib\site-packages\httpx\_client.py", line 1002, in _send_single_request
    response = transport.handle_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\kayvan\AppData\Local\pipx\pipx\venvs\fabric\Lib\site-packages\httpx\_transports\default.py", line 227, in handle_request
    with map_httpcore_exceptions():
  File "C:\Users\kayvan\scoop\apps\python\3.12.2\Lib\contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "C:\Users\kayvan\AppData\Local\pipx\pipx\venvs\fabric\Lib\site-packages\httpx\_transports\default.py", line 83, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectError: [WinError 10049] The requested address is not valid in its context
```

The same call with `OLLAMA_HOST` set to `localhost` works.
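
A sketch of the workaround described later in the thread: give the Python client an explicit destination instead of letting it inherit `OLLAMA_HOST=0.0.0.0`, which is a bind address, not a valid address to connect *to* on Windows (hence WinError 10049):

```python
import ollama

# 0.0.0.0 is where the server listens; clients must dial a concrete address.
client = ollama.Client(host="http://localhost:11434")
print(client.list())
```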


@dillfrescott commented on GitHub (Apr 2, 2024):

```shell
[cross@cross-pc ~]$ sudo netstat -tunlp | grep 11434
tcp6       0      0 :::11434                :::*                    LISTEN      565114/ollama
```

How come it's only listening on IPv6? I set `OLLAMA_HOST=0.0.0.0:11434`.


@piclez commented on GitHub (Apr 2, 2024):

@dillfrescott don't set the port together, only the IP:
`OLLAMA_HOST=0.0.0.0`


@dillfrescott commented on GitHub (Apr 2, 2024):

Gotcha. Thank you.


@min918 commented on GitHub (Apr 10, 2024):

> ```shell
> [cross@cross-pc ~]$ sudo netstat -tunlp | grep 11434
> tcp6       0      0 :::11434                :::*                    LISTEN      565114/ollama
> ```
>
> How come it's only listening on IPv6? I set `OLLAMA_HOST=0.0.0.0:11434`.

I get the same problem..


@Verfinix commented on GitHub (Apr 10, 2024):

Can anyone advise how to get it working on IPv4?


@dillfrescott commented on GitHub (Apr 10, 2024):

I have no clue. I removed the port from the env variable like @piclez said, and it's still only listening on IPv6. And yes, I've even rebooted the machine many times between then and now.


@AnsenIO commented on GitHub (Apr 12, 2024):

If you are on Linux, it's best to follow the second approach quoted below, especially if you reboot the machine: ensure that the service is enabled (to start automatically) and start it with systemctl. If instead you run it manually using `ollama serve`, use the first method.

Check whether it is enabled and active with `systemctl status ollama`:

```
ollama.service - Ollama Service
     Loaded: loaded (/etc/systemd/system/ollama.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2024-04-05 07:19:07 WEST; 1 week 0 days ago
...
```

> To allow listening on all local interfaces, you can follow these steps:
>
> 1. If you're running Ollama directly from the command line, use `OLLAMA_HOST=0.0.0.0 ollama serve` to specify that it should listen on all local interfaces.
>
> 2. Alternatively, edit the service file: open /etc/systemd/system/ollama.service and add the following line inside the [Service] section:
>
>    `Environment="OLLAMA_HOST=0.0.0.0"`
>
>    Once you've made your changes, reload the daemons with `sudo systemctl daemon-reload`, and then restart the service with `sudo systemctl restart ollama`.
>
> For a Docker container, add the following to your docker-compose.yml file:
>
> ```yaml
> extra_hosts:
>   - "host.docker.internal:host-gateway"
> ```
>
> This will allow the Ollama instance to be accessible on any of the host's network interfaces. Once your container is running, you can check whether it's accessible from other containers or the host machine with `curl http://host.docker.internal:11434`.


@mattbisme commented on GitHub (Apr 13, 2024):

> Can anyone advise how to get it working on IPv4?

`0.0.0.0` is what you would use for IPv4. You would use `::` for IPv6. I suspect you have some other network/configuration issue going on that's preventing you from making requests from outside the host. It's also possible that your Ollama installation is not respecting your env variable for some reason and is, therefore, defaulting to `127.0.0.1`.

Or at least that would be my best guess. @Verfinix
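
One way to check which of those it is (a generic diagnostic, not from the thread): look at what the process actually bound, then test plain IPv4 from another machine:

```shell
# On the server: show listening sockets for the Ollama port
# (ss replaces netstat on most modern distros).
ss -tlnp | grep 11434

# From another machine: test IPv4 reachability directly
# (substitute the server's real IPv4 address).
curl http://192.168.1.10:11434
```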


@coder903 commented on GitHub (Apr 18, 2024):

```
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/home/mike/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sb>
Environment="OLLAMA_HOST=0.0.0.0"

[Install]
WantedBy=default.target
```

Editing ollama.service by adding this line: `Environment="OLLAMA_HOST=0.0.0.0"` worked for me. One note: I just upgraded Ollama and the service file was overwritten to its default state, so I had to redo it.


@darkBuddha commented on GitHub (Apr 20, 2024):

Why is `OLLAMA_HOST=0.0.0.0` in /etc/environment not working? It should persist across systemd service file updates...


@letsruletheworld commented on GitHub (Apr 21, 2024):

Same problem here. Configured

`Environment="OLLAMA_HOST=0.0.0.0"`

but it still only listens on IPv6 instead of IPv4.


@Kmfernan5 commented on GitHub (Apr 23, 2024):

It seems that to permanently set the `OLLAMA_HOST` environment variable on a Windows system, you can use the `setx` command. This tool allows you to define environment variables at the system level or for the current user, ensuring that the settings persist across reboots. Here's how you can do it:

1. **Open Command Prompt as Administrator**: This step is crucial, as setting system-wide environment variables requires administrative privileges. Right-click on the Start button and select "Command Prompt (Admin)", or search for "cmd", right-click it, and choose "Run as administrator".

2. **Set the Environment Variable for All Users**: If you want `OLLAMA_HOST` to be available to all users on the system, use the following command:

   ```cmd
   setx OLLAMA_HOST "0.0.0.0" /M
   ```

   The `/M` switch specifies that the setting should be applied system-wide.

3. **Set the Environment Variable for the Current User Only**: If you only need the environment variable for your user account, omit the `/M`:

   ```cmd
   setx OLLAMA_HOST "0.0.0.0"
   ```

After executing the appropriate `setx` command, you'll need to restart any applications or command prompts that need to access the `OLLAMA_HOST` variable, as changes made with `setx` are only recognized in new sessions.

This method ensures that `OLLAMA_HOST` is set permanently and will survive system reboots, making it available whenever required by the application. Is that about right?
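
A quick way to confirm the variable took effect (a generic tip, not from the thread): `setx` never updates the session it runs in, so check from a new Command Prompt:

```cmd
REM Open a NEW Command Prompt after running setx, then:
echo %OLLAMA_HOST%
```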


@mobile-appz commented on GitHub (Apr 24, 2024):

How do you set this permanently on macOS? Shouldn't this be an option in the UI?

`OLLAMA_HOST=0.0.0.0`


@JOduMonT commented on GitHub (Apr 26, 2024):

> Where do you set `Environment` when using `Ollama.app` on macOS?

macOS is based on UNIX, so like Linux you simply set the environment variable; at the user level that would be in your shell profile. On a Mac your default shell is more likely zsh instead of bash, so `~/.zsh_profile` instead of `~/.bash_profile`.


@mobile-appz commented on GitHub (Apr 27, 2024):

> > Where do you set `Environment` when using `Ollama.app` on macOS?
>
> macOS is based on UNIX, so like Linux you simply set the environment variable; at the user level that would be in your shell profile. On a Mac your default shell is more likely zsh instead of bash, so `~/.zsh_profile` instead of `~/.bash_profile`.

What do you put in the ~/.zsh_profile file? Have you got this to work on macOS with the .app GUI application? Thanks


@jonathanq9 commented on GitHub (May 6, 2024):

I've added the macOS Ollama.app to the "Open at Login" list in Login Items to automatically start at login. To make Ollama.app listen on 0.0.0.0, I have to close it, run `launchctl setenv OLLAMA_HOST "0.0.0.0"` in the terminal, and then restart it. However, the OLLAMA_HOST environment variable doesn't persist after a reboot, and I have to set it manually again. How can I automatically set the environment variable OLLAMA_HOST to 0.0.0.0 before Ollama.app opens at login and have it persist after a reboot?


@AnsenIO commented on GitHub (May 6, 2024):

On macOS, you can set it to auto-launch in the ~/Library folder, either in LaunchAgents or LaunchDaemons.

Here is what Llama3 says about it:

A Mac OS enthusiast!

To set the `OLLAMA=0.0.0.0` variable to be loaded before the automatic launch of OLLAMA on system startup, you can follow these steps:

**Method 1: Using Launch Agents**

1. Open the Terminal app on your Mac.
2. Create a new file in the `~/Library/LaunchAgents` directory using the following command:

```
mkdir -p ~/Library/LaunchAgents
echo '{
    "Label" = "com.yourusername.ollama";
    "ProgramArguments" = ("/usr/bin/env OLLAMA=0.0.0.0");
}' > ~/Library/LaunchAgents/com.yourusername.ollama.plist
```

Replace `yourusername` with your actual username.

3. Load the Launch Agent using the following command:

```
launchctl load ~/Library/LaunchAgents/com.yourusername.ollama.plist
```

4. To make this setting persistent across restarts, you need to add a crontab entry. Open the Terminal and run:

```
crontab -e
```

Then, add the following line at the end of the file:

```
@login /usr/bin/env OLLAMA=0.0.0.0
```

This will load the setting every time you log in.

**Method 2: Using launchd configuration files**

1. Create a new file in the `~/Library/LaunchDaemons` directory using the following command:

```
mkdir -p ~/Library/LaunchDaemons
echo '{
    "Label" = "com.yourusername.ollama";
    "ProgramArguments" = ("/usr/bin/env OLLAMA=0.0.0.0");
}' > ~/Library/LaunchDaemons/com.yourusername.ollama.plist
```

Replace `yourusername` with your actual username.

2. Load the Launch Daemon using the following command:

```
launchctl load ~/Library/LaunchDaemons/com.yourusername.ollama.plist
```

3. To make this setting persistent across restarts, you need to add a crontab entry. Open the Terminal and run:

```
crontab -e
```

Then, add the following line at the end of the file:

```
@reboot /usr/bin/env OLLAMA=0.0.0.0
```

This will load the setting every time your Mac restarts.

Remember to replace `yourusername` with your actual username in both methods.

I hope this helps! Let me know if you have any further questions.


@jonathanq9 commented on GitHub (May 9, 2024):

@AnsenIO Thanks for the reply. I tried the steps provided, but I couldn't get this to work on my Mac for unknown reasons. After a reboot, I can't connect to the Ollama port 11434. I use `launchctl getenv OLLAMA_HOST` to check whether the environment variable is set, but it isn't set after a reboot.

I did a search, and it mentioned that plist files use XML format, so I tried that approach.

```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.yourname.ollama</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/env</string>
        <string>OLLAMA=0.0.0.0</string>
    </array>
</dict>
</plist>
```

I also generated the plist XML for `launchctl setenv OLLAMA_HOST "0.0.0.0"`, but I'm not sure if this is correct.

```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.yourname.ollama</string>
    <key>ProgramArguments</key>
    <array>
        <string>launchctl</string>
        <string>setenv</string>
        <string>OLLAMA_HOST</string>
        <string>0.0.0.0</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
```

I wish Ollama provided a toggle or supported configuration files to set `OLLAMA_HOST=0.0.0.0`. This is more complicated than I originally thought. I'll just set the environment variable manually; not a big deal. I appreciate the help.
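
For what it's worth, the second plist above (the `launchctl setenv` one with `RunAtLoad`) is the closer of the two; one caveat (an assumption, not verified in the thread) is that launchd expects an absolute path as the first `ProgramArguments` entry, i.e. `/bin/launchctl` rather than `launchctl`. A sketch of loading and checking it, using the hypothetical label from the comment:

```shell
# Load the agent once; RunAtLoad re-runs it at every subsequent login.
launchctl load ~/Library/LaunchAgents/com.yourname.ollama.plist

# After the agent has run, the variable should be visible to launchd:
launchctl getenv OLLAMA_HOST   # expect: 0.0.0.0
```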


@ch0c0l8ra1n commented on GitHub (May 11, 2024):

> When you set `OLLAMA_HOST=0.0.0.0` in the environment to ensure Ollama binds to all interfaces (including the internal WSL network), you need to make sure to reset `OLLAMA_HOST` appropriately before trying to use any ollama-python calls; otherwise they will fail (both in native Windows and in WSL):

@ksylvan Could you clarify what you mean by resetting `OLLAMA_HOST` before trying to use ollama-python calls? My code was throwing the "address is not valid in its context" error, but I managed to solve it by launching an Ollama client with an appropriate host.


@ksylvan commented on GitHub (May 12, 2024):

> > When you set `OLLAMA_HOST=0.0.0.0` in the environment to ensure Ollama binds to all interfaces (including the internal WSL network), you need to make sure to reset `OLLAMA_HOST` appropriately before trying to use any ollama-python calls; otherwise they will fail (both in native Windows and in WSL):
>
> @ksylvan Could you clarify what you mean by resetting `OLLAMA_HOST` before trying to use ollama-python calls? My code was throwing the "address is not valid in its context" error, but I managed to solve it by launching an Ollama client with an appropriate host.

@ch0c0l8ra1n The ollama-python client code does not like `OLLAMA_HOST` being set to `0.0.0.0`, even if that's what you did to make sure the Ollama server binds to all interfaces. You must set `OLLAMA_HOST` to something like `localhost` before exercising the Python bindings.


@isvicy commented on GitHub (May 12, 2024):

We should not edit the Ollama daemon service file directly. What we should do is create an extra config file, like what Docker does in [its doc](https://docs.docker.com/config/daemon/systemd/#httphttps-proxy). In summary, you do the following:

```
sudo mkdir /etc/systemd/system/ollama.service.d
sudo vim /etc/systemd/system/ollama.service.d/http-host.conf
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

The content of the http-host.conf file should be:

```
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```

After all this, you can tell Ollama is indeed serving on all interfaces via `sudo systemctl status ollama`; there will be logs like `Listening on [::]:11434`.
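
Two more ways to confirm the drop-in was picked up (standard systemd commands, not from the thread):

```shell
# Show the unit file together with any .d/*.conf drop-ins that extend it:
systemctl cat ollama

# Print the effective environment assignments for the service:
sudo systemctl show -p Environment ollama
```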


@airtonix commented on GitHub (May 20, 2024):

> We should not edit the Ollama daemon service file directly. What we should do is create an extra config file, like what Docker does in [its doc](https://docs.docker.com/config/daemon/systemd/#httphttps-proxy). In summary, you do the following:
>
> ```
> sudo mkdir /etc/systemd/system/ollama.service.d
> sudo vim /etc/systemd/system/ollama.service.d/http-host.conf
> sudo systemctl daemon-reload
> sudo systemctl restart ollama
> ```
>
> The content of the http-host.conf file should be:
>
> ```
> [Service]
> Environment="OLLAMA_HOST=0.0.0.0"
> ```
>
> After all this, you can tell Ollama is indeed serving on all interfaces via `sudo systemctl status ollama`; there will be logs like `Listening on [::]:11434`.

No need for alarm; this already happens when you run `systemctl edit ollama.service`.


@nuaimat commented on GitHub (May 31, 2024):

> We should not edit the Ollama daemon service file directly. What we should do is create an extra config file, like what Docker does in [its doc](https://docs.docker.com/config/daemon/systemd/#httphttps-proxy). In summary, you do the following:
>
> ```
> sudo mkdir /etc/systemd/system/ollama.service.d
> sudo vim /etc/systemd/system/ollama.service.d/http-host.conf
> sudo systemctl daemon-reload
> sudo systemctl restart ollama
> ```
>
> The content of the http-host.conf file should be:
>
> ```
> [Service]
> Environment="OLLAMA_HOST=0.0.0.0"
> ```
>
> After all this, you can tell Ollama is indeed serving on all interfaces via `sudo systemctl status ollama`; there will be logs like `Listening on [::]:11434`.

`Listening on [::]:11434` does not mean all interfaces; it means IPv6 interfaces.

Compare your output to the one from `sudo systemctl status sshd.service`, whose output contains:

```
May 30 21:14:44 server1 sshd[697]: Server listening on 0.0.0.0 port 22.
May 30 21:14:44 server1 sshd[697]: Server listening on :: port 22.
```

which is reflected in:

```
$ sudo netstat -nutlp  | grep '22\|11434'
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      697/sshd: /usr/sbin
tcp6       0      0 :::22                   :::*                    LISTEN      697/sshd: /usr/sbin
tcp6       0      0 :::11434                :::*                    LISTEN      2483/ollama
```

I believe this is a bug and needs a fix.


@grandoth commented on GitHub (May 31, 2024):

If you're running into this on WSL, take a look at issue #1431. It turns out that despite `tcp6 0 0 :::11434` being reported when binding to `0.0.0.0`, it was still actually bound to `eth0` on IPv4 as well (a `17x.x.x.x` address for the WSL VM), at least in my case. I could put that `eth0` ip `address:port` in my browser and access Ollama. I added the suggested firewall rules and port proxy, and I can now get to it through my host's IP.

Note: I was able to set `Environment="OLLAMA_HOST=0.0.0.0:11434"` in the override .conf file (including the port).


@HitLuca commented on GitHub (Jun 6, 2024):

As @nuaimat mentioned, setting OLLAMA_HOST=0.0.0.0 doesn't make ollama serve requests from the network over IPv4

@nuaimat commented on GitHub (Jun 6, 2024):

@HitLuca a workaround is to disable IPv6 on your machine.

@HitLuca commented on GitHub (Jun 6, 2024):

@HitLuca a workaround is to disable ipv6 on your machine.

Good to know, I'll try it out on the Google Cloud VM

@oskapt commented on GitHub (Jul 7, 2024):

For everyone freaking out that netstat shows tcp6: unless you specify that something should only listen on IPv6, the tcp6 notation includes IPv4 by default. You can verify this with nc -v x.x.x.x 11434, using your IPv4 address for x.x.x.x.

All IPv4 addresses exist within the IPv6 address space.

A simple search on Google or SO will show questions about this going back to 2014. Do your homework.

And whoever suggested disabling IPv6 as a workaround is wrong. You don't disable something as a workaround when you don't know why something isn't working. That doesn't fix anything. It only tells the world that you don't know what you're doing and leaves your system in a state that you don't understand.

@lmaddox commented on GitHub (Jul 29, 2024):

I forgot my workstation has a firewall. I'm leaving this here for anyone else who needs a reminder:

sudo iptables -A INPUT -p tcp --dport 11434 -j ACCEPT
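
Worth noting: rules added with iptables -A are lost on reboot. On Debian/Ubuntu, one way to persist them (assuming the iptables-persistent package is acceptable):

sudo apt install iptables-persistent
sudo netfilter-persistent save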

@alansenairj commented on GitHub (Aug 14, 2024):

I'll relate my experience here.
My Ollama is running as a Linux service on my PC.
Open WebUI is running in a container on my NAS.

As mentioned above, I put one more variable in the service file. [screenshot]

If ollama gets an update I will have to add it again, or do some extra config to keep this variable in the service.

One thing causing some confusion is that netstat lists the TCP port as IPv6. I am using Fedora, where ss is used instead of netstat. [screenshot]

Then, to test, I send a request to my local service:
curl http://localhost:11434/api/generate -d '{ "model": "llama2", "prompt": "Tell me a joke." }' | jq .

I also use ollama's logs to check that it is working:
journalctl -u ollama -f
[screenshot]

It works locally and accepts requests on its API.

Next I worked on the Open WebUI frontend. It runs on my NAS, not on my local PC.

I was getting connection errors, and the problem was configuring Docker the way Open WebUI expects; it uses a backend so the API isn't exposed directly.

docker run -d -p 777:8080 -e OLLAMA_BASE_URL=http://192.168.129.106:11434 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

--add-host=host.docker.internal:host-gateway:
This option adds a new host entry (host.docker.internal) that points to the gateway IP address (host-gateway). This allows the container to reach the host machine by hostname (host.docker.internal) instead of by IP address.

This is my PC with the graphics card, running the ollama service at OLLAMA_BASE_URL=http://192.168.129.106:11434.

As you can see, it is processing using my GPU. [screenshot]

@ThatCoffeeGuy commented on GitHub (Aug 17, 2024):

We should not edit the ollama daemon service file directly. What we should do is create an extra config file, like what Docker does in its docs. In summary, do the following:

sudo mkdir /etc/systemd/system/ollama.service.d
sudo vim /etc/systemd/system/ollama.service.d/http-host.conf
sudo systemctl daemon-reload
sudo systemctl restart ollama

the content of the http-host.conf file should be:

[Service]
Environment="OLLAMA_HOST=0.0.0.0"

After all this, you can tell ollama is indeed serving on all interfaces via sudo systemctl status ollama; there will be logs like Listening on [::]:11434

I did this months ago; today I updated ollama and wasted half an hour troubleshooting - it seems the update simply rewrote my systemd file.

@ThatCoffeeGuy commented on GitHub (Aug 31, 2024):

Today I upgraded to 0.3.8 and, once again, it wiped the Environment="OLLAMA_HOST=0.0.0.0" variable from the systemd file. Please make sure the script respects already-defined parameters or offers an interactive way to handle it (Overwrite Y/N?).

@mdlmarkham commented on GitHub (Sep 2, 2024):

Same here - it would be great if these settings weren't overwritten.

@nuaimat commented on GitHub (Sep 2, 2024):

@ThatCoffeeGuy @mdlmarkham there's a problem with your approach; do the following instead:

  1. sudo systemctl edit ollama.service
  2. In the resulting editor modify the lines like this:
## Editing /etc/systemd/system/ollama.service.d/override.conf                           
### Anything between here and the comment below will become the new contents of the file 
[Service]
Environment="OLLAMA_HOST=0.0.0.0" "OLLAMA_KEEP_ALIVE=-1" "OLLAMA_MAX_LOADED_MODELS=4"
        
### Lines below this comment will be discarded

This will result in an override systemd file that will survive across ollama upgrades.

Don't ever manually edit the original systemd file.
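
For scripted setups, the same override can be created non-interactively; a minimal sketch, assuming the standard drop-in path:

sudo mkdir -p /etc/systemd/system/ollama.service.d
printf '[Service]\nEnvironment="OLLAMA_HOST=0.0.0.0"\n' | sudo tee /etc/systemd/system/ollama.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart ollama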

@liudonghua123 commented on GitHub (Sep 13, 2024):

I added the Environment="OLLAMA_HOST=0.0.0.0" line to /etc/systemd/system/ollama.service and reloaded the systemd configuration; it then listened on all network interfaces.

Also note that the HOME environment variable is updated for the ollama daemon.

Details
(client) root@xxs:~/ollama# vim /etc/systemd/system/ollama.service
(client) root@xxs:~/ollama# cat /etc/systemd/system/ollama.service
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/root/.venvs/client/bin:/opt/miniconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/cuda-11.0/bin"
Environment="OLLAMA_HOST=0.0.0.0"

[Install]
WantedBy=default.target
(client) root@xxs:~/ollama# systemctl daemon-reload
(client) root@xxs:~/ollama# service ollama restart
(client) root@xxs:~/ollama# netstat -nap|grep 11434
tcp6       0      0 :::11434                :::*                    LISTEN      2848/ollama         
(client) root@xxs:~/ollama# 
(client) root@xxs:~/ollama# ollama list
NAME           	ID          	SIZE  	MODIFIED    
qwen2-math:7b  	28cc3a337734	4.4 GB	6 hours ago	
qwen2:0.5b     	6f48b936a09f	352 MB	6 hours ago	
qwen2:7b       	dd314f039b9d	4.4 GB	6 hours ago	
qwen:14b       	80362ced6553	8.2 GB	6 hours ago	
llama3.1:latest	42182419e950	4.7 GB	6 hours ago	
(client) root@xxs:~/ollama# ollama ps
NAME    	ID          	SIZE  	PROCESSOR	UNTIL              
qwen2:7b	dd314f039b9d	5.7 GB	100% GPU 	3 minutes from now	
(client) root@xxs:~/ollama# 
(client) root@xxs:~/ollama# ps -ef|grep "ollama serve"
ollama    2848     1 11 21:37 ?        00:01:09 /usr/local/bin/ollama serve
root      4719  6504  0 21:48 pts/0    00:00:00 grep --color=auto ollama serve
(client) root@xxs:~/ollama# cat /proc/2848/environ 
LANG=en_US.UTF-8PATH=/root/.venvs/client/bin:/opt/miniconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/cuda-11.0/binHOME=/usr/share/ollamaLOGNAME=ollamaUSER=ollamaSHELL=/bin/falseINVOCATION_ID=c02e04d1a03c472995319719ca4e4f16JOURNAL_STREAM=9:94334642OLLAMA_HOST=0.0.0.0(client) root@xxs:~/ollama# 
(client) root@xxs:~/ollama# tree /usr/share/ollama/.ollama/
/usr/share/ollama/.ollama/
├── id_ed25519
├── id_ed25519.pub
└── models
    ├── blobs
    │   ├── sha256-007d4e6a46af30a52ff3266d9aa1ac66926949c6f5bd4bd155eabfa43085138c
    │   ├── sha256-029b87c88d24e9c879df2090f9fd5c88d9290860d37f481faef4bcb9e6077057
    │   ├── sha256-0ba8f0e314b4264dfd19df045cde9d4c394a52474bf92ed6a3de22a4ca31a177
    │   ├── sha256-1a4c3c319823fdabddb22479d0b10820a7a39fe49e45c40bae28fbe83926dc14
    │   ├── sha256-1da0581fd4ce92dcf5a66b1da737cf215d8dcf25aa1b98b44443aaf7173155f5
    │   ├── sha256-2184ab82477bc33a5e08fa209df88f0631a19e686320cce2cfe9e00695b2f0e6
    │   ├── sha256-43070e2d4e532684de521b885f385d0841030efa2b1a20bafb76133a5e1379c1
    │   ├── sha256-43f7a214e5329f672bb05404cfba1913cbb70fdaa1a17497224e1925046b0ed5
    │   ├── sha256-56bb8bd477a519ffa694fc449c2413c6f0e1d3b1c88fa7e3c9d88d3ae49d4dcb
    │   ├── sha256-62fbfd9ed093d6e5ac83190c86eec5369317919f4b149598d2dbb38900e9faef
    │   ├── sha256-648f809ced2bdb9f26780f2f1cd9b4787804a4796b256ac5c7da05f4fa1729e6
    │   ├── sha256-75357d685f238b6afd7738be9786fdafde641eb6ca9a3be7471939715a68a4de
    │   ├── sha256-77c91b422cc9fce701d401b0ecd74a2d242dafd84983aa13f0766e9e71936db2
    │   ├── sha256-7c7b8e244f6aa1ac8c32b74f56d42c41a0364dd2dabed8d9c6030a862e805b54
    │   ├── sha256-857e2f21d3ffef28100d6799ae3fc8d5c9125d5434a041b6a741dd123ba2b0fa
    │   ├── sha256-8de95da68dc485c0889c205384c24642f83ca18d089559c977ffc6a3972a71a8
    │   ├── sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
    │   ├── sha256-948af2743fc78a328dcb3b0f5a31b3d75f415840fdb699e8b1235978392ecf85
    │   ├── sha256-c156170b718ec29139d3653d40ed1986fd92fb7e0959b5c71f3c48f62e6636f4
    │   ├── sha256-cf595c6f48406e8d26b9e8168ba439b36ac0792ae97906affcc6e3e1fc00f762
    │   ├── sha256-de0334402b975e19dd48eb43a13f7534772fb5b4a054447f8f6a861b87ec5799
    │   └── sha256-f02dd72bb2423204352eabc5637b44d79d17f109fdb510a7c51455892aa2d216
    └── manifests
        └── registry.ollama.ai
            └── library
                ├── llama3.1
                │   └── latest
                ├── qwen
                │   └── 14b
                ├── qwen2
                │   ├── 0.5b
                │   └── 7b
                └── qwen2-math
                    └── 7b

9 directories, 29 files
(client) root@xxs:~/ollama#

@LiMingchen159 commented on GitHub (Sep 19, 2024):

[quotes @liudonghua123's comment above in full]

It seems that your ollama service is only listening on the IPv6 port? Can you reach the ollama service using your IPv4 address and port?

@liudonghua123 commented on GitHub (Sep 19, 2024):

[quotes the full exchange above]

Even though netstat only shows IPv6 listening info, IPv4 actually works for me too.

(client) root@xxs:~/ollama# netstat -nap|grep 11434
tcp6       0      0 :::11434                :::*                    LISTEN      2848/ollama         
(client) root@xxs:~/ollama#

@oskapt commented on GitHub (Sep 30, 2024):

One thing causing some confusion is that netstat lists the TCP port as IPv6.

All IPv4 space fits within IPv6 space, so if you have IPv6 enabled on your system, netstat will list tcp6 ports for everything that's listening.

@ajfriesen commented on GitHub (Nov 11, 2024):

I ran into the netstat confusion twice as well.
The first time I wrote it down in a blog post.
The second time I googled it years later and found my own blog post.

TL;DR:

  • It turns out the socket itself is an IPv6 socket.
  • There is a special IPv6 address range that maps to IPv4 addresses.
  • That means all IPv4 addresses are also IPv6 addresses.
  • The sockets work with IPv4, but since they are IPv6 sockets they are listed as tcp6 in netstat.

Source:

https://www.ajfriesen.com/netstat-shows-tcp6-on-ipv4-only-host/
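
On Linux the dual-stack behaviour described above is controlled by a sysctl; the mapped range is ::ffff:0:0/96 (e.g. ::ffff:192.168.1.50, a placeholder address). A quick check:

sysctl net.ipv6.bindv6only   # 0 (the default) means IPv6 wildcard sockets also accept IPv4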

@VishwaS-22 commented on GitHub (Dec 14, 2024):

I'm using an Ubuntu EC2 instance. I tried adding the env var for 0.0.0.0, but only IPv6 shows as open, and I wasn't able to send requests from Postman on my local machine.

ubuntu@ip-172-31-1-71:~$ sudo cat /etc/systemd/system/ollama.service
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
Environment="OLLAMA_HOST=0.0.0.0"


[Install]
WantedBy=default.target
ubuntu@ip-172-31-1-71:~$ sudo netstat -tuln | grep 11434
tcp6       0      0 :::11434                :::*                    LISTEN
ubuntu@ip-172-31-1-71:~$

@bonyiii commented on GitHub (Dec 31, 2024):

[quotes @VishwaS-22's comment above in full]

On my machine, it began functioning properly once I opened port 11434 in the firewall.
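
On EC2 there are two layers to open: the instance's own firewall (if one is enabled) and the security group. A sketch, where sg-0123456789 and the CIDR are placeholders to replace with your own values:

sudo ufw allow 11434/tcp
aws ec2 authorize-security-group-ingress --group-id sg-0123456789 --protocol tcp --port 11434 --cidr 203.0.113.0/24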

@Verizane commented on GitHub (Jan 11, 2025):

In case someone gets here asking themselves how to make ollama serve to the network when starting from a terminal, without using a service, on Debian Linux: in my case simply setting OLLAMA_HOST via

OLLAMA_HOST=0.0.0.0

did not work. I had to set it this way:

set OLLAMA_HOST "0.0.0.0"

Edit: in another attempt this did not work either, so I tried the following and it worked:

OLLAMA_HOST="http://0.0.0.0:11434" ollama serve
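
The likely explanation: a bare OLLAMA_HOST=0.0.0.0 on its own line sets a shell variable without exporting it, so a later ollama serve never sees it (and set OLLAMA_HOST "0.0.0.0" is fish-shell syntax, which does nothing useful in bash). In bash, either of these works:

export OLLAMA_HOST=0.0.0.0   # export so child processes inherit it
ollama serve

OLLAMA_HOST=0.0.0.0 ollama serve   # or scope it to the one command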

@Ghania-Sarwar commented on GitHub (Jan 31, 2025):

I am trying to link my app to an ollama model using the prebuilt Docker image, but I am getting this error. Locally my app is linked with ollama and everything works fine, but in Docker the issue persists.

requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe4c0193310>: Failed to establish a new connection: [Errno 111] Connection refused'))
Traceback:
File "/app/locallama.py", line 57, in
summary = chain.invoke({'issues': issues_text}).strip()
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3024, in invoke
input = context.run(step.invoke, input, config)
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 390, in invoke
self.generate_prompt(
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 755, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 950, in generate
output = self._generate_helper(
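
The usual cause of this traceback: inside a container, localhost is the container itself, not the machine running ollama. A hedged fix, assuming the app reads its ollama URL from an environment variable (the variable name and the my-app image are placeholders):

docker run --add-host=host.docker.internal:host-gateway -e OLLAMA_BASE_URL=http://host.docker.internal:11434 my-app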

@xelemorf commented on GitHub (May 24, 2025):

The above settings work, but they run Ollama as a process under a command prompt. To keep using the desktop app, the following worked for me on Windows:

  • Exit the Ollama desktop app.
  • Execute the following in an elevated command prompt:
    setx OLLAMA_HOST "0.0.0.0" /M
  • Start the Ollama desktop app.

@meraklimaymun commented on GitHub (May 27, 2025):

To allow listening on all local interfaces, you can follow these steps:

  1. If you’re running Ollama directly from the command line, use the
    OLLAMA_HOST=0.0.0.0 ollama serve command to specify that it should listen on all local interfaces.

Or

  2. Edit the service file: open /etc/systemd/system/ollama.service and add the following line inside the [Service] section:

Environment="OLLAMA_HOST=0.0.0.0"

Once you’ve made your changes, reload the daemons with sudo systemctl daemon-reload, then restart the service with sudo systemctl restart ollama.

For a Docker container, add the following to your docker-compose.yml file:

extra_hosts:
  - "host.docker.internal:host-gateway"

This will allow the Ollama instance to be accessible on any of the host’s network interfaces. Once your container is running, you can check whether it’s accessible from other containers or from the host machine with: curl http://host.docker.internal:11434

I'm using Open WebUI with Docker on a Raspberry Pi 5, and I somehow couldn't find where the docker-compose.yml file is located.

@AnsenIO commented on GitHub (Jul 13, 2025):

[quotes @meraklimaymun's comment above in full]

How did you start Open WebUI? I guess you did a git clone of the Open WebUI repository; the docker-compose file is inside it, and you can tweak it.
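
If it was started with a plain docker run command (like the one quoted earlier in this thread), there is no compose file on disk; a minimal sketch of an equivalent docker-compose.yml, where the host port mapping is an assumption, would be:

services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://host.docker.internal:11434
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - open-webui:/app/backend/data
    restart: always
volumes:
  open-webui: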

@meghuizen commented on GitHub (Jul 21, 2025):

If you're using Ubuntu / systemd and want the changes to survive ollama upgrades, create an override file like the following:

File location: /etc/systemd/system/ollama.service.d/listen-all.conf
File content:

[Service]
Environment="OLLAMA_HOST=0.0.0.0"
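
After creating it, reload and confirm the drop-in is picked up (systemctl cat prints the unit together with any overrides):

sudo systemctl daemon-reload
sudo systemctl restart ollama
systemctl cat ollama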