[GH-ISSUE #1431] [WSL 2] Exposing ollama via 0.0.0.0 on local network #762

Closed
opened 2026-04-12 10:26:33 -05:00 by GiteaMirror · 23 comments

Originally created by @bocklucas on GitHub (Dec 8, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1431

Hello! Just spent the last 3 or so hours struggling to figure this out and thought I'd leave my solution here to spare the next person who tries this out as well.

Basically, I was trying to run ollama serve in WSL 2 (setup was insanely quick and easy) and then access it on my local network.

However, when I tried to do this, I couldn't reach ollama in WSL 2 from the network: I was able to access it via 127.0.0.1:11434, but not 0.0.0.0:11434. Following the excellent documentation (https://github.com/jmorganca/ollama/blob/main/docs/faq.md) and setting the OLLAMA_HOST and OLLAMA_ORIGINS environment variables didn't help me.

After much digging and debugging, I discovered that by default, "WSL 2 has a virtualized ethernet adapter with its own unique IP address." - Microsoft Documentation (https://learn.microsoft.com/en-us/windows/wsl/networking)

NOTE
It's important to keep in mind that I haven't actually tried this solution myself from scratch; this is my recollection of the steps I took over the last several hours to get this working. Anyone encountering the same problem I did, please feel free to post what did / didn't work.

My solution to get this working and accessible on my network was as follows:

  1. Get the IP of the WSL 2 virtualized ethernet adapter, which can be done by running ifconfig in WSL 2 and getting the IP from the eth0 field; it should be under inet:
$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 170.20.138.60

in this case, the IP address we'll be using is 170.20.138.60
2. In /etc/systemd/system/ollama.service.d/environment.conf, set OLLAMA_HOST to this new IP address. In this example it should look something like this:
/etc/systemd/system/ollama.service.d/environment.conf

[Service]
Environment="OLLAMA_HOST=170.20.138.60:11434"
Environment="OLLAMA_ORIGINS=*"

You'll want to restart your ollama service at this point with

sudo systemctl daemon-reload
sudo systemctl restart ollama
  3. At this point, your ollama service should be pointed at your WSL 2 virtualized ethernet adapter, and the next step is to create a port proxy in order to talk to the WSL 2 virtual machine over your network. Open a PowerShell window in administrator mode. For reference: serverfault thread (https://serverfault.com/questions/1088746/how-to-access-service-running-on-host-from-wsl2-connection-refused)
New-NetFireWallRule -DisplayName 'WSL firewall unlock' -Direction Outbound -LocalPort 11434 -Action Allow -Protocol TCP

New-NetFireWallRule -DisplayName 'WSL firewall unlock' -Direction Inbound -LocalPort 11434 -Action Allow -Protocol TCP

and with the WSL firewall rules in place you should be able to run the following to make a port proxy

netsh interface portproxy add v4tov4 listenport=11434 listenaddress=0.0.0.0 connectport=11434 connectaddress=170.20.138.60

and BAM! You should now be able to access the ollama instance on your network!
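
If you want to sanity-check each layer before testing from another machine, something like the following should confirm everything is wired up (run in an elevated PowerShell window; 170.20.138.60 and 192.168.1.123 are the example addresses from above, so substitute your own):

# list active port proxies; a 0.0.0.0:11434 -> 170.20.138.60:11434 entry should appear
netsh interface portproxy show v4tov4

# hit the WSL 2 address directly, then go through the proxy via the Windows host's LAN IP
curl.exe http://170.20.138.60:11434
curl.exe http://192.168.1.123:11434

Both curl calls should answer with "Ollama is running".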

One caveat I should note: for some weird reason, when I go to http://0.0.0.0:11434 in the browser on the machine that's running ollama, I'm not able to connect to the instance; however, if I go to my machine's IP, http://192.168.1.123:11434, I can access it no problem.

Anyway, hope others find this to be helpful 😁


@BruceMacD commented on GitHub (Dec 12, 2023):

Resolving this for now, since there is no outstanding work to be done but it will still be searchable. Thanks for documenting this.


@easp commented on GitHub (Dec 12, 2023):

@bocklucas

FYI, 0.0.0.0 isn't a host address, it's basically a wildcard for the entire IPv4 Internet. Telling Ollama to listen on that address is telling it to accept connections on any network interface on your computer with an IPv4 address configured, rather than just localhost (127.0.0.1). Trying to open a connection to 0.0.0.0 doesn't work because it's not actually a host address.
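
To see this concretely from inside WSL 2, you can check which local address the server is actually bound to (ss ships with Ubuntu's iproute2 package):

ss -tln | grep 11434

If the Local Address column shows 127.0.0.1:11434, only loopback connections are accepted; 0.0.0.0:11434 means the listener accepts connections arriving on any of the machine's interfaces, which is why you connect to a concrete interface address rather than to 0.0.0.0 itself.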


@bocklucas commented on GitHub (Dec 12, 2023):

> @bocklucas
>
> FYI, 0.0.0.0 isn't a host address, it's basically a wildcard for the entire IPv4 Internet. Telling Ollama to listen on that address is telling it to accept connections on any network interface on your computer with an IPv4 address configured, rather than just localhost (127.0.0.1). Trying to open a connection to 0.0.0.0 doesn't work because it's not actually a host address.

@easp thanks for the clarification, yeah that makes sense 👍


@djtuBIG-MaliceX commented on GitHub (Jan 7, 2024):

One word of warning: For this command:

netsh interface portproxy add v4tov4 listenport=11434 listenaddress=0.0.0.0 connectport=11434 connectaddress=(wsl hostname -I)

Sometimes wsl hostname -I will return more than one IP address, in which case this port proxy entry will not be assigned correctly.

In this instance, you will need to instead put in the relevant IP address of WSL's eth0 virtual NIC and it will work.
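
A small PowerShell sketch of that workaround: take only the first address wsl hostname -I reports (this assumes eth0's address is listed first, which is typical; verify against ifconfig if unsure):

$wslIp = (wsl hostname -I).Trim().Split()[0]   # keep only the first address
netsh interface portproxy add v4tov4 listenport=11434 listenaddress=0.0.0.0 connectport=11434 connectaddress=$wslIp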


@bocklucas commented on GitHub (Jan 8, 2024):

Thanks for raising awareness to this @djtuBIG-MaliceX 🙏


@bocklucas commented on GitHub (Feb 17, 2024):

@djtuBIG-MaliceX thanks again for posting that, just saved me hours of grief. To add further detail to the post, make sure to use the eth0 inet value from the ifconfig command in place of (wsl hostname -I), i.e.

:~$ ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:77ff:fe39:a9a9  prefixlen 64  scopeid 0x20<link>
        ether 02:42:77:39:a9:a9  txqueuelen 0  (Ethernet)
        RX packets 209  bytes 2534508 (2.5 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 181  bytes 31326 (31.3 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.30.140.95  netmask 255.255.240.0  broadcast 172.29.143.255
        inet6 fe80::215:5dff:fe3f:c485  prefixlen 64  scopeid 0x20<link>
        ether 00:15:5d:3f:c4:85  txqueuelen 1000  (Ethernet)
        RX packets 587984  bytes 885013124 (885.0 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 160492  bytes 10732616 (10.7 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

then you'll put

netsh interface portproxy add v4tov4 listenport=11434 listenaddress=0.0.0.0 connectport=11434 connectaddress=172.30.140.95


@bocklucas commented on GitHub (Feb 17, 2024):

Original description updated to reflect changes above ☝️


@ethanlchristensen commented on GitHub (Mar 3, 2024):

You're the goat, thanks @bocklucas !


@LumberJ commented on GitHub (May 11, 2024):

If anyone is having issues connecting to Open WebUI on your LAN, here is how I solved my connectivity issues:

Background:
Ollama - running on WSL 2 Ubuntu
Open WebUI - running inside a Docker container within WSL 2 Ubuntu

Edit /etc/systemd/system/ollama.service (I used nano)
Add (under the service category):

Environment="OLLAMA_HOST=0.0.0.0"

Start Ollama with the command:
ollama serve

Now create the docker run command for Open WebUI (assuming you already have the Docker engine installed). I used this docker run command:

docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name ollama-webui --restart always ghcr.io/open-webui/open-webui:main

If it doesn't work, try this first (it makes sure Docker is running):
sudo service docker start

Next, create an inbound firewall rule on the host machine using windows defender firewall, in my case my server.

Name: ollama-webui

  • (inbound) TCP allow port:8080
  • private network
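
For anyone who prefers the command line over the Windows Defender Firewall UI, a rough PowerShell equivalent of that rule (run as administrator; the rule name is arbitrary) would be:

New-NetFirewallRule -DisplayName 'ollama-webui' -Direction Inbound -Protocol TCP -LocalPort 8080 -Profile Private -Action Allow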

Lastly, create a portproxy on the host machine.
In your WSL 2 instance, use the command:
ifconfig eth0

Note the inet IP address. Mine was something like 172.30.167.24

On the host machine, open an admin PowerShell and type in:

netsh interface portproxy add v4tov4 listenport=8080 listenaddress=0.0.0.0 connectport=8080 connectaddress=172.30.167.24

Now you should be able to connect to Open WebUI from any computer on your local network using your host device's IP,
ex: 192.168.1.10:8080

Note: This does not allow you to access the webpage outside your network.


@alex-pradas commented on GitHub (Sep 6, 2024):

Huge thanks! 👏🎉


@ncridlig commented on GitHub (Sep 20, 2024):

Really amazing, thanks! Have others experienced this issue with running ollama in the WSL shell afterwards?

> Error: could not connect to ollama app, is it running?

I think it does not automatically map from 0.0.0.0 to 127.0.0.1? For now, when I want to run it in the shell, I start a new server with ollama serve and this works.


@sanketss84 commented on GitHub (Oct 8, 2024):

Thanks a ton, I followed the steps exactly as mentioned above and was finally able to get the API endpoint working on the local network.

However, now I face a different problem: inside WSL 2 I can't run

ollama serve
ollama list

These give the following error all the time:

Error: could not connect to ollama app, is it running?

EDIT: found a way to work around this.
Since we now specify an OLLAMA_HOST inside /etc/systemd/system/ollama.service.d/environment.conf, we need to set the OLLAMA_HOST environment variable on the WSL machine as well, so that the ollama CLI connects to this host-ip:port.

run the following shell command to check if this would work for you

# since you already have the wsl 2 ip addr and port number from what you set in environment.conf file above use that same here
export OLLAMA_HOST=<your-wsl2-ip-addr>:11434
ollama list # should now list your installed llms 

To make sure this environment variable persists, let's add it to .bashrc (or .zshrc):

# for .bashrc
echo 'export OLLAMA_HOST=<your-wsl2-ip-addr>:11434' >> ~/.bashrc # append to bashrc
source ~/.bashrc
ollama list # should now work even when you reboot your machine or open a new window with a connection to your wsl 2 machine

# for .zshrc
echo 'export OLLAMA_HOST=<your-wsl2-ip-addr>:11434' >> ~/.zshrc # append to zshrc
source ~/.zshrc
ollama list # should now work even when you reboot your machine or open a new window with a connection to your wsl 2 machine

Now I am able to use the ollama CLI on WSL 2 and also have the API accessible on the local network for Open WebUI to interact with it.
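
One wrinkle worth noting: the WSL 2 address can change across reboots, so hardcoding it in .bashrc may break later. A sketch that derives it at shell startup instead (assuming, as above, that the first address from hostname -I is the eth0 one):

# in ~/.bashrc or ~/.zshrc: resolve the current WSL 2 address dynamically
export OLLAMA_HOST="$(hostname -I | awk '{print $1}'):11434"

Keep in mind the address baked into environment.conf and the Windows portproxy would still need updating after the IP changes.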


@sanketss84 commented on GitHub (Oct 9, 2024):

> Really amazing, thanks! Have others experienced this issue with running ollama in the WSL shell afterwards?
>
> > Error: could not connect to ollama app, is it running?
>
> I think it does not automatically map from 0.0.0.0 to 127.0.0.1? For now, when I want to run it in the shell, I start a new server with ollama serve and this works.

check my response above, it should help you


@samjespersen commented on GitHub (Nov 4, 2024):

> run the following shell command to check if this would work for you
>
> # since you already have the wsl 2 ip addr and port number from what you set in environment.conf file above use that same here
> export OLLAMA_HOST=<your-wsl2-ip-addr>:11434
> ollama list # should now list your installed llms

I got this to work with v0.3.14 but I've now done a fresh install of v0.4.0 and I'm back to the dreaded Error: could not connect to ollama app, is it running?


@JohnWilliston commented on GitHub (Nov 5, 2024):

> Hello! Just spent the last 3 or so hours struggling to figure this out and thought I'd leave my solution here to spare the next person who tries this out as well.

Thank you so much for posting your solution! This helped me get the Ollama API integrated with my favorite editor (Neovim) tonight after beating my head against it for a while. Cheers!


@drosanda commented on GitHub (Nov 8, 2024):

here is my setup using Windows 11 and WSL (Ubuntu-20.04)

  1. Add the environment to systemd by editing /etc/systemd/system/ollama.service
sudo nano /etc/systemd/system/ollama.service
  2. Edit and add this code under the [Service] section:
[Service]
...
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_ORIGINS=*"
...
  3. Apply the new systemd configuration
sudo systemctl daemon-reload
  4. Install expose-wsl
npx -y expose-wsl@latest
  5. Restart the ollama service
sudo systemctl restart ollama

then wait a couple of minutes for it to start

  6. Access your local IP from another computer on the same network to make sure ollama is exposed:
curl -v http://10.10.0.70:11434

NB:
Every time you want to expose ollama, you need to repeat steps 4 and 5.


@djdance commented on GitHub (Dec 19, 2024):

does anyone know how to do the same in Windows? o_0
I mean, where is that ollama.service file


@JohnWilliston commented on GitHub (Dec 19, 2024):

If you're running Ollama on Windows, you're likely doing so using WSL2, a virtual machine, etc. It should thus be in the same place as mentioned above. That's exactly what I've done FWIW.


@AkbarTheGreat commented on GitHub (Apr 6, 2025):

> If you're running Ollama on Windows, you're likely doing so using WSL2, a virtual machine, etc. It should thus be in the same place as mentioned above. That's exactly what I've done FWIW.

The Windows native app exited preview in October of last year, and was available for months before that as a preview release -- it's probably no longer wise to assume people are running it on WSL2. To actually answer djdance's question: you set the environment variables as user environment variables (https://github.com/ollama/ollama/blob/main/docs/faq.md#setting-environment-variables-on-windows).

I came upon this because I've been trying to figure out the magic to get WSL2 to be able to see the native Windows app (no luck so far?) and was hoping this had an answer.

Anyway, in case djdance or someone else sees this for that particular problem, there's the answer.


@JohnWilliston commented on GitHub (Apr 7, 2025):

> The Windows native app exited preview in October of last year, and was available for months before that as a preview release -- it's probably no longer wise to assume they're running on WSL2.

I didn't assume they're running on WSL2, I said it's 'likely', and I think that's still likely. I also answered the question by saying the environment variables should be defined as above. Both of which worked out well for me at least.

Regarding your issue, I'm not sure precisely what the problem is in light of your description, but what you do say makes me wonder if it's a networking thing if that's what you mean by "see the native Windows app". In my case, for example, I have exposed both the Ollama service running in WSL2 and the OpenWeb UI so they can be used independently by any machine on my local network. The former required bridging the WSL2's network to my local machine with a command like the following:

netsh interface portproxy add v4tov4 listenport=11434 listenaddress=0.0.0.0 connectport=11434 connectaddress=172.20.55.233

That sets up a proxy to connect port 11434 on my machine's local adapter to the same port on the network interface IP of the WSL2 distribution. That made it possible for me to connect Neovim and any other machine on my local network to the Ollama AI service. The following PowerShell script (which I got from somewhere on the web and tweaked a bit but can no longer find the original link) handled making the networking good for me:

$remoteport = bash.exe -c "ip -4 addr show eth0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}'"
$found = $remoteport -match '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}';

if( $found ){
  $remoteport = $matches[0];
} else{
  echo "The Script Exited, the ip address of WSL 2 cannot be found";
  exit;
}

#[Ports]

#All the ports you want to forward separated by comma
$ports=@(8080,11434);


#[Static ip]
#You can change the addr to your ip config to listen to a specific address
$addr='0.0.0.0';
$ports_a = $ports -join ",";


#Remove Firewall Exception Rules
iex "Remove-NetFireWallRule -DisplayName 'WSL 2 Firewall Unlock' ";

#adding Exception Rules for inbound and outbound Rules
iex "New-NetFireWallRule -DisplayName 'WSL 2 Firewall Unlock' -Direction Outbound -LocalPort $ports_a -Action Allow -Protocol TCP";
iex "New-NetFireWallRule -DisplayName 'WSL 2 Firewall Unlock' -Direction Inbound -LocalPort $ports_a -Action Allow -Protocol TCP";

for( $i = 0; $i -lt $ports.length; $i++ ){
  $port = $ports[$i];
  iex "netsh interface portproxy delete v4tov4 listenport=$port listenaddress=$addr";
  iex "netsh interface portproxy add v4tov4 listenport=$port listenaddress=$addr connectport=$port connectaddress=$remoteport";
}
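
Saved as something like wsl-proxy.ps1 (the name is arbitrary) and run from an elevated PowerShell prompt after each reboot, since the WSL 2 address changes between boots, this recreates the firewall rules and port proxies in one shot.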

@AkbarTheGreat commented on GitHub (Apr 7, 2025):

Sorry for being somewhat snappy in my message yesterday -- I was a bit under the weather and poking at a project I'd been meaning to get working (hooking up Open WebUI to my existing ollama installation) because it seemed low-lift. But my feeling ill probably led to me being a bit more terse than I normally would.

With that out of the way, I'm (as I alluded to in my snappier email) running the native Windows ollama app. I do a lot of my dev-style work on WSL2, but if I can avoid trying to figure out GPU driver translation down to the WSL2 layer for something like ollama, I very much prefer to run that on the base OS. It works fine, I've been using it for Continue.dev for months now, and it's a super useful product.

My issue was then getting a docker container (or anything, really) on the WSL2 side to see the port opened on native windows. I started my troubleshooting thinking it was a docker problem, but by the time I wound up here, I'd isolated my issue to just WSL2-native host IP shenanigans. Something like:

❯ curl 127.0.0.1:11434
curl: (7) Failed to connect to 127.0.0.1 port 11434 after 0 ms: Connection refused

That said, your comment led me to think of the obvious question -- if I open up ollama to listen on 0.0.0.0, there's no reason it wouldn't respond to calls on the host OS's LAN IP (for the sake of demonstration, 192.168.0.5 -- not my actual machine's internal IP), and lo and behold, that was enough to get me up and running:

❯ curl 192.168.0.5:11434
Ollama is running%

I plumbed up the docker container for Open WebUI to use that port and everything is working perfectly fine now.

I'm still not sure how to forward unknown traffic from WSL2 to the host OS, or if there's a reserved "this is my host OS" IP in WSL2. This machine is connected to the LAN over ethernet, so the IP never changes, but I could see this solution being a nightmare on a laptop that connects to several different networks. Because of that, I imagine there's a better option for roaming devices, but this at least solves my problem.
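
(For what it's worth, in WSL 2's default NAT networking the Windows host is usually reachable from inside the distro at the default gateway address, so a sketch along these lines may work in place of the LAN IP, Windows Firewall permitting:

# from inside WSL 2: in NAT mode the default gateway is the Windows host
HOST_IP=$(ip route show default | awk '{print $3}')
curl "http://$HOST_IP:11434"

Unlike the LAN address, the NAT gateway doesn't change when a laptop roams between networks.)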

Thanks for your help -- I'm not sure exactly which part of your message got me there, but waking up this morning and reading your reply was the step I needed to get my thoughts in a workable direction.


@JohnWilliston commented on GitHub (Apr 7, 2025):

> Thanks for your help -- I'm not sure exactly which part of your message got me there, but waking up this morning and reading your reply was the step I needed to get my thoughts in a workable direction.

You're welcome! And thanks for the kind apology. I didn't take it personally for what it's worth. The important bit is that you got it working, so I'm happy to have helped in that. I'm not a networking expert, but I believe the answer to your remaining question about how to "forward unknown traffic" would be related to the proxy commands that I found worked for me. I don't know how to generalize that (or if it even can be generalized), but that seems to be the preferred way to get WSL networking guests to talk to their hosts and local network segments.


@francescor commented on GitHub (Jan 29, 2026):

AI took me here, but a note for Ubuntu (linux) users:

When you edit

sudo systemctl edit ollama

the content should not be added where the earlier comments say

> Edit and add this code under [Service] section:
> Add (under the service category):

but after the first two comments of the override file:

### Editing /etc/systemd/system/ollama.service.d/override.conf
### Anything between here and the comment below will become the new contents of the file

[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"

### Lines below this comment will be discarded
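
After saving the override, a quick way to confirm systemd actually picked up the variable (a sketch, assuming the service is named ollama as above):

sudo systemctl daemon-reload
sudo systemctl restart ollama
systemctl show -p Environment ollama   # should print OLLAMA_HOST=0.0.0.0:11434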