[GH-ISSUE #3300] docker container only listens on ipv6 by default #27790

Open
opened 2026-04-22 05:22:52 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @nopoz on GitHub (Mar 22, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3300

What is the issue?

The Docker container only listens on IPv6 by default. This causes connection failures for other containers in the same stack that try to communicate with Ollama over IPv4.

What did you expect to see?

The container should listen on both IPv4 and IPv6.

Steps to reproduce

Deploy ollama:

version: '3.3'
services:
    ollama:
      container_name: ollama
      volumes:
          - "/docker/ollama:/root/.ollama"
      restart: unless-stopped
      image: ollama/ollama:0.1.29
      deploy:
        resources:
          reservations:
            devices:
              - driver: nvidia
                count: all
                capabilities: [gpu]

Verify that port 11434 is only listening on IPv6 inside the container:

~$ docker exec -t -i ollama /bin/bash
root@28988fb7b322:/# apt update
[...]
root@28988fb7b322:/# apt install net-tools
[...]
root@28988fb7b322:/# netstat -nap | grep LISTEN
tcp        0      0 127.0.0.11:35687        0.0.0.0:*               LISTEN      -
tcp6       0      0 :::11434                :::*                    LISTEN      1/ollama
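Note that a `tcp6`-only line in netstat does not by itself mean IPv4 is refused: on Linux, a wildcard IPv6 socket with `IPV6_V6ONLY` disabled (the kernel default, `net.ipv6.bindv6only=0`) also accepts IPv4 connections as IPv4-mapped addresses, yet netstat reports only a `tcp6` entry. A minimal Python sketch (not Ollama's code) illustrating this:

```python
import socket

# Bind a wildcard IPv6 socket in dual-stack mode; netstat would show this
# as "tcp6 :::PORT" even though IPv4 clients can connect to it.
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)  # dual-stack
srv.bind(("::", 0))  # port 0: let the kernel pick a free port
srv.listen(1)
port = srv.getsockname()[1]

# An IPv4 client can still reach the "tcp6" listener...
cli = socket.create_connection(("127.0.0.1", port), timeout=2)
conn, peer = srv.accept()
print(peer[0])  # IPv4-mapped address such as ::ffff:127.0.0.1
cli.close(); conn.close(); srv.close()
```

So whether a `:::11434` listener answers IPv4 depends on the dual-stack setting in the container's network namespace, not on the `tcp6` label alone.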

You can work around this issue by adding the following lines to the docker compose file:

      ports:
          - '127.0.0.1:11434:11434'

However, this is problematic, as it exposes the port to the entire Docker host, which is unnecessary. The port only needs to be exposed to the other containers in the same compose stack.

Are there any recent changes that introduced the issue?

No response

OS

Linux

Architecture

amd64

Platform

Docker, WSL2

Ollama version

0.1.29

GPU

Nvidia

GPU info

No response

CPU

No response

Other software

No response

GiteaMirror added the bug label 2026-04-22 05:22:52 -05:00

@dillfrescott commented on GitHub (Apr 2, 2024):

[cross@cross-pc ~]$ sudo netstat -tunlp | grep 11434
tcp6       0      0 :::11434                :::*                    LISTEN      565114/ollama   

It's doing it natively (non docker) too. Just tcp6...


@Bloomberg-zhong commented on GitHub (Jun 18, 2024):

I have the same problem.

root@Bloomberg-AI-B760M:~# cat /etc/systemd/system/ollama.service
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"

[Install]
WantedBy=default.target
root@Bloomberg-AI-B760M:~# netstat -antp | grep 11434
tcp6       0      0 :::11434          

Using this config, the result is the same:

root@Bloomberg-AI-B760M:~# cat /etc/systemd/system/ollama.service
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
Environment="OLLAMA_HOST=0.0.0.0"
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"

[Install]
WantedBy=default.target
root@Bloomberg-AI-B760M:~# netstat -antp | grep 11434
tcp6       0      0 :::11434                :::*                    LISTEN      10373/ollama        
tcp6       0      0 192.168.198.209:11434   192.168.198.206:37518   ESTABLISHED 10373/ollama        

@empyrials commented on GitHub (Jun 20, 2024):

My only fix was to enable IPv6 on my Docker network; then things work as planned between Open WebUI and Ollama using the hostnames.
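For compose users, enabling IPv6 on the network can be declared in the same file. A sketch, assuming a reasonably recent Docker/Compose (the `fd00::` subnet is a placeholder ULA prefix; pick your own, and on older Docker versions you may also need to enable IPv6 in the daemon config):

```yaml
networks:
  default:
    enable_ipv6: true
    ipam:
      config:
        - subnet: "fd00:c0de::/64"  # placeholder ULA prefix
```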


@LiMingchen159 commented on GitHub (Sep 19, 2024):

> [cross@cross-pc ~]$ sudo netstat -tunlp | grep 11434
> tcp6       0      0 :::11434                :::*                    LISTEN      565114/ollama
>
> It's doing it natively (non docker) too. Just tcp6...

Could you let me know if you solved this issue? Same problem


@LiMingchen159 commented on GitHub (Sep 19, 2024):

> i have same question
>
> root@Bloomberg-AI-B760M:~# cat /etc/systemd/system/ollama.service
> [Unit]
> Description=Ollama Service
> After=network-online.target
>
> [Service]
> Environment="OLLAMA_HOST=0.0.0.0:11434"
> ExecStart=/usr/local/bin/ollama serve
> User=ollama
> Group=ollama
> Restart=always
> RestartSec=3
> Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
>
> [Install]
> WantedBy=default.target
> root@Bloomberg-AI-B760M:~# netstat -antp | grep 11434
> tcp6       0      0 :::11434
>
> if use this is same
>
> root@Bloomberg-AI-B760M:~# cat /etc/systemd/system/ollama.service
> [Unit]
> Description=Ollama Service
> After=network-online.target
>
> [Service]
> Environment="OLLAMA_HOST=0.0.0.0"
> ExecStart=/usr/local/bin/ollama serve
> User=ollama
> Group=ollama
> Restart=always
> RestartSec=3
> Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
>
> [Install]
> WantedBy=default.target
> root@Bloomberg-AI-B760M:~# netstat -antp | grep 11434
> tcp6       0      0 :::11434                :::*                    LISTEN      10373/ollama
> tcp6       0      0 192.168.198.209:11434   192.168.198.206:37518   ESTABLISHED 10373/ollama

Could you let me know if you solved this issue? Same problem


@empyrials commented on GitHub (Sep 19, 2024):

> > i have same question
> >
> > root@Bloomberg-AI-B760M:~# cat /etc/systemd/system/ollama.service
> > [Unit]
> > Description=Ollama Service
> > After=network-online.target
> >
> > [Service]
> > Environment="OLLAMA_HOST=0.0.0.0:11434"
> > ExecStart=/usr/local/bin/ollama serve
> > User=ollama
> > Group=ollama
> > Restart=always
> > RestartSec=3
> > Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
> >
> > [Install]
> > WantedBy=default.target
> > root@Bloomberg-AI-B760M:~# netstat -antp | grep 11434
> > tcp6       0      0 :::11434
> >
> > if use this is same
> >
> > root@Bloomberg-AI-B760M:~# cat /etc/systemd/system/ollama.service
> > [Unit]
> > Description=Ollama Service
> > After=network-online.target
> >
> > [Service]
> > Environment="OLLAMA_HOST=0.0.0.0"
> > ExecStart=/usr/local/bin/ollama serve
> > User=ollama
> > Group=ollama
> > Restart=always
> > RestartSec=3
> > Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
> >
> > [Install]
> > WantedBy=default.target
> > root@Bloomberg-AI-B760M:~# netstat -antp | grep 11434
> > tcp6       0      0 :::11434                :::*                    LISTEN      10373/ollama
> > tcp6       0      0 192.168.198.209:11434   192.168.198.206:37518   ESTABLISHED 10373/ollama
>
> Could you let me know if you solved this issue? Same problem

The easiest solution I found was to throw it in Docker with nginx, both on the same virtual Docker network with IPv6 enabled, then have nginx redirect by name. It works out well, but it was super frustrating until I figured this out.


@lucian-nightwalker commented on GitHub (Mar 23, 2025):

This issue is Ollama-wide from what I can tell. I'm not even using Docker; I'm using WSL, running it as a direct service in Ubuntu, and still having the same problem. Ended up having to use socat to break Ollama's fingers.


@lucian-nightwalker commented on GitHub (Mar 23, 2025):

> This issue is Ollama-wide from what I can tell. I'm not even using Docker; I'm using WSL, running it as a direct service in Ubuntu, and still having the same problem. Ended up having to use socat to break Ollama's fingers.

For whatever reason, running socat appears to fix this issue.

socat TCP4-LISTEN:11434,bind=172.28.120.221,fork TCP4:127.0.0.1:11434 &

Working theory is that socat has higher binding authority than Ollama, and Ollama falls back onto IPv4 once socat snags 172.28.120.221:11434. Ollama then falls back to 127.0.0.1:11434 with IPv4 rather than obsessing over :::11434.

So, setting up a port forward via socat enables IPv4 to hit Ollama rather than getting the dreaded "no response" from a curl.

I don't understand it, nor could I find any information on it. But, figured I'd share just in case it helps someone else. And yes, before anyone says it, nothing about this makes any kind of sense to me. But, I got pissed off enough to try something weird and it worked. I imagine you could use a similar "sneak in the back door" approach with docker as well.

If you use this method, I would suggest setting socat up as a service that runs at boot before Ollama starts. It seems to be holding thus far. I'll report back if it breaks.

Also, obviously, substitute your own IP address for my 172.28.120.221. Have a good night.
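The "socat as a service, started before Ollama" suggestion could look something like the unit sketch below (untested; the unit and file name are hypothetical, and 172.28.120.221 is the address from the comment above, so substitute your own):

```
# /etc/systemd/system/ollama-socat.service (hypothetical name)
[Unit]
Description=socat IPv4 forwarder for Ollama
After=network-online.target
Before=ollama.service

[Service]
# Listen on the IPv4 address and forward to Ollama's loopback listener.
ExecStart=/usr/bin/socat TCP4-LISTEN:11434,bind=172.28.120.221,fork,reuseaddr TCP4:127.0.0.1:11434
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now ollama-socat.service`; `Before=ollama.service` orders it ahead of Ollama at boot, matching the theory above.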

Reference: github-starred/ollama#27790