[GH-ISSUE #7540] ollama blocking itself from binding port it's already using...? #4795

Closed
opened 2026-04-12 15:46:00 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @gearskullguy on GitHub (Nov 7, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7540

What is the issue?

I've just been going through the current instructions, and this is a really weird error to get after seeing all the "Pulled" messages:

$ sudo docker compose --profile gpu-nvidia up
[+] Running 38/38
... Pulled etc ... 0.0s
Attaching to n8n, n8n-import, ollama, ollama-pull-llama, qdrant, self-hosted-ai-starter-kit-postgres-1
Error response from daemon: driver failed programming external connectivity on endpoint ollama (e4cf1d...dc2cf1): Error starting userland proxy: listen tcp4 0.0.0.0:11434: bind: address already in use
$ sudo lsof -i :11434
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ollama 226 ollama 3u IPv4 22777 0t0 TCP localhost:11434 (LISTEN)
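For reference, a small pre-flight check along these lines can catch this before `docker compose up` fails. This is only a sketch: it assumes `lsof` is available (as in the output above), and the `systemctl` suggestion assumes a systemd-managed host install of ollama.

```shell
#!/bin/sh
# Sketch: report whether anything already listens on the ollama port
# before docker compose tries to publish it. Uses lsof, as in the
# lsof output above; if lsof is missing, the check reports "free".
PORT=11434

port_in_use() {
  lsof -n -iTCP:"$1" -sTCP:LISTEN >/dev/null 2>&1
}

if port_in_use "$PORT"; then
  echo "port $PORT busy: stop the host service first (e.g. sudo systemctl stop ollama)"
else
  echo "port $PORT free"
fi
```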

It appears the compose instructions tried to bind ollama to a port that a host-level ollama is already using. Maybe I attached it myself at some earlier point (I ran into different errors previously), in which case this step is just redundant and I apologize for the noise.
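If the host-level ollama needs to keep running, one workaround is to publish the container on a different host port instead. A minimal sketch, assuming the starter kit's `ollama` service and that `11435` is free on the host (both are assumptions, not from the starter kit itself):

```yaml
# In the starter kit's docker-compose.yml: change the published host
# port so it no longer collides with a host ollama bound to 11434.
services:
  ollama:
    ports:
      - "11435:11434"   # host:container
```

Clients on the host would then target `localhost:11435`; containers on the compose network still reach `ollama:11434` unchanged.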

OS

WSL2

GPU

Intel

CPU

Intel

Ollama version

0.3.12

GiteaMirror added the docker, wsl, question labels 2026-04-12 15:46:00 -05:00
Author
Owner

@rick-github commented on GitHub (Nov 7, 2024):

You have two ollama related containers, are they both starting an ollama server?


Reference: github-starred/ollama#4795