[GH-ISSUE #14947] [WSL2] Ollama on Windows host unreachable from Docker container/sandbox after proxy bypass (connections timeout) #9614

Closed
opened 2026-04-12 22:31:04 -05:00 by GiteaMirror · 3 comments

Originally created by @ranga291257 on GitHub (Mar 18, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14947

Summary

Ollama is installed on a Windows 11 host with OLLAMA_HOST=0.0.0.0:11434. It is healthy and reachable from the WSL2 Ubuntu 24.04 host shell. However, from inside a Docker container (NemoClaw sandbox) running inside WSL2, direct connections to the Windows-hosted Ollama instance time out after proxy bypass. This makes it impossible to use Ollama as a local inference backend from containerized workloads on the same machine.

Environment

  • Host OS: Windows 11
  • WSL2 distro: Ubuntu 24.04 LTS
  • GPU: NVIDIA RTX 4060 Ti 8 GB
  • Ollama: Installed on Windows host (not inside WSL or Docker)
  • OLLAMA_HOST: 0.0.0.0:11434
  • Docker: Engine installed inside WSL2 Ubuntu (not Docker Desktop)
  • Container: NemoClaw sandbox (Docker container running inside WSL2)
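
For context, OLLAMA_HOST is typically set as a persistent user environment variable on the Windows host, roughly as in this sketch (the Ollama tray app or service must be restarted before the new binding takes effect):
# On the Windows host (PowerShell); setx persists a user environment variable:
setx OLLAMA_HOST "0.0.0.0:11434"
# After restarting Ollama, verify the wildcard binding:
netstat -an | findstr 11434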

What Works

  • Ollama is healthy on the Windows host: curl http://127.0.0.1:11434/api/tags returns HTTP 200 locally
  • netstat on Windows shows Ollama listening on 0.0.0.0:11434 and [::]:11434
  • From the WSL2 host shell, both of these succeed:
curl http://<WSL-host-IP>:11434/api/tags   # HTTP 200, returns model JSON
curl http://<Windows-LAN-IP>:11434/api/tags  # HTTP 200, returns model JSON
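
For reference, <WSL-host-IP> here is read as the Windows host's address on the WSL virtual network; in default NAT-mode WSL2 this is usually the default gateway (an assumption; mirrored networking mode behaves differently):
# Inside the WSL2 Ubuntu shell (NAT mode assumed):
WIN_IP=$(ip route show default | awk '{print $3}')   # Windows host as seen from WSL2
curl "http://${WIN_IP}:11434/api/tags"               # expected: HTTP 200, model JSON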

What Fails

From inside the Docker container (running inside WSL2), after bypassing proxy settings with --noproxy '*':

# Inside container:
curl --noproxy '*' http://<WSL-host-IP>:11434/api/tags    # TIMEOUT
curl --noproxy '*' http://<Windows-LAN-IP>:11434/api/tags   # TIMEOUT
curl --noproxy '*' http://host.docker.internal:11434/api/tags  # No response

Without --noproxy '*', requests go through the container's HTTP_PROXY / HTTPS_PROXY / ALL_PROXY settings and return HTTP 403 from the proxy.
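
As a sanity check, the same bypass can be expressed through environment variables instead of the curl flag (a sketch; most clients honor both the lower- and upper-case forms):
# Inside the container: env-based equivalent of curl --noproxy '*'
export no_proxy='*' NO_PROXY='*'
curl http://host.docker.internal:11434/api/tags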

DNS resolution inside the container works (e.g. host.docker.internal resolves to 172.17.0.1), but TCP connections to port 11434 on Windows host IPs time out.
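
The DNS-versus-TCP split can be confirmed without curl, for example with bash's built-in /dev/tcp (a sketch; assumes bash and getent are available in the container image):
# Inside the container: pure TCP probe, no proxy or HTTP layer involved
getent hosts host.docker.internal    # name resolution (expected: 172.17.0.1)
timeout 5 bash -c 'cat < /dev/null > /dev/tcp/172.17.0.1/11434' \
  && echo "TCP open" || echo "TCP blocked or timed out"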

What I Already Ruled Out

  • Ollama is not down and not bound only to localhost
  • WSL2 host-to-Windows networking for Ollama is not broken
  • Docker installation is not broken
  • The container itself is not broken (cloud-backed inference works fine from the same container)
  • Windows Firewall was checked — port 11434 appears open for WSL
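
On the firewall point: a rule scoped to a program or to the WSL interface may still not match traffic sourced from the Docker bridge subnet (172.17.0.0/16), so an explicit port rule removes that variable. A minimal sketch, assuming the default Windows Defender Firewall and an elevated PowerShell:
# On the Windows host (elevated PowerShell): allow inbound TCP 11434 on all profiles
New-NetFirewallRule -DisplayName "Ollama 11434 (WSL/Docker)" `
  -Direction Inbound -Protocol TCP -LocalPort 11434 -Action Allow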

Questions for Ollama Maintainers

  1. Is there a known issue with Ollama on Windows being unreachable from Docker containers running inside WSL2 (not Docker Desktop)?
  2. Is host.docker.internal expected to route to the Windows host when Docker Engine is installed natively inside WSL2 (not via Docker Desktop)?
  3. Is there a recommended addressing pattern for reaching Windows-hosted Ollama from a Docker container inside WSL2?
    • WSL host gateway IP?
    • Windows LAN IP?
    • A specific hostname?
  4. Are there any known proxy variable interactions (HTTP_PROXY, HTTPS_PROXY) that intercept Ollama traffic inside containers even after --noproxy is set?
  5. Is there a recommended OLLAMA_HOST configuration or Windows Firewall rule to support this topology?

Additional Context

This issue surfaced while trying to use Ollama as a local inference provider from a NemoClaw (NVIDIA) sandbox. The same sandbox reaches the NVIDIA Cloud API without issue, which confirms the container networking itself is functional for outbound connections. The failure is specific to reaching the Windows-hosted Ollama instance from inside the Docker network namespace on WSL2.

Any guidance on the supported networking path or a known workaround (e.g., host-side relay, socat bridge, Docker --network host flag) would be very helpful.
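
For the record, the socat workaround would look roughly like this: a relay on the WSL2 host that containers reach at the Docker bridge gateway, forwarding on to the Windows host. A sketch, assuming NAT-mode WSL2 (where the Windows host is WSL's default gateway) and Docker 20.10+ for the host-gateway alias:
# On the WSL2 host: forward port 11434 to the Windows-hosted Ollama
WIN_IP=$(ip route show default | awk '{print $3}')
socat TCP-LISTEN:11434,fork,reuseaddr TCP:${WIN_IP}:11434 &

# From a container: host.docker.internal now resolves to the relay
docker run --rm --add-host=host.docker.internal:host-gateway curlimages/curl \
  -s --noproxy '*' http://host.docker.internal:11434/api/tags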

Cross-reference: This same problem has been filed on the NemoClaw (NVIDIA) side as NVIDIA/NemoClaw#336 (https://github.com/NVIDIA/NemoClaw/issues/336), "[WSL2] NemoClaw sandbox cannot reach Windows-hosted Ollama — local inference path blocked". That issue documents the full end-to-end troubleshooting trail from the NemoClaw/sandbox perspective. The Ollama-side question is specifically: does Ollama on Windows have any known limitation or recommended configuration for being reached from Docker containers running inside WSL2 (native Docker Engine, not Docker Desktop)?


@ranga291257 commented on GitHub (Mar 18, 2026):

please review

@rick-github commented on GitHub (Mar 19, 2026):

What happens if you run the following in WSL2:

docker run --rm curlimages/curl -s <WSL-host-IP>:11434
docker run --rm curlimages/curl -s <Windows-LAN-IP>:11434
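
(A variant with a short timeout and verbose output would make the failure mode explicit, i.e. whether the connection hangs, is refused, or is reset; -m is curl's maximum total time in seconds:)
docker run --rm curlimages/curl -sv -m 5 http://<WSL-host-IP>:11434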

@Chillyagi commented on GitHub (Mar 19, 2026):

Since you've already confirmed that the WSL2 host shell can reach Ollama on Windows, the most straightforward fix is to bypass the Docker network stack entirely. This allows the container to share the WSL2 host's network namespace.

  • Launch your NemoClaw container with the --network host flag.

  • Accessing Ollama: inside the container, you can then reach Ollama via http://localhost:11434 or the Windows LAN IP.

If this doesn't work, try a socat relay.
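
(As a concrete sketch of the first suggestion, assuming the WSL2-to-Windows path works for the LAN IP:)
# Shares the WSL2 host's network namespace; no Docker bridge or container proxy in the path
docker run --rm --network host curlimages/curl -s --noproxy '*' \
  http://<Windows-LAN-IP>:11434/api/tags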

Reference: github-starred/ollama#9614