[GH-ISSUE #1370] Inability to Forward Traffic on OpenWrt When Using --accept-clients #1878

Closed
opened 2026-04-16 08:44:15 -05:00 by GiteaMirror · 2 comments

Originally created by @burjuyz on GitHub (Aug 28, 2025).
Original GitHub issue: https://github.com/fosrl/pangolin/issues/1370

I am trying to use newt on an OpenWrt router to provide access to my local LAN (192.168.10.0/24) for remote clients. This is often referred to as a "site-to-site" or "subnet router" configuration.

The newt client on the OpenWrt router is configured as a "site" (gateway) and runs with the --accept-clients flag. The local subnet is correctly configured as a "Resource" on the Pangolin server, associated with the OpenWrt client.

Remote clients (e.g., a Windows machine running newt) connect successfully. The remote client correctly receives the route for 192.168.10.0/24.

The Problem

While the tunnel is established, there is a fundamental issue with traffic forwarding:

  1. Pinging the Gateway's Virtual IP works: The remote client can successfully ping the virtual IP address of the OpenWrt newt instance (e.g., 100.90.128.1). This confirms that the tunnel itself is up and the newt process on the router is responsive.
  2. Pinging the LAN Fails: The remote client cannot ping any device on the local LAN (e.g., 192.168.10.200). The packets are lost.

Root Cause Analysis & Diagnostics

After extensive debugging, we have determined that the issue stems from the fact that newt operates entirely in userspace and does not create a kernel TUN/TAP interface.

This creates a disconnect between the newt process and OpenWrt's kernel networking stack (which uses nftables).

Here’s what we've confirmed through diagnostics:

  • Encrypted traffic arrives: tcpdump on the router's WAN interface shows encrypted UDP packets arriving from the Pangolin server when the remote client sends a ping.
  • Decrypted packets never enter the kernel's forwarding path:
    • tcpdump on the br-lan interface (or any interface) never shows the decrypted ICMP packets.
    • Packet counters on the nftables FORWARD chain rules remain at zero.
    • This proves that the decrypted packet is never handed off from the newt process to the kernel for routing and firewall processing.
  • Kernel is correctly configured for forwarding: We have verified that net.ipv4.ip_forward = 1 and net.ipv4.conf.all.rp_filter = 0.
  • Firewall rules have no effect: We have tried multiple firewall configurations (both iptables-compat and native nftables via uci), including creating dedicated zones, masquerading (NAT), and explicit ACCEPT rules for the traffic. None of these rules are ever triggered because the packets never reach the firewall subsystem.

Conclusion

The core issue is a lack of integration between the userspace newt process and the kernel's routing and firewall engine. For traffic forwarding to work, the decrypted packets must be injected into the kernel's networking stack. The most standard and robust way to achieve this is by using a TUN virtual network interface.

Without the ability for newt to create or bind to a TUN interface, it appears impossible to use it as a proper subnet gateway on OpenWrt (and likely other Linux-based systems). The --accept-clients functionality seems to be broken for any scenario that requires kernel-level packet forwarding.
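For reference, attaching a userspace process to a kernel TUN device on Linux is a small amount of code. A minimal Python sketch (the constants are the real values from `<linux/if_tun.h>`; the interface name `newt0` is just an example, not anything newt actually uses):

```python
import fcntl
import os
import struct

# Constants from <linux/if_tun.h>
TUNSETIFF = 0x400454CA
IFF_TUN = 0x0001    # layer-3 TUN device (raw IP packets, no Ethernet header)
IFF_NO_PI = 0x1000  # do not prepend packet-information bytes

def build_ifreq(name: str) -> bytes:
    # struct ifreq: 16-byte interface name followed by a 16-byte union,
    # of which the first two bytes hold the flags
    return struct.pack("16sH14s", name.encode(), IFF_TUN | IFF_NO_PI, b"\x00" * 14)

def open_tun(name: str = "newt0") -> int:
    """Create/attach a TUN interface; requires CAP_NET_ADMIN (root)."""
    fd = os.open("/dev/net/tun", os.O_RDWR)
    fcntl.ioctl(fd, TUNSETIFF, build_ifreq(name))
    return fd  # read() yields outbound IP packets; write() injects inbound ones
```

Packets written to such a descriptor enter the kernel's routing and nftables FORWARD path exactly as if they had arrived on a physical interface, which is precisely the hand-off the diagnostics above show is missing.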

Suggested Solution / Feature Request

Please consider adding a feature to the newt client, possibly via a new command-line flag (e.g., --create-tun), that creates a TUN interface and forwards all site-to-site traffic through it. This would allow for seamless and standard integration with the host operating system's networking and firewall capabilities, making the subnet routing feature truly functional.
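In essence, such a mode would shuttle packets between the TUN descriptor and newt's WireGuard engine. A hypothetical single-iteration sketch (`wg_recv`/`wg_send` are stand-ins for newt's internal decrypt/encrypt API, not real functions; newt itself is written in Go and would run this as goroutines):

```python
import os
import select

def pump_once(tun_fd: int, wg_recv, wg_send, mtu: int = 1420) -> None:
    """One iteration of a TUN <-> tunnel shuttle.

    tun_fd is the TUN device descriptor; wg_recv()/wg_send() are hypothetical
    stand-ins for the userspace WireGuard engine. A real gateway would run
    this in a loop.
    """
    # Outbound: a packet the kernel routed into the TUN device
    readable, _, _ = select.select([tun_fd], [], [], 0)
    if readable:
        wg_send(os.read(tun_fd, mtu))
    # Inbound: a decrypted packet from a remote client, injected into the kernel
    pkt = wg_recv()
    if pkt:
        os.write(tun_fd, pkt)
```

The inbound `os.write()` is the step the current userspace-only design omits: without it, decrypted packets never reach the kernel's forwarding path.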


@oschwartz10612 commented on GitHub (Sep 1, 2025):

Hi! I think you can do what you are after with --native on newt.

https://docs.digpangolin.com/manage/clients/add-client#native-mode
https://github.com/fosrl/newt?tab=readme-ov-file#cli-args

This will create a native kernel interface. In this mode, though, the resource created in Pangolin will not do anything; instead it is up to you on OpenWrt to handle the packets in the kernel, which I think is what you want. Using the remote subnets setting in the dashboard will make sure the subnet is forwarded for the olm clients.


@burjuyz commented on GitHub (Sep 2, 2025):

When using Newt on an OpenWrt router behind Carrier-Grade NAT (CGNAT) with a "grey" (non-public) IP address, the default Relay Mode fails to establish a functional WireGuard tunnel.

The Newt service on the router successfully connects to the control server websocket and receives a configuration. However, the server does not provide the necessary endpoint information for the remote peers (Olm clients). This results in a complete failure of the WireGuard handshake process, rendering the tunnel non-functional, even though all devices appear to be configured correctly.

The issue is fully resolved by manually setting the relay server's IP address as the endpoint for the peer on the Newt device.

Log Analysis & Evidence

1. Newt Service Log on OpenWrt Router (at startup)

The router log clearly shows the Newt service connecting successfully but then failing to receive endpoint information for the peers.

Key Log Entries:

INFO: ... Websocket connected
INFO: ... Tunnel connection to server established successfully!
INFO: ... Received WireGuard clients configuration from remote server
INFO: ... Assigning IP address 100.90.128.0/24 to interface newt  <-- (Note: Also assigns an incorrect network address instead of a host address)
...
INFO: ... Added peer with no endpoint!
INFO: ... Peer [PEER_PUBLIC_KEY_1] added successfully
INFO: ... Added peer with no endpoint!
INFO: ... Peer [PEER_PUBLIC_KEY_2] added successfully
INFO: ... Added peer with no endpoint!
INFO: ... Peer [PEER_PUBLIC_KEY_3] added successfully

2. wg show command output on OpenWrt Router

This output confirms the missing configuration. The peer corresponding to the Olm client has no endpoint and, critically, no latest handshake.

interface: newt
  public key: [ROUTER_PUBLIC_KEY]
  private key: (hidden)
  listening port: [SOME_PORT]

peer: [OLM_CLIENT_PUBLIC_KEY]
  endpoint: (none)  <-- PROBLEM: Endpoint is missing
  allowed ips: 100.90.128.2/32
  latest handshake: (none)  <-- SYMPTOM: No handshake is possible
  transfer: 0 B received, 0 B sent
  persistent keepalive: every 1 second

3. Olm Client Log on Windows

The client log shows that it connects to the control server and attempts to configure the peer, but ultimately fails to connect, timing out with a "disconnected" warning.

Key Log Entries:

INFO: ... Websocket Connected
INFO: ... Sent registration message
...
INFO: ... Configured peer [ROUTER_PUBLIC_KEY]
INFO: ... Started monitoring peer 1
INFO: ... WireGuard device created.
WARN: ... Peer 1 is disconnected  <-- FINAL RESULT: Connection fails

Steps to Reproduce

  1. Install newt on an OpenWrt router that is behind a CGNAT (no public IP).
  2. Configure newt in the default Relay Mode.
  3. Connect an olm client from an external network.
  4. Observe the logs on the router: the Added peer with no endpoint! message will appear.
  5. Check the WireGuard status with wg show: the peer will have no endpoint and no latest handshake.
  6. All traffic (e.g., ping) through the tunnel will fail.

Workaround (Manual Fix)

The connection can be made to work perfectly by manually applying the correct configuration on the router after the newt service has started. This proves that the underlying WireGuard relay mechanism is functional and the issue lies solely with the configuration pushed by the server.

  1. Correct the incorrect IP assignment:
    ip addr del 100.90.128.0/24 dev newt ; ip addr add 100.90.128.1/24 dev newt
    
  2. Manually set the relay server as the endpoint for the peer:
    # The IP ########### is the relay server's address, discovered from the logs.
    # The port 21820 is the assumed default for the Gerbil relay service.
    wg set newt peer [OLM_CLIENT_PUBLIC_KEY] endpoint IP ########:21820
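Step 1's correction can be sanity-checked with Python's ipaddress module, which confirms that 100.90.128.0/24 names the network itself and that .1 is the first usable host address:

```python
import ipaddress

def first_usable_host(cidr: str) -> str:
    """Return the first host address in a network, with its prefix length."""
    net = ipaddress.ip_network(cidr, strict=True)  # strict=True: host bits must be zero
    return f"{next(net.hosts())}/{net.prefixlen}"

print(first_usable_host("100.90.128.0/24"))  # -> 100.90.128.1/24
```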
    

Conclusion

The control server (Hidden domain) is not correctly providing the relay server's IP address as the endpoint to Newt devices when operating in Relay Mode. This prevents the WireGuard handshake from ever occurring.

Suggested Solution

The server-side logic should be updated to ensure that when a peer connects via the relay, the IP and port of the relay service are pushed as the endpoint in the WireGuard configuration to all other peers in that site.
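The fix could amount to a fallback when the server builds each peer's configuration. A hypothetical sketch (`Peer`, `relay_host`, and the 21820 default are illustrative, not Pangolin's actual types or values):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Peer:
    public_key: str
    endpoint: Optional[str]  # None when the peer has no reachable address (e.g. CGNAT)

def effective_endpoint(peer: Peer, relay_host: str, relay_port: int = 21820) -> str:
    """Pick the endpoint pushed to other peers: direct if known, else the relay."""
    return peer.endpoint if peer.endpoint else f"{relay_host}:{relay_port}"

# A CGNAT'd peer with no direct endpoint falls back to the relay
print(effective_endpoint(Peer("abc=", None), "203.0.113.7"))  # -> 203.0.113.7:21820
```

With a fallback like this, the `Added peer with no endpoint!` case above would never reach the WireGuard configuration, and the handshake could proceed via the relay.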

Reference: github-starred/pangolin#1878