[GH-ISSUE #284] Newt loses connection to server and will only reconnect on service restart #2067

Open
opened 2026-05-03 05:47:21 -05:00 by GiteaMirror · 12 comments

Originally created by @imoBooze on GitHub (Mar 28, 2026).
Original GitHub issue: https://github.com/fosrl/newt/issues/284

Describe the Bug

Whenever the Wi-Fi at a site disconnects for a period of time and then reconnects, the site's Newt instance is unable to reconnect to Pangolin. Newt's logs mention several periodic ping failures.

Environment

  • OS Type & Version: Debian Trixie
  • Pangolin Version: 1.16.2
  • Gerbil Version: 1.3.0
  • Traefik Version: 2.11.41
  • Newt Version: 1.10.3
  • Olm Version: (if applicable)

To Reproduce

  1. Set up site with newt
  2. Disconnect wifi of site
  3. Wait around 10 minutes
  4. Reconnect wifi of site
  5. Newt loses connection

Expected Behavior

Newt reconnects after wifi outage.


@github-actions[bot] commented on GitHub (Apr 12, 2026):

This issue has been automatically marked as stale due to 14 days of inactivity. It will be closed in 14 days if no further activity occurs.


@imoBooze commented on GitHub (Apr 12, 2026):

As of newt version 1.11.0, this issue appears to have been fixed. Could others also confirm?


@Jdsx74 commented on GitHub (Apr 16, 2026):

I have the same problem with Pangolin 1.17.0 and Newt 1.11.0.
I start the Newt service, but after 4-5 days the connection drops and I have to restart the Newt service to be able to access the site again.

Thank you!


@Kornelius777 commented on GitHub (Apr 17, 2026):

Same here.

Would it be possible to implement a health check so that a script could restart Newt?


@svillar commented on GitHub (Apr 18, 2026):

> As of newt version 1.11.0, this issue appears to have been fixed. Could others also confirm?

I have Newt 1.11.0 and I can easily reproduce this issue, though not by following the same steps. In my case Newt is running in a Docker container on a NAS (ARM) which is connected via Ethernet, so unless something goes very wrong it's always online. At some point it disconnects:

newt  | WARN: 2026/04/18 01:01:55 Periodic ping failed (2 consecutive failures): all 2 ping attempts failed, last error: failed to read ICMP packet: i/o timeout
newt  | INFO: 2026/04/18 01:02:05 Target 17 status changed: unhealthy -> healthy
newt  | WARN: 2026/04/18 01:02:09 Periodic ping failed (3 consecutive failures): all 2 ping attempts failed, last error: failed to read ICMP packet: i/o timeout
newt  | WARN: 2026/04/18 01:02:25 Periodic ping failed (4 consecutive failures): all 2 ping attempts failed, last error: failed to read ICMP packet: i/o timeout
newt  | WARN: 2026/04/18 01:02:43 Periodic ping failed (5 consecutive failures): all 2 ping attempts failed, last error: failed to read ICMP packet: i/o timeout
newt  | WARN: 2026/04/18 01:03:30 Periodic ping failed (6 consecutive failures): all 4 ping attempts failed, last error: failed to read ICMP packet: i/o timeout
newt  | WARN: 2026/04/18 01:04:31 Periodic ping failed (7 consecutive failures): all 4 ping attempts failed, last error: failed to read ICMP packet: i/o timeout
newt  | WARN: 2026/04/18 01:05:35 Periodic ping failed (8 consecutive failures): all 4 ping attempts failed, last error: failed to read ICMP packet: i/o timeout
newt  | WARN: 2026/04/18 01:06:38 Periodic ping failed (9 consecutive failures): all 4 ping attempts failed, last error: failed to read ICMP packet: i/o timeout
newt  | WARN: 2026/04/18 01:07:42 Periodic ping failed (10 consecutive failures): all 4 ping attempts failed, last error: failed to read ICMP packet: i/o timeout
newt  | WARN: 2026/04/18 01:08:45 Periodic ping failed (11 consecutive failures): all 4 ping attempts failed, last error: failed to read ICMP packet: i/o timeout
newt  | WARN: 2026/04/18 01:09:49 Periodic ping failed (12 consecutive failures): all 4 ping attempts failed, last error: failed to read ICMP packet: i/o timeout
newt  | WARN: 2026/04/18 01:10:52 Periodic ping failed (13 consecutive failures): all 4 ping attempts failed, last error: failed to read ICMP packet: i/o timeout
newt  | WARN: 2026/04/18 01:11:56 Periodic ping failed (14 consecutive failures): all 4 ping attempts failed, last error: failed to read ICMP packet: i/o timeout
newt  | WARN: 2026/04/18 01:12:59 Periodic ping failed (15 consecutive failures): all 4 ping attempts failed, last error: failed to read ICMP packet: i/o timeout
newt  | WARN: 2026/04/18 01:14:03 Periodic ping failed (16 consecutive failures): all 4 ping attempts failed, last error: failed to read ICMP packet: i/o timeout
newt  | WARN: 2026/04/18 01:15:06 Periodic ping failed (17 consecutive failures): all 4 ping attempts failed, last error: failed to read ICMP packet: i/o timeout
newt  | WARN: 2026/04/18 01:16:10 Periodic ping failed (18 consecutive failures): all 4 ping attempts failed, last error: failed to read ICMP packet: i/o timeout
newt  | WARN: 2026/04/18 01:17:14 Periodic ping failed (19 consecutive failures): all 4 ping attempts failed, last error: failed to read ICMP packet: i/o timeout
newt  | INFO: 2026/04/18 01:17:52 Server version: 1.16.2
newt  | INFO: 2026/04/18 01:17:52 Websocket connected
newt  | WARN: 2026/04/18 01:18:17 Periodic ping failed (20 consecutive failures): all 4 ping attempts failed, last error: failed to read ICMP packet: i/o timeout
newt  | WARN: 2026/04/18 01:19:20 Periodic ping failed (21 consecutive failures): all 4 ping attempts failed, last error: failed to read ICMP packet: i/o timeout

and then it never recovers (when I pulled the log it was reporting >1000 consecutive failures). Only restarting it makes it work again. It's a serious issue, as it makes the resources lose all connectivity.


@svillar commented on GitHub (Apr 18, 2026):

By the way, I'm available for further testing, e.g. with an image that has extra debugging or the like; just ping me back.


@AstralDestiny commented on GitHub (Apr 18, 2026):

@svillar Mind poking me on Discord and throwing me a site config?


@v3rm1n0 commented on GitHub (Apr 24, 2026):

Same issue here. My router restarts frequently for updates, resulting in exactly this. Would it be possible to have Newt exit or kill itself after x consecutive failures? That way systemd's restart-on-failure could work, as sketched below.

Edit: I am on newt 1.11.0
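
If Newt did exit after repeated failures, a systemd drop-in along these lines would let restart-on-failure take over. This is only a sketch; the unit name newt.service is assumed, and it does nothing unless Newt actually exits with a non-zero status:

# /etc/systemd/system/newt.service.d/restart.conf (hypothetical drop-in)
[Unit]
# Disable the start-rate limit so restarts can continue indefinitely
StartLimitIntervalSec=0

[Service]
Restart=on-failure
RestartSec=5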


@sandroshu commented on GitHub (Apr 24, 2026):

It is not resolved in the latest available release.

The workaround mentioned in my issue (https://github.com/fosrl/newt/issues/310) still works for the systemd service, though it is not optimal for the time being.

Create a service for it or run it in a screen session in the background. For example, I have a separate newt-healer service:

[Unit]
Description=Newt Auto-Restart when Issues detected
After=newt.service
PartOf=newt.service

[Service]
Type=simple
ExecStart=/usr/local/bin/newt-healer.sh
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
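
The referenced /usr/local/bin/newt-healer.sh isn't shown above. A minimal sketch of what such a script might look like, assuming Newt runs as the systemd unit newt and logs its "Periodic ping failed" warnings to the journal (the threshold and intervals are arbitrary guesses, not Newt defaults):

#!/usr/bin/env bash
# Hypothetical newt-healer.sh: restart newt when the journal shows
# repeated ping failures within a short window.
THRESHOLD=10      # failures in the window before we act
CHECK_EVERY=60    # seconds between checks

while true; do
    # Count "Periodic ping failed" warnings newt logged in the last 5 minutes
    failures=$(journalctl -u newt --since "5 minutes ago" --no-pager -q \
        | grep -c "Periodic ping failed")
    if [ "$failures" -ge "$THRESHOLD" ]; then
        echo "newt-healer: $failures recent ping failures, restarting newt"
        systemctl restart newt
        sleep 120   # give newt time to reconnect before checking again
    fi
    sleep "$CHECK_EVERY"
done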

@AstralDestiny commented on GitHub (Apr 24, 2026):

Wouldn't really use the latest RC right now, honestly; we've got a few critical bugs.


@Wireheadbe commented on GitHub (Apr 27, 2026):

I had tried to work around this by using the "healthy" file, but Newt apparently doesn't remove that file when its status is unhealthy. That could simply be fixed in code: when a connectivity issue exists, /var/healthy gets removed.
At that point, the container would restart, and that's that.

services:
  newt:
    image: fosrl/newt
    container_name: newt
    restart: unless-stopped
    environment:
      - PANGOLIN_ENDPOINT=https://xxxx.xxxx
      - NEWT_ID=xxxx
      - NEWT_SECRET=xxxx
      - HEALTH_FILE=/var/healthy
    healthcheck:
      test: ["CMD-SHELL", "[ -f /var/healthy ]"]
      interval: 30s
      timeout: 5s
      retries: 5
      start_period: 30s
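
One caveat worth noting: outside of Swarm, a failing healthcheck only flips the container's status to unhealthy; plain Docker never restarts it by itself. Something has to act on that status, e.g. a sidecar like willfarrell/autoheal, or a small host-side loop along these lines (a sketch; the container name newt is assumed):

#!/usr/bin/env bash
# Sketch: restart the newt container whenever Docker marks it unhealthy.
while true; do
    status=$(docker inspect --format '{{.State.Health.Status}}' newt 2>/dev/null)
    if [ "$status" = "unhealthy" ]; then
        echo "newt is unhealthy, restarting container"
        docker restart newt
    fi
    sleep 30
done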

@bajo commented on GitHub (Apr 29, 2026):

Thanks @sandroshu for pointing me to this issue.
Copying my comment from the closed ticket here; maybe it is useful for somebody else. :)

I faced a similar problem when my cheap VPS ran out of memory and the OOM killer did its job on Pangolin. After the service restarted, Newt failed to automatically reconnect. That was on Newt version 1.10.4 on NixOS.

Thus, I added a simple systemd watchdog service and timer on the host on which newt is running.

systemctl cat newt-watchdog.timer
[Unit]
Description=Run newt-watchdog every 5 minutes

[Timer]
OnBootSec=2min
OnUnitActiveSec=5min
Unit=newt-watchdog.service

[Install]
WantedBy=timers.target

and

systemctl cat newt-watchdog.service
[Unit]
Description=Restarts Newt if the tunnel is dead
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
# We use /bin/bash -c to allow for the 'if' logic and curl pipe
ExecStart=/bin/bash -c 'if ! /usr/bin/curl -fs --max-time 10 --resolve your-app.example.com:443:1.2.3.4 https://your-app.example.com > /dev/null; then echo "Tunnel check failed. Restarting Newt..."; /usr/bin/systemctl restart newt; else echo "Tunnel is healthy."; fi'

[Install]
WantedBy=multi-user.target

The --resolve your-app.example.com:443:1.2.3.4 part is only useful for me because I'm using the same FQDN internally as on the public Internet, so I need to make sure curl actually checks the app on the Internet and not on my LAN.
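
For anyone adapting this, the check can be run by hand first (same placeholder hostname and IP as above); exit status 0 is the healthy branch of the watchdog:

# -v shows which IP curl actually connected to, confirming --resolve took effect
curl -fsv --max-time 10 \
    --resolve your-app.example.com:443:1.2.3.4 \
    https://your-app.example.com -o /dev/null; echo "exit: $?"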

Reference: github-starred/newt#2067