mirror of
https://github.com/fosrl/gerbil.git
synced 2026-05-07 04:09:58 -05:00
[GH-ISSUE #51] Gerbil having several identical panics then proceeds to start #232
Originally created by @wittypixel on GitHub (Jan 19, 2026).
Original GitHub issue: https://github.com/fosrl/gerbil/issues/51
Originally assigned to: @oschwartz10612 on GitHub.
Describe the Bug
After spinning up the Pangolin container stack (pangolin, traefik, gerbil, crowdsec), connections are stopped and a panic can be seen in the Gerbil logs.
Shortly after, Gerbil continues to go about its business and seems to establish a connection with the remote host.
Environment
To Reproduce
I typically notice this behavior when restarting the Pangolin container stack. I restart the stack directly from the terminal using `sudo docker compose` commands. I also attempted to pull the latest container, to no avail; the error was still present.
Expected Behavior
I believe this one is pretty clear: there should be no panic.
@wittypixel commented on GitHub (Jan 19, 2026):
Possibly important: I also just noticed that the Newt network connection looks to be having a hard time, according to the logs on the remote server.
That said, I don't believe the two issues are related; they may just coincidentally be happening at the same time.
@icezar commented on GitHub (Mar 19, 2026):
Running Pangolin 1.x with Gerbil on the same stack. Seeing the identical panic:
In my case the panic doesn't always coincide with a stack restart — it also occurs randomly during normal operation (no obvious external trigger). After Gerbil recovers and re-registers with Pangolin, some resources end up in a broken state: health checks fail and traffic returns "no server available" via Traefik. The resources still exist in the Pangolin UI but are no longer functional. Deleting and recreating the resource entry in Pangolin fixes it, which suggests Pangolin doesn't fully re-push all resource mappings to Gerbil after the crash recovery cycle.
Workaround I'm testing: restarting the Pangolin container after a Gerbil crash forces a clean re-registration of all resources. Still confirming whether this is reliable.
The core bug is in relay.go:237 — a channel is being closed twice, likely due to a race between the shutdown signal and the UDP proxy server's own cleanup. A nil-check or sync.Once guard on that close would likely fix it.