mirror of
https://github.com/fosrl/pangolin.git
synced 2026-05-06 12:55:03 -05:00
[GH-ISSUE #2364] Traefik cannot fetch configuration data (Client.Timeout exceeded while awaiting headers) #2152
Originally created by @pizzaandcheese on GitHub (Jan 28, 2026).
Original GitHub issue: https://github.com/fosrl/pangolin/issues/2364
Describe the Bug
When opening the "Request Logs" section the following error appears in the traefik logs:
ERR Provider error, retrying in 586.330963ms error="cannot fetch configuration data: do fetch request: Get \"http://pangolin-app:3001/api/v1/traefik-config\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" providerName=http
When the error appears in the logs, the dashboard also shows a modal stating: "Error: failed to filter logs"
On one of my larger instances the "Request Logs" page never loads and locks up the dashboard for a while.
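If the backend is slow to answer rather than unreachable, one workaround worth trying is raising the HTTP provider's poll timeout in Traefik's static configuration. This is a sketch, not a confirmed fix; the endpoint URL is taken from the error message above, and the timeout values are examples:

```yaml
# Traefik static config (sketch) — raise the HTTP provider timeout
providers:
  http:
    endpoint: "http://pangolin-app:3001/api/v1/traefik-config"
    pollInterval: "5s"
    pollTimeout: "30s"   # default is 5s; raise it if the API is slow under load
```

This only masks slowness: if the API never answers at all, the underlying cause (resource starvation or broken container networking, as discussed in the comments) still needs fixing.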
Environment
To Reproduce
Open "Request Logs" section
https://github.com/user-attachments/assets/8680cd9c-1265-40d4-9bd3-8ed90324d1be
Expected Behavior
Request Logs appear without error
@pizzaandcheese commented on GitHub (Jan 28, 2026):
I am running everything with podman quadlets.
Here are my configs:
Pangolin Dashboard:
Traefik:
Gerbil:
Traefik Static Config:
Traefik Dynamic Config:
Pangolin Config:
@ErroneousBosch commented on GitHub (Feb 4, 2026):
Seeing the same error running via Docker Compose on TrueNAS.
@Viceman256 commented on GitHub (Feb 5, 2026):
Same issue with Newt in Docker for Windows.
@github-actions[bot] commented on GitHub (Feb 20, 2026):
This issue has been automatically marked as stale due to 14 days of inactivity. It will be closed in 14 days if no further activity occurs.
@ErroneousBosch commented on GitHub (Feb 20, 2026):
Still happening, not stale
@intari commented on GitHub (Feb 21, 2026):
Still happens for me, even with EE.
System details:
```
root@sarri:~# uname -a
Linux sarri.intari.net 6.1.0-41-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.158-1 (2025-11-09) x86_64 GNU/Linux
root@sarri:~# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
root@sarri:~# hostnamectl
Static hostname: sarri.intari.net
Icon name: computer-vm
Chassis: vm 🖴
Machine ID: 4a22df7f51a9497ce4cb6e8c04f2813f
Boot ID: 24728e16214a442b9be764060a63fa3c
Virtualization: kvm
Operating System: Debian GNU/Linux 12 (bookworm)
Kernel: Linux 6.1.0-41-amd64
Architecture: x86-64
Hardware Vendor: Red Hat
Hardware Model: KVM
Firmware Version: 1.16.0-4.module_el8.9.0+3659+9c8643f3
root@sarri:~# uname -r
6.1.0-41-amd64
root@sarri:~# cat /proc/version
Linux version 6.1.0-41-amd64 (debian-kernel@lists.debian.org) (gcc-12 (Debian 12.2.0-14+deb12u1) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP PREEMPT_DYNAMIC Debian 6.1.158-1 (2025-11-09)
root@sarri:~# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
BIOS Vendor ID: Red Hat
Model name: QEMU Virtual CPU version 2.5+
BIOS Model name: RHEL 7.6.0 PC (i440FX + PIIX, 1996) CPU @ 2.0GHz
BIOS CPU family: 1
CPU family: 6
Model: 6
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 1
Stepping: 3
BogoMIPS: 5199.99
Flags: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 ht syscall nx lm rep_good nopl xtopology cpuid tsc_known_freq pni cx16 x2apic hypervisor lahf_lm cpuid_fault pti
Virtualization features:
Hypervisor vendor: KVM
Virtualization type: full
Caches (sum of all):
L1d: 64 KiB (2 instances)
L1i: 64 KiB (2 instances)
L2: 8 MiB (2 instances)
L3: 16 MiB (1 instance)
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerabilities:
Gather data sampling: Not affected
Indirect target selection: Mitigation; Aligned branch/return thunks
Itlb multihit: KVM: Mitigation: VMX unsupported
L1tf: Mitigation; PTE Inversion
Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Meltdown: Mitigation; PTI
Mmio stale data: Unknown: No mitigations
Reg file data sampling: Not affected
Retbleed: Not affected
Spec rstack overflow: Not affected
Spec store bypass: Vulnerable
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Srbds: Not affected
Tsa: Not affected
Tsx async abort: Not affected
Vmscape: Not affected
root@sarri:~# cat /proc/cpuinfo | head -20
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 6
model name : QEMU Virtual CPU version 2.5+
stepping : 3
microcode : 0x1
cpu MHz : 2599.996
cache size : 16384 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 ht syscall nx lm rep_good nopl xtopology cpuid tsc_known_freq pni cx16 x2apic hypervisor lahf_lm cpuid_fault pti
root@sarri:~# free -h
total used free shared buff/cache available
Mem: 960Mi 867Mi 72Mi 2.3Mi 165Mi 92Mi
Swap: 0B 0B 0B
root@sarri:~# cat /proc/meminfo | grep -E "MemTotal|MemFree|MemAvailable"
MemTotal: 983488 kB
MemFree: 73740 kB
MemAvailable: 95352 kB
root@sarri:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 462M 0 462M 0% /dev
tmpfs 97M 832K 96M 1% /run
/dev/vda2 20G 5.8G 13G 31% /
tmpfs 481M 0 481M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
overlay 20G 5.8G 13G 31% /var/lib/docker/overlay2/3b3e34ddf008f62d92bb64f79f528ce66603503c1232f7a84c249fea65ffec49/merged
overlay 20G 5.8G 13G 31% /var/lib/docker/overlay2/a9d773c2e97e990499716c3b0df9fdc4786e7556c616f76f609fe3c3e8efaf22/merged
overlay 20G 5.8G 13G 31% /var/lib/docker/overlay2/ee9ec06245f0671244df180d9089d17dacde0fa5e26f23963f0036c4af54e7b5/merged
overlay 20G 5.8G 13G 31% /var/lib/docker/overlay2/7df33bc536d2b10666dcd67e7bae4bc9d040a137f6d5ecb7d4ab5914dff08874/merged
tmpfs 97M 0 97M 0% /run/user/0
root@sarri:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sr0 11:0 1 1024M 0 rom
vda 254:0 0 20G 0 disk
├─vda1 254:1 0 1M 0 part
└─vda2 254:2 0 20G 0 part /
root@sarri:~# fdisk -l
Disk /dev/vda: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 268BA30D-8C1A-4C4C-9AB0-506F234B569C
Device Start End Sectors Size Type
/dev/vda1 2048 4095 2048 1M BIOS boot
/dev/vda2 4096 41940607 41936512 20G Linux filesystem
root@sarri:~# systemd-detect-virt
kvm
root@sarri:~# dmesg | grep -i virtual
[ 0.011780] Booting paravirtualized kernel on KVM
[ 0.175592] smpboot: CPU0: Intel QEMU Virtual CPU version 2.5+ (family: 0x6, model: 0x6, stepping: 0x3)
[ 0.175849] Performance Events: PMU not available due to virtualization, using software events only.
[ 1.035752] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
[ 1.039233] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input2
[ 2.048576] systemd[1]: Detected virtualization kvm.
root@sarri:~# lspci | grep -i virtual
root@sarri:~# cat /proc/cpuinfo | grep -i hypervisor
flags : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 ht syscall nx lm rep_good nopl xtopology cpuid tsc_known_freq pni cx16 x2apic hypervisor lahf_lm cpuid_fault pti
flags : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 ht syscall nx lm rep_good nopl xtopology cpuid tsc_known_freq pni cx16 x2apic hypervisor lahf_lm cpuid_fault pti
root@sarri:~# dmesg | grep -i kvm
[ 0.000000] DMI: Red Hat KVM, BIOS 1.16.0-4.module_el8.9.0+3659+9c8643f3 04/01/2014
[ 0.000000] Hypervisor detected: KVM
[ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[ 0.000001] kvm-clock: using sched offset of 16661692755771130 cycles
[ 0.000003] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
[ 0.011780] Booting paravirtualized kernel on KVM
[ 0.016251] kvm-guest: PV spinlocks enabled
[ 0.275098] clocksource: Switched to clocksource kvm-clock
[ 2.048576] systemd[1]: Detected virtualization kvm.
root@sarri:~# docker --version
Docker version 29.1.0, build 360952c
root@sarri:~# docker info
Client: Docker Engine - Community
Version: 29.1.0
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.30.1
Path: /usr/libexec/docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.40.3
Path: /usr/libexec/docker/cli-plugins/docker-compose
Server:
Containers: 4
Running: 4
Paused: 0
Stopped: 0
Images: 4
Server Version: 29.1.0
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Using metacopy: false
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
CDI spec directories:
/etc/cdi
/var/run/cdi
Swarm: inactive
Runtimes: io.containerd.runc.v2 runc
Default Runtime: runc
Init Binary: docker-init
containerd version: fcd43222d6b07379a4be9786bda52438f0dd16a1
runc version: v1.3.3-0-gd842d771
init version: de40ad0
Security Options:
apparmor
seccomp
Profile: builtin
cgroupns
Kernel Version: 6.1.0-41-amd64
Operating System: Debian GNU/Linux 12 (bookworm)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 960.4MiB
Name: sarri.intari.net
ID: ad391b35-c3a6-4cdf-a944-95107a1b5b9e
Docker Root Dir: /var/lib/docker
Debug Mode: false
Experimental: false
Insecure Registries:
::1/128
127.0.0.0/8
Live Restore Enabled: false
Firewall Backend: iptables
root@sarri:~# docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 4 4 1.555GB 1.555GB (100%)
Containers 4 4 4.356kB 0B (0%)
Local Volumes 0 0 0B 0B
Build Cache 0 0 0B 0B
root@sarri:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0a24408919fd traefik:v3.6.8 "/entrypoint.sh --co…" 2 hours ago Up 2 hours traefik
65e308e4d405 fosrl/gerbil:1.3.0 "/entrypoint.sh --re…" 2 hours ago Up 2 hours 0.0.0.0:25->25/tcp, [::]:25->25/tcp, 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp, 0.0.0.0:21820->21820/udp, [::]:21820->21820/udp, 0.0.0.0:3118-3119->3118-3119/tcp, [::]:3118-3119->3118-3119/tcp, 0.0.0.0:51820->51820/udp, [::]:51820->51820/udp gerbil
2997b9424b2d fosrl/pangolin:ee-latest "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) pangolin
root@sarri:~# docker container ls -a --format "table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Ports}}"
NAMES IMAGE STATUS PORTS
traefik traefik:v3.6.8 Up 2 hours
gerbil fosrl/gerbil:1.3.0 Up 2 hours 0.0.0.0:25->25/tcp, [::]:25->25/tcp, 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp, 0.0.0.0:21820->21820/udp, [::]:21820->21820/udp, 0.0.0.0:3118-3119->3118-3119/tcp, [::]:3118-3119->3118-3119/tcp, 0.0.0.0:51820->51820/udp, [::]:51820->51820/udp
pangolin fosrl/pangolin:ee-latest Up 2 hours (healthy)
root@sarri:~# docker images
i Info → U In Use
IMAGE ID DISK USAGE CONTENT SIZE EXTRA
amnezia-awg:latest f780740a359d 26.8MB 0B
fosrl/gerbil:1.3.0 5fe045b02895 24.3MB 0B U
fosrl/pangolin:ee-latest 04f047cf2512 1.33GB 0B U
traefik:v3.6.8 3de33707981b 186MB 0B U
root@sarri:~# docker image ls --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"
REPOSITORY TAG SIZE
fosrl/pangolin ee-latest 1.33GB
traefik v3.6.8 186MB
fosrl/gerbil 1.3.0 24.3MB
root@sarri:~# docker network ls
NETWORK ID NAME DRIVER SCOPE
f1425979ba3a bridge bridge local
3b2ccec9aaa6 host host local
c5b651d619fe none null local
855d771dc17b pangolin bridge local
root@sarri:~#
```
It doesn't matter whether it's personal EE or not; this is a Docker-based setup.
@intari commented on GitHub (Feb 28, 2026):
I was able to fix this issue in my specific setup by ... requesting a RAM increase for Pangolin's VM from 1 GB to 2 GB. Maybe Pangolin just OOMs?
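The OOM hypothesis can be checked directly rather than guessed at. A diagnostic sketch (the container name `pangolin` is assumed from the `docker ps` output earlier in this thread):

```shell
# Look for kernel OOM-killer activity since boot
dmesg -T | grep -iE 'oom|out of memory'

# Ask Docker whether this container was ever OOM-killed
docker inspect --format '{{.State.OOMKilled}}' pangolin
```

With ~960 MiB total and only ~92 MiB available in the `free -h` output above, memory pressure is plausible even without a hard OOM kill.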
@ErroneousBosch commented on GitHub (Mar 1, 2026):
I am on Docker, and I am not currently limiting resources on those containers.
@Elsuuxi commented on GitHub (Mar 7, 2026):
Same as @intari
I upgraded the RAM to 2GB and the issue has been resolved.
@intari commented on GitHub (Mar 7, 2026):
@ErroneousBosch I also didn't limit resources in docker-compose, only hypervisor limits in hosting's admin panel.
@Viceman256 commented on GitHub (Mar 7, 2026):
Not for me. My server has 4 GB available and never peaks past 2 GB.
@ErroneousBosch commented on GitHub (Mar 7, 2026):
I restarted my stack with port 3001 exposed on the Pangolin container; we'll see if it makes a difference. Usually you don't need that for internal Docker network calls like these.
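The comment above is right that published ports aren't needed for container-to-container traffic on a shared bridge. A quick way to verify reachability on the internal network (a sketch; the network name `pangolin` and container name `traefik` are taken from the `docker network ls`/`docker ps` output in this thread, the hostname from the original error message, and the Traefik image's busybox `wget` is assumed to be available):

```shell
# Confirm both containers are attached to the same user-defined network
docker network inspect pangolin --format '{{range .Containers}}{{.Name}} {{end}}'

# From inside the traefik container, fetch the config endpoint directly
docker exec traefik wget -qO- -T 10 http://pangolin-app:3001/api/v1/traefik-config | head -c 200
```

If the second command hangs or times out, the problem is networking or an overloaded backend, not Traefik itself.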
@github-actions[bot] commented on GitHub (Mar 22, 2026):
This issue has been automatically marked as stale due to 14 days of inactivity. It will be closed in 14 days if no further activity occurs.
@github-actions[bot] commented on GitHub (Apr 5, 2026):
This issue has been automatically closed due to inactivity. If you believe this is still relevant, please open a new issue with up-to-date information.
@m-elsharkawi commented on GitHub (Apr 13, 2026):
I found what was causing a similar issue for me. Docker bridge networking on the host was broken by stale nftables raw PREROUTING rules left behind after the outage and network recreation.
Symptoms were:
In my case, `pangolin` and `gerbil` were on the same bridge, but `nft list ruleset` showed old rules still dropping traffic. Those rules were stale and no longer matched the current bridge.
What fixed it:
1. Identify the stale drop rules for the affected container IPs.
2. Delete the bad rules by handle.
In my case, removing the stale rules immediately restored container-to-container connectivity and Pangolin started working again.
A Docker restart alone did not remove those stale rules, so checking `nft` directly was the key.
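The two steps above can be sketched with concrete `nft` commands (assuming an `ip raw` table with a `PREROUTING` chain, as described in the comment; the table/chain names and the handle number are examples and will differ per host):

```shell
# 1. List the ruleset with rule handles so stale drop rules can be identified
nft -a list ruleset | grep -iE 'prerouting|drop'

# 2. Delete a stale rule by its handle (example: handle 42)
nft delete rule ip raw PREROUTING handle 42
```

Verify afterwards with `nft list ruleset` and a container-to-container request; unlike a Docker restart, deleting the rules by handle removes them for good.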