[GH-ISSUE #2364] Traefik cannot fetch configuration data (Client.Timeout exceeded while awaiting headers) #4094

Closed
opened 2026-04-20 08:32:18 -05:00 by GiteaMirror · 15 comments
Owner

Originally created by @pizzaandcheese on GitHub (Jan 28, 2026).
Original GitHub issue: https://github.com/fosrl/pangolin/issues/2364

Describe the Bug

When opening the "Request Logs" section the following error appears in the traefik logs:
ERR Provider error, retrying in 586.330963ms error="cannot fetch configuration data: do fetch request: Get \"http://pangolin-app:3001/api/v1/traefik-config\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" providerName=http
When the error appears in the logs, the dashboard also displays a modal stating: "Error: failed to filter logs"

On one of my larger instances the "Request Logs" page never loads and ends up locking up the dashboard for a while.
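For anyone triaging this: a quick way to check whether the provider endpoint itself is slow while the "Request Logs" page is loading is to time a request against it directly. A minimal Python sketch (the URL in the comment matches the endpoint from the Traefik error; adjust host/port for your setup):

```python
import time
import urllib.request

def measure(url: str, timeout: float = 5.0):
    """Return seconds taken to fetch url, or None if it errors or times out."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            pass
        return time.monotonic() - start
    except OSError:
        # Covers connection refused, DNS failure, and socket timeouts --
        # the same class of failure Traefik reports as Client.Timeout.
        return None

# Example (run from a container on the pangolin network, while the
# Request Logs page is open in another tab):
# measure("http://pangolin-app:3001/api/v1/traefik-config", timeout=5.0)
```

If this returns `None` (or a number close to Traefik's 5s poll timeout) only while the logs page is loading, that points at the API being blocked rather than at Traefik.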

Environment

  • OS Type & Version: Almalinux 9.7
  • Pangolin Version: 1.15.1
  • Gerbil Version: 1.3.0
  • Traefik Version: 3.6.7
  • Newt Version: 1.9.0
  • Olm Version: (if applicable)
  • Container Runtime: podman 5.6.0

To Reproduce

Open "Request Logs" section

https://github.com/user-attachments/assets/8680cd9c-1265-40d4-9bd3-8ed90324d1be

Expected Behavior

Request Logs appear without error
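One plausible mechanism (speculation, not a confirmed reading of Pangolin's code): if the expensive request-log query ties up the same server process that answers `/api/v1/traefik-config`, Traefik's poll times out exactly as shown in the log line above. The effect is easy to reproduce with a deliberately single-threaded toy server:

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/slow-logs":
            time.sleep(3)  # stand-in for an expensive request-log query
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence request logging
        pass

def demo():
    # HTTPServer (unlike ThreadingHTTPServer) handles one request at a time.
    server = HTTPServer(("127.0.0.1", 0), Handler)
    port = server.server_port
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Kick off the slow "request logs" query in the background.
    threading.Thread(
        target=urllib.request.urlopen,
        args=(f"http://127.0.0.1:{port}/slow-logs",),
        daemon=True,
    ).start()
    time.sleep(0.2)  # let the slow request occupy the server

    try:
        urllib.request.urlopen(f"http://127.0.0.1:{port}/config", timeout=1)
        result = "config served"
    except OSError:
        # Mirrors Traefik's "Client.Timeout exceeded while awaiting headers".
        result = "config timed out"
    server.shutdown()
    server.server_close()
    return result

print(demo())  # -> config timed out
```

Pangolin's real API is almost certainly not this naive, but the same starvation can occur whenever the log query saturates the event loop, a worker pool, or the database connection pool shared with the config endpoint.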

GiteaMirror added the stale label 2026-04-20 08:32:18 -05:00
Author
Owner

@pizzaandcheese commented on GitHub (Jan 28, 2026):

I am running everything with podman quadlets.

Here are my configs:

Pangolin Dashboard:

[Unit]
Description=Pangolin Container
After=network-online.target

[Container]
Image=docker.io/fosrl/pangolin:1.15
ContainerName=pangolin-app
AutoUpdate=registry
Pod=pangolin.pod
Network=pangolin.network
Volume=/var/lib/pangolin/config:/app/config:Z

[Service]
Restart=always
TimeoutStartSec=900

[Install]
WantedBy=default.target

Traefik:

[Unit]
Description=Traefik Container
After=network-online.target

[Container]
Image=docker.io/traefik:latest
ContainerName=pangolin-traefik
AddCapability=CAP_NET_BIND_SERVICE
AutoUpdate=registry
Pod=pangolin.pod
Network=pangolin-gerbil.container
Volume=/var/lib/pangolin/traefik:/etc/traefik:Z,ro
Volume=/var/lib/pangolin/letsencrypt:/letsencrypt:Z
Volume=traefik-plugin-storage:/plugins-storage:Z
Exec=--configFile=/etc/traefik/traefik_config.yml

[Service]
Restart=always

Gerbil:

[Unit]
Description=Gerbil Container
After=network-online.target

[Container]
Image=docker.io/fosrl/gerbil:latest
ContainerName=pangolin-gerbil
AutoUpdate=registry
Pod=pangolin.pod
Network=pangolin.network
PublishPort=51820:51820/udp
PublishPort=80:80
PublishPort=443:443
AddCapability=NET_ADMIN
AddCapability=SYS_MODULE
Sysctl=net.ipv4.ip_forward=1
Sysctl=net.ipv4.conf.all.src_valid_mark=1
Volume=/var/lib/pangolin/config:/var/config:Z
Exec=--reachableAt=http://pangolin-gerbil:3003 --generateAndSaveKeyTo=/var/config/key --remoteConfig=http://pangolin-app:3001/api/v1/gerbil/get-config --reportBandwidthTo=http://pangolin-app:3001/api/v1/gerbil/receive-bandwidth

[Service]
Restart=always
TimeoutStartSec=900

[Install]
WantedBy=default.target

Traefik Static Config:

api:
  insecure: true
  dashboard: true

providers:
  http:
    endpoint: "http://pangolin-app:3001/api/v1/traefik-config"
    pollInterval: "5s"
  file:
    filename: "/etc/traefik/dynamic_config.yml"

experimental:
  plugins:
    badger:
      moduleName: "github.com/fosrl/badger"
      version: "v1.2.0"
    bouncer:
      moduleName: "github.com/maxlerebourg/crowdsec-bouncer-traefik-plugin"
      version: "v1.4.4"
log:
  level: "INFO"
  format: "common"

certificatesResolvers:
  letsencrypt:
    acme:
      dnsChallenge:
        provider: <REDACTED>
      email: <REDACTED>
      storage: "/letsencrypt/acme.json"
      caServer: "https://acme-v02.api.letsencrypt.org/directory"
#      caServer: "https://acme-staging-v02.api.letsencrypt.org/directory"


entryPoints:
  web:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: "websecure"
          scheme: "https"
  websecure:
    address: ":443"
    transport:
      respondingTimeouts:
        readTimeout: "30m"
    http:
      tls:
        certResolver: "letsencrypt"

serversTransport:
  insecureSkipVerify: true
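Not a confirmed fix, but a mitigation worth trying: Traefik's HTTP provider also accepts a `pollTimeout` option (default 5s) alongside `pollInterval`, and raising it gives a busy Pangolin API more headroom before Traefik aborts with "Client.Timeout exceeded". The values below are illustrative; check the HTTP provider docs for your Traefik version:

```yaml
providers:
  http:
    endpoint: "http://pangolin-app:3001/api/v1/traefik-config"
    pollInterval: "15s"   # poll less aggressively while the API is under load
    pollTimeout: "30s"    # tolerate slow responses instead of aborting at the 5s default
```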

Traefik Dynamic Config:

http:
  middlewares:
    crowdsec:
      plugin:
        bouncer:
          enabled: true
          logLevel: INFO
          crowdsecAppsecEnabled: true
          crowdsecAppsecHost: "crowdsec-app:7422"
          crowdsecAppsecFailureBlock: true
          crowdsecAppsecUnreachableBlock: true
          crowdsecLapiHost: "crowdsec-app:8080"
          crowdsecLapiKey: <REDACTED>
          redisCacheEnabled: true
          redisCacheHost: "pangolin-redis:6379"

  routers:
    # Next.js router (handles everything except API and WebSocket paths)
    next-router:
      rule: "Host(<REDACTED>) && !PathPrefix(`/api/v1`)"
      service: next-service
      entryPoints:
        - websecure
      middlewares:
        - crowdsec
      tls:
        certResolver: letsencrypt

    # API router (handles /api/v1 paths)
    api-router:
      rule: "Host(<REDACTED>) && PathPrefix(`/api/v1`)"
      service: api-service
      entryPoints:
        - websecure
      middlewares:
        - crowdsec
      tls:
        certResolver: letsencrypt

    # WebSocket router
    ws-router:
      rule: "Host(<REDACTED>)"
      service: api-service
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt

  services:
    next-service:
      loadBalancer:
        servers:
          - url: "http://pangolin-app:3002" # Next.js server

    api-service:
      loadBalancer:
        servers:
          - url: "http://pangolin-app:3000" # API/WebSocket server

tcp:
  serversTransports:
    pp-transport-v1:
      proxyProtocol:
        version: 1
    pp-transport-v2:
      proxyProtocol:
        version: 2

Pangolin Config:

app:
  dashboard_url: <REDACTED>
  log_level: info
  save_logs: true
  telemetry:
    anonymous_usage: false
domains:
  <REDACTED>:
    base_domain: <REDACTED>
    cert_resolver: letsencrypt
    prefer_wildcard_cert: false
server:
  external_port: 3000
  internal_port: 3001
  next_port: 3002
  internal_hostname: pangolin-app
  session_cookie_name: p_session_token
  resource_access_token_param: p_token
  resource_access_token_headers:
    id: P-Access-Token-Id
    token: P-Access-Token
  resource_session_request_param: p_session_request
  secret: <REDACTED>
postgres:
  connection_string: <REDACTED>
traefik:
  cert_resolver: letsencrypt
  http_entrypoint: web
  https_entrypoint: websecure
  additional_middlewares:
    - crowdsec@file
gerbil:
  start_port: <REDACTED>
  base_endpoint: <REDACTED>
  use_subdomain: <REDACTED>
  block_size: <REDACTED>
  site_block_size: <REDACTED>
  subnet_group: <REDACTED>
rate_limits:
  global:
    window_minutes: 1
    max_requests: 100
flags:
  require_email_verification: true
  disable_signup_without_invite: true
  disable_user_create_org: true
  allow_raw_resources: true
  allow_base_domain_resources: true
Author
Owner

@ErroneousBosch commented on GitHub (Feb 4, 2026):

Seeing the same error running via Docker Compose on TrueNAS.

Author
Owner

@Viceman256 commented on GitHub (Feb 5, 2026):

Same issues with Newt in Docker for Windows.

Author
Owner

@github-actions[bot] commented on GitHub (Feb 20, 2026):

This issue has been automatically marked as stale due to 14 days of inactivity. It will be closed in 14 days if no further activity occurs.

Author
Owner

@ErroneousBosch commented on GitHub (Feb 20, 2026):

Still happening, not stale

Author
Owner

@intari commented on GitHub (Feb 21, 2026):

Still happens for me, even with EE.

system details:

root@sarri:~# uname -a
Linux sarri.intari.net 6.1.0-41-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.158-1 (2025-11-09) x86_64 GNU/Linux
root@sarri:~# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
root@sarri:~# hostnamectl
Static hostname: sarri.intari.net
Icon name: computer-vm
Chassis: vm 🖴
Machine ID: 4a22df7f51a9497ce4cb6e8c04f2813f
Boot ID: 24728e16214a442b9be764060a63fa3c
Virtualization: kvm
Operating System: Debian GNU/Linux 12 (bookworm)
Kernel: Linux 6.1.0-41-amd64
Architecture: x86-64
Hardware Vendor: Red Hat
Hardware Model: KVM
Firmware Version: 1.16.0-4.module_el8.9.0+3659+9c8643f3
root@sarri:~# uname -r
6.1.0-41-amd64
root@sarri:~# cat /proc/version
Linux version 6.1.0-41-amd64 (debian-kernel@lists.debian.org) (gcc-12 (Debian 12.2.0-14+deb12u1) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP PREEMPT_DYNAMIC Debian 6.1.158-1 (2025-11-09)
root@sarri:~# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
BIOS Vendor ID: Red Hat
Model name: QEMU Virtual CPU version 2.5+
BIOS Model name: RHEL 7.6.0 PC (i440FX + PIIX, 1996) CPU @ 2.0GHz
BIOS CPU family: 1
CPU family: 6
Model: 6
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 1
Stepping: 3
BogoMIPS: 5199.99
Flags: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 ht syscall nx lm rep_good nopl xtopology cpuid tsc_known_freq pni cx16 x2apic hypervisor lahf_lm cpuid_fault pti
Virtualization features:
Hypervisor vendor: KVM
Virtualization type: full
Caches (sum of all):
L1d: 64 KiB (2 instances)
L1i: 64 KiB (2 instances)
L2: 8 MiB (2 instances)
L3: 16 MiB (1 instance)
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerabilities:
Gather data sampling: Not affected
Indirect target selection: Mitigation; Aligned branch/return thunks
Itlb multihit: KVM: Mitigation: VMX unsupported
L1tf: Mitigation; PTE Inversion
Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Meltdown: Mitigation; PTI
Mmio stale data: Unknown: No mitigations
Reg file data sampling: Not affected
Retbleed: Not affected
Spec rstack overflow: Not affected
Spec store bypass: Vulnerable
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Srbds: Not affected
Tsa: Not affected
Tsx async abort: Not affected
Vmscape: Not affected
root@sarri:~# cat /proc/cpuinfo | head -20
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 6
model name : QEMU Virtual CPU version 2.5+
stepping : 3
microcode : 0x1
cpu MHz : 2599.996
cache size : 16384 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 ht syscall nx lm rep_good nopl xtopology cpuid tsc_known_freq pni cx16 x2apic hypervisor lahf_lm cpuid_fault pti
root@sarri:~# free -h
total used free shared buff/cache available
Mem: 960Mi 867Mi 72Mi 2.3Mi 165Mi 92Mi
Swap: 0B 0B 0B
root@sarri:~# cat /proc/meminfo | grep -E "MemTotal|MemFree|MemAvailable"
MemTotal: 983488 kB
MemFree: 73740 kB
MemAvailable: 95352 kB
root@sarri:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 462M 0 462M 0% /dev
tmpfs 97M 832K 96M 1% /run
/dev/vda2 20G 5.8G 13G 31% /
tmpfs 481M 0 481M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
overlay 20G 5.8G 13G 31% /var/lib/docker/overlay2/3b3e34ddf008f62d92bb64f79f528ce66603503c1232f7a84c249fea65ffec49/merged
overlay 20G 5.8G 13G 31% /var/lib/docker/overlay2/a9d773c2e97e990499716c3b0df9fdc4786e7556c616f76f609fe3c3e8efaf22/merged
overlay 20G 5.8G 13G 31% /var/lib/docker/overlay2/ee9ec06245f0671244df180d9089d17dacde0fa5e26f23963f0036c4af54e7b5/merged
overlay 20G 5.8G 13G 31% /var/lib/docker/overlay2/7df33bc536d2b10666dcd67e7bae4bc9d040a137f6d5ecb7d4ab5914dff08874/merged
tmpfs 97M 0 97M 0% /run/user/0
root@sarri:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sr0 11:0 1 1024M 0 rom
vda 254:0 0 20G 0 disk
├─vda1 254:1 0 1M 0 part
└─vda2 254:2 0 20G 0 part /
root@sarri:~# fdisk -l
Disk /dev/vda: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 268BA30D-8C1A-4C4C-9AB0-506F234B569C

Device Start End Sectors Size Type
/dev/vda1 2048 4095 2048 1M BIOS boot
/dev/vda2 4096 41940607 41936512 20G Linux filesystem
root@sarri:~# systemd-detect-virt
kvm
root@sarri:~# dmesg | grep -i virtual
[ 0.011780] Booting paravirtualized kernel on KVM
[ 0.175592] smpboot: CPU0: Intel QEMU Virtual CPU version 2.5+ (family: 0x6, model: 0x6, stepping: 0x3)
[ 0.175849] Performance Events: PMU not available due to virtualization, using software events only.
[ 1.035752] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
[ 1.039233] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input2
[ 2.048576] systemd[1]: Detected virtualization kvm.
root@sarri:~# lspci | grep -i virtual
root@sarri:~# cat /proc/cpuinfo | grep -i hypervisor
flags : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 ht syscall nx lm rep_good nopl xtopology cpuid tsc_known_freq pni cx16 x2apic hypervisor lahf_lm cpuid_fault pti
flags : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 ht syscall nx lm rep_good nopl xtopology cpuid tsc_known_freq pni cx16 x2apic hypervisor lahf_lm cpuid_fault pti
root@sarri:~# dmesg | grep -i kvm
[ 0.000000] DMI: Red Hat KVM, BIOS 1.16.0-4.module_el8.9.0+3659+9c8643f3 04/01/2014
[ 0.000000] Hypervisor detected: KVM
[ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[ 0.000001] kvm-clock: using sched offset of 16661692755771130 cycles
[ 0.000003] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
[ 0.011780] Booting paravirtualized kernel on KVM
[ 0.016251] kvm-guest: PV spinlocks enabled
[ 0.275098] clocksource: Switched to clocksource kvm-clock
[ 2.048576] systemd[1]: Detected virtualization kvm.
root@sarri:~# docker --version
Docker version 29.1.0, build 360952c
root@sarri:~# docker info
Client: Docker Engine - Community
Version: 29.1.0
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.30.1
Path: /usr/libexec/docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.40.3
Path: /usr/libexec/docker/cli-plugins/docker-compose

Server:
Containers: 4
Running: 4
Paused: 0
Stopped: 0
Images: 4
Server Version: 29.1.0
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Using metacopy: false
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
CDI spec directories:
/etc/cdi
/var/run/cdi
Swarm: inactive
Runtimes: io.containerd.runc.v2 runc
Default Runtime: runc
Init Binary: docker-init
containerd version: fcd43222d6b07379a4be9786bda52438f0dd16a1
runc version: v1.3.3-0-gd842d771
init version: de40ad0
Security Options:
apparmor
seccomp
Profile: builtin
cgroupns
Kernel Version: 6.1.0-41-amd64
Operating System: Debian GNU/Linux 12 (bookworm)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 960.4MiB
Name: sarri.intari.net
ID: ad391b35-c3a6-4cdf-a944-95107a1b5b9e
Docker Root Dir: /var/lib/docker
Debug Mode: false
Experimental: false
Insecure Registries:
::1/128
127.0.0.0/8
Live Restore Enabled: false
Firewall Backend: iptables

root@sarri:~# docker system info
Client: Docker Engine - Community
Version: 29.1.0
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.30.1
Path: /usr/libexec/docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.40.3
Path: /usr/libexec/docker/cli-plugins/docker-compose

Server:
Containers: 4
Running: 4
Paused: 0
Stopped: 0
Images: 4
Server Version: 29.1.0
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Using metacopy: false
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
CDI spec directories:
/etc/cdi
/var/run/cdi
Swarm: inactive
Runtimes: io.containerd.runc.v2 runc
Default Runtime: runc
Init Binary: docker-init
containerd version: fcd43222d6b07379a4be9786bda52438f0dd16a1
runc version: v1.3.3-0-gd842d771
init version: de40ad0
Security Options:
apparmor
seccomp
Profile: builtin
cgroupns
Kernel Version: 6.1.0-41-amd64
Operating System: Debian GNU/Linux 12 (bookworm)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 960.4MiB
Name: sarri.intari.net
ID: ad391b35-c3a6-4cdf-a944-95107a1b5b9e
Docker Root Dir: /var/lib/docker
Debug Mode: false
Experimental: false
Insecure Registries:
::1/128
127.0.0.0/8
Live Restore Enabled: false
Firewall Backend: iptables

root@sarri:~# docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 4 4 1.555GB 1.555GB (100%)
Containers 4 4 4.356kB 0B (0%)
Local Volumes 0 0 0B 0B
Build Cache 0 0 0B 0B

root@sarri:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0a24408919fd traefik:v3.6.8 "/entrypoint.sh --co…" 2 hours ago Up 2 hours traefik
65e308e4d405 fosrl/gerbil:1.3.0 "/entrypoint.sh --re…" 2 hours ago Up 2 hours 0.0.0.0:25->25/tcp, [::]:25->25/tcp, 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp, 0.0.0.0:21820->21820/udp, [::]:21820->21820/udp, 0.0.0.0:3118-3119->3118-3119/tcp, [::]:3118-3119->3118-3119/tcp, 0.0.0.0:51820->51820/udp, [::]:51820->51820/udp gerbil
2997b9424b2d fosrl/pangolin:ee-latest "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) pangolin
root@sarri:~# docker container ls -a --format "table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Ports}}"
NAMES IMAGE STATUS PORTS
traefik traefik:v3.6.8 Up 2 hours
gerbil fosrl/gerbil:1.3.0 Up 2 hours 0.0.0.0:25->25/tcp, [::]:25->25/tcp, 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp, 0.0.0.0:21820->21820/udp, [::]:21820->21820/udp, 0.0.0.0:3118-3119->3118-3119/tcp, [::]:3118-3119->3118-3119/tcp, 0.0.0.0:51820->51820/udp, [::]:51820->51820/udp
pangolin fosrl/pangolin:ee-latest Up 2 hours (healthy)
root@sarri:~# docker images
i Info → U In Use
IMAGE ID DISK USAGE CONTENT SIZE EXTRA
amnezia-awg:latest f780740a359d 26.8MB 0B
fosrl/gerbil:1.3.0 5fe045b02895 24.3MB 0B U
fosrl/pangolin:ee-latest 04f047cf2512 1.33GB 0B U
traefik:v3.6.8 3de33707981b 186MB 0B U
root@sarri:~# docker image ls --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"
REPOSITORY TAG SIZE
fosrl/pangolin ee-latest 1.33GB
traefik v3.6.8 186MB
fosrl/gerbil 1.3.0 24.3MB
root@sarri:~# docker network ls
NETWORK ID NAME DRIVER SCOPE
f1425979ba3a bridge bridge local
3b2ccec9aaa6 host host local
c5b651d619fe none null local
855d771dc17b pangolin bridge local
root@sarri:~#
It doesn't matter whether it's personal EE or not; this is a docker-based setup.

(healthy) pangolin root@sarri:~# root@sarri:~# docker container ls -a --format "table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Ports}}" NAMES IMAGE STATUS PORTS traefik traefik:v3.6.8 Up 2 hours gerbil fosrl/gerbil:1.3.0 Up 2 hours 0.0.0.0:25->25/tcp, [::]:25->25/tcp, 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp, 0.0.0.0:21820->21820/udp, [::]:21820->21820/udp, 0.0.0.0:3118-3119->3118-3119/tcp, [::]:3118-3119->3118-3119/tcp, 0.0.0.0:51820->51820/udp, [::]:51820->51820/udp pangolin fosrl/pangolin:ee-latest Up 2 hours (healthy) root@sarri:~# docker images i Info → U In Use IMAGE ID DISK USAGE CONTENT SIZE EXTRA amnezia-awg:latest f780740a359d 26.8MB 0B fosrl/gerbil:1.3.0 5fe045b02895 24.3MB 0B U fosrl/pangolin:ee-latest 04f047cf2512 1.33GB 0B U traefik:v3.6.8 3de33707981b 186MB 0B U root@sarri:~# docker image ls --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}" REPOSITORY TAG SIZE fosrl/pangolin ee-latest 1.33GB traefik v3.6.8 186MB fosrl/gerbil 1.3.0 24.3MB root@sarri:~# docker network ls NETWORK ID NAME DRIVER SCOPE f1425979ba3a bridge bridge local 3b2ccec9aaa6 host host local c5b651d619fe none null local 855d771dc17b pangolin bridge local root@sarri:~# `` it doesn't matter if it's personal EE or not. docker-based setup

@intari commented on GitHub (Feb 28, 2026):

I was able to fix this issue in my specific setup by ... requesting a RAM increase for Pangolin's VM from 1 GB to 2 GB. Maybe Pangolin just OOMs?
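Where memory graphs aren't available, the OOM hypothesis can be checked straight from the kernel log. A minimal sketch, assuming nothing beyond POSIX `sed`; the helper name `parse_oom_victims` and the sample log line are illustrative, not taken from this issue:

```shell
# parse_oom_victims: extract the names of processes the kernel OOM killer
# terminated from kernel log text. On a real host, feed it the output of
# `dmesg` or `journalctl -k` instead of the sample below.
parse_oom_victims() {
  # Modern kernels log: "Out of memory: Killed process <pid> (<name>) total-vm:..."
  sed -n 's/.*Out of memory: Killed process [0-9]* (\([^)]*\)).*/\1/p'
}

# Illustrative sample log (hypothetical); a hit like this would mean the
# Pangolin node process was OOM-killed.
sample='[   12.345678] Out of memory: Killed process 4321 (node) total-vm:2097152kB
[   99.999999] docker0: port 1(veth0) entered blocking state'

printf '%s\n' "$sample" | parse_oom_victims
```

On the affected host, `dmesg | parse_oom_victims` printing anything at all would mean the kernel really did kill a process for memory, which would fit the 1 GB → 2 GB fix.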

@ErroneousBosch commented on GitHub (Mar 1, 2026):

I am on Docker, not currently limiting resources on those containers

@Elsuuxi commented on GitHub (Mar 7, 2026):

Same as @intari: I upgraded the RAM to 2 GB and the issue has been resolved.

@intari commented on GitHub (Mar 7, 2026):

@ErroneousBosch I also didn't limit resources in docker-compose; the only limits were the hypervisor ones set in the hosting provider's admin panel.

@Viceman256 commented on GitHub (Mar 7, 2026):

> I was able to fix this issue in my specific setup by ... requesting a RAM increase for Pangolin's VM from 1 GB to 2 GB. Maybe Pangolin just OOMs?

Not for me. My server has 4 GB available and never peaks past 2 GB.

@ErroneousBosch commented on GitHub (Mar 7, 2026):

I restarted my stack with port 3001 on the Pangolin container exposed; we'll see if it makes a difference. Usually you don't need that for internal Docker network calls like these.
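Exposing 3001 shouldn't be needed for the provider call, so one way to narrow this down is to reproduce Traefik's own request from inside the traefik container. A sketch under assumptions: the container is named `traefik`, the service hostname is `pangolin-app` as in the error URL, and `curl` is available in the image (the official traefik image may only ship busybox `wget`, whose exit codes differ):

```shell
# classify: map a curl exit status to a human-readable verdict.
# curl exits 28 on "operation timed out", which is what Traefik's
# "Client.Timeout exceeded while awaiting headers" corresponds to.
classify() {
  case "$1" in
    0)  echo "reachable" ;;
    28) echo "timeout" ;;
    7)  echo "connection-refused" ;;
    *)  echo "other-failure" ;;
  esac
}

# probe: issue the same request Traefik's http provider makes, capped at 5s.
# (Hypothetical wrapper; adjust container and hostname to your compose file.)
probe() {
  docker exec traefik \
    curl -sS -o /dev/null -m 5 "http://pangolin-app:3001/api/v1/traefik-config"
  classify "$?"
}
```

`probe` printing `timeout` while the same URL works from the host would point at bridge-level filtering rather than Pangolin itself.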

@github-actions[bot] commented on GitHub (Mar 22, 2026):

This issue has been automatically marked as stale due to 14 days of inactivity. It will be closed in 14 days if no further activity occurs.

@github-actions[bot] commented on GitHub (Apr 5, 2026):

This issue has been automatically closed due to inactivity. If you believe this is still relevant, please open a new issue with up-to-date information.

@m-elsharkawi commented on GitHub (Apr 13, 2026):

I found what was causing a similar issue for me. Docker bridge networking on the host was broken by stale nftables raw PREROUTING rules left behind after the outage and network recreation.

Symptoms were:

* host -> container worked
* container name resolution worked
* but container -> container traffic on the same Docker bridge timed out

In my case, `pangolin` and `gerbil` were on the same bridge, but `nft list ruleset` showed old rules like these still dropping traffic:

```bash
ip daddr 172.18.0.2 iifname != "docker_gwbridge" drop
ip daddr 172.18.0.3 iifname != "docker_gwbridge" drop
```

Those rules were stale and no longer matched the current bridge.

What fixed it:

1. Inspect the nftables raw rules:

   ```bash
   sudo nft -a list chain ip raw PREROUTING
   ```

2. Identify the stale drop rules for the affected container IPs.

3. Delete the bad rules by handle:

   ```bash
   sudo nft delete rule ip raw PREROUTING handle <handle>
   ```

In my case, removing the stale rules immediately restored container-to-container connectivity and Pangolin started working again.

A Docker restart alone did not remove those stale rules, so checking `nft` directly was the key.
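The lookup step above can be scripted. A sketch: `stale_handles` is a hypothetical helper that filters `nft -a list chain ip raw PREROUTING` output down to the handles of drop rules for a given container IP; the sample input mirrors the rules quoted above, with illustrative handle numbers:

```shell
# stale_handles IP: print the handle of every rule in the input that drops
# traffic destined for exactly that IP (trailing space avoids matching
# longer addresses such as 172.18.0.20).
stale_handles() {
  awk -v ip="$1" 'index($0, "ip daddr " ip " ") && index($0, " drop ")' \
    | sed -n 's/.*# handle \([0-9]*\).*/\1/p'
}

# Sample `nft -a list chain ip raw PREROUTING` output (handles illustrative):
sample='		ip daddr 172.18.0.2 iifname != "docker_gwbridge" drop # handle 17
		ip daddr 172.18.0.3 iifname != "docker_gwbridge" drop # handle 18'

# On a real host, each printed handle would then feed into:
#   sudo nft delete rule ip raw PREROUTING handle <handle>
printf '%s\n' "$sample" | stale_handles 172.18.0.2
```

Double-check each printed rule with `nft -a list chain ip raw PREROUTING` before deleting; handles are only stable until the ruleset changes.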

Reference: github-starred/pangolin#4094