[GH-ISSUE #2134] Abnormal disk IO in version 1.13.1 #10838
Originally created by @asardaes on GitHub (Dec 21, 2025).
Original GitHub issue: https://github.com/fosrl/pangolin/issues/2134
Describe the Bug
I'm not sure if this is related to #2120, so I figured I would report it to be sure. My Pangolin container hasn't suffered from OOM kills, but it has been lagging significantly; I set the container's memory limit to 320M. When I looked at the disk metrics from my VPS, I saw something like this:

[Screenshot: VPS disk I/O metrics showing a couple of spikes]
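For context, a memory cap like the one described above is usually expressed in the compose file. The excerpt below is only a minimal sketch, assuming a docker-compose deployment; the service name, image tag, and exact keys are assumptions, not taken from the actual setup:

```yaml
# docker-compose.yml -- hypothetical excerpt illustrating a 320M hard memory cap
services:
  pangolin:
    image: fosrl/pangolin:1.13.1   # tag assumed from the version in the issue title
    mem_limit: 320m                # hard memory limit enforced by the container runtime
    # Compose Spec equivalent:
    # deploy:
    #   resources:
    #     limits:
    #       memory: 320M
```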
Environment
Crowdsec is not running on the VM hosting Pangolin.
To Reproduce
You'll notice a couple spikes in the screenshot I posted. In this experiment I did the following:
I wonder if the memory constraints from Pangolin's container make it flush caches continuously and reload them from disk.
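One way to test that hypothesis, assuming a docker-compose deployment, is to raise the limit temporarily and watch whether the lag and read activity go away. A minimal override sketch (file name and values are illustrative only):

```yaml
# docker-compose.override.yml -- hypothetical test override, merged automatically by `docker compose up`
services:
  pangolin:
    mem_limit: 1g   # temporarily well above the 320m cap; revert once the test is done
```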
Expected Behavior
Sane disk IO.
@asardaes commented on GitHub (Dec 21, 2025):
I think this isn't related to geo-blocking. I stopped all Newts and then the Pangolin stack to comment out `maxmind_db_path` from the YAML, and then started it again while running `iotop`. I'm not very familiar with `iotop`, so I'm not sure if it includes socket communications, but it reports gigabytes read during bootup? I only have info logs for Pangolin, but it needs a couple of minutes to load, surely because the disk IO is being throttled, though I don't know how much it's reading or from where.
And naturally Traefik can't reach Pangolin during this time
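For reference, the change described above amounts to commenting out a single key in Pangolin's YAML configuration. A rough sketch; the file name, key placement, and path are assumptions, and only the key name comes from this thread:

```yaml
# config.yml -- hypothetical excerpt
# Commenting the entry out disables the MaxMind lookup so its disk reads can be ruled out:
#
# maxmind_db_path: /var/config/GeoLite2-Country.mmdb
```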
@asardaes commented on GitHub (Dec 21, 2025):
Well, after starting containers one by one, it seems the main culprit is Traefik, but I don't really know why, although there's this warning in its documentation:
So I disabled watching for `dynamic_config.yml`, but that didn't make much of a difference. I then found this old Traefik issue that mentions memory constraints as the cause, so I gave the container more memory, and that helped a bit, I think, but only during periods of inactivity, so maybe the VM is simply too constrained for Traefik.
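For anyone trying the same thing: disabling the watch is an option of Traefik's file provider in the static configuration. A minimal sketch, assuming the dynamic configuration lives in a single file (paths are illustrative):

```yaml
# traefik.yml (static configuration) -- hypothetical excerpt
providers:
  file:
    filename: /etc/traefik/dynamic_config.yml   # the dynamic config file referenced above
    watch: false                                 # stop Traefik from watching the file for changes
```

Raising the Traefik container's memory limit can be done the same way as the `mem_limit` sketch shown earlier for Pangolin.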