[PR #304] fix: chunk large Docker container lists to prevent WebSocket message loss #1044

Open
opened 2026-04-19 14:28:23 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/fosrl/newt/pull/304
Author: @jaydeep-pipaliya
Created: 4/9/2026
Status: 🔄 Open

Base: main ← Head: fix/chunk-docker-container-messages


📝 Commits (2)

  • cf853b1 fix: chunk large Docker container lists to prevent WebSocket message loss
  • 82523a0 fix: add batchId to chunked container messages for concurrency safety

📊 Changes

1 file changed (+49 additions, -14 deletions)


📝 main.go (+49 -14)

📄 Description

What does this PR do?

Companion PR for fosrl/pangolin#2117 — Docker Container View not displaying when >20 containers are running.

Problem

gorilla/websocket.WriteJSON serializes the full container list into a single WebSocket text frame. With 55+ containers (each 1-5KB of JSON metadata), the resulting frame can be 55-275KB. Intermediary proxies (Traefik, nginx, Cloudflare tunnels) can silently drop or truncate frames this large, causing the pangolin server to never receive the container data.

Solution

Added sendContainerList() that automatically chunks large container lists:

// ≤15 containers: single message (backward compatible, zero behavior change)
{"containers": [...]}

// >15 containers: chunked with batch metadata
{"containers": [...], "chunkIndex": 0, "totalChunks": 4, "batchId": "a1b2c3d4"}
{"containers": [...], "chunkIndex": 1, "totalChunks": 4, "batchId": "a1b2c3d4"}
{"containers": [...], "chunkIndex": 2, "totalChunks": 4, "batchId": "a1b2c3d4"}
{"containers": [...], "chunkIndex": 3, "totalChunks": 4, "batchId": "a1b2c3d4"}
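A minimal sketch of what such a sender could look like in Go. The `sendContainerList` name and the 15-container threshold come from the PR; the `containerMessage` struct, the `send` callback (standing in for `websocket.Conn.WriteJSON`), and `newBatchID` (standing in for the existing `generateChainId` helper) are illustrative assumptions, not the PR's actual code:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// chunkSize matches the ≤15 threshold described in the PR.
const chunkSize = 15

// containerMessage mirrors the JSON shapes shown above; the struct
// name and field layout are assumptions, not the PR's actual types.
type containerMessage struct {
	Containers  []map[string]any `json:"containers"`
	ChunkIndex  *int             `json:"chunkIndex,omitempty"`
	TotalChunks *int             `json:"totalChunks,omitempty"`
	BatchID     string           `json:"batchId,omitempty"`
}

// newBatchID stands in for the generateChainId helper the PR reuses.
func newBatchID() string {
	b := make([]byte, 4)
	rand.Read(b)
	return hex.EncodeToString(b)
}

// sendContainerList sends small lists as a single plain message and
// splits larger lists into chunks tagged with chunkIndex, totalChunks,
// and a shared batchId. send abstracts the WebSocket write.
func sendContainerList(containers []map[string]any, send func(containerMessage) error) error {
	if len(containers) <= chunkSize {
		return send(containerMessage{Containers: containers})
	}
	batchID := newBatchID()
	total := (len(containers) + chunkSize - 1) / chunkSize
	for i := 0; i < total; i++ {
		start, end := i*chunkSize, (i+1)*chunkSize
		if end > len(containers) {
			end = len(containers)
		}
		idx, tot := i, total
		if err := send(containerMessage{
			Containers:  containers[start:end],
			ChunkIndex:  &idx,
			TotalChunks: &tot,
			BatchID:     batchID,
		}); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// 55 containers, as in the Problem section: 4 chunks of 15/15/15/10.
	containers := make([]map[string]any, 55)
	for i := range containers {
		containers[i] = map[string]any{"id": i}
	}
	sendContainerList(containers, func(m containerMessage) error {
		fmt.Printf("chunk %d/%d batch=%s size=%d\n",
			*m.ChunkIndex, *m.TotalChunks, m.BatchID, len(m.Containers))
		return nil
	})
}
```

Note the pointer fields with `omitempty`: for the ≤15 path the message marshals to exactly the plain `{"containers": [...]}` shape, preserving backward compatibility.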

Why batchId?

Two concurrent container sends can happen when a manual fetch request and a Docker event fire at the same time. Without batchId, their chunks would interleave and corrupt the accumulated data on the server. Each batch gets a unique ID (reusing the existing generateChainId helper) so the server can track and supersede batches correctly.

Why chunk size of 15?

Each container with full metadata (labels, ports, networks) serializes to 1-5KB, so 15 containers ≈ 15-75KB per WebSocket frame — generally under common proxy buffer defaults (Traefik: 64KB, nginx: 64KB). The threshold balances message count against frame size.

Changes

main.go:

  • Added sendContainerList() — chunks large lists, passes small lists through unchanged
  • Updated both call sites: initial fetch handler + Docker event monitor callback
  • Reuses existing generateChainId() for batch IDs — no new dependencies

Companion PR

Pangolin server: https://github.com/fosrl/pangolin/pull/2817

  • Chunk reassembly with batchId tracking, input validation, typed accumulator, 120s TTL on partial chunks

Testing

  • go build passes
  • ≤15 containers: identical to current behavior (single message, no metadata)
  • >15 containers: splits into chunks of 15 with chunkIndex/totalChunks/batchId
  • Concurrent sends: each gets unique batchId, server handles superseding

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-19 14:28:23 -05:00
Reference: github-starred/newt#1044