[GH-ISSUE #1784] A single Docker label error prevents all from updating #3946

Open
opened 2026-04-20 08:16:27 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @sippeangelo on GitHub (Oct 30, 2025).
Original GitHub issue: https://github.com/fosrl/pangolin/issues/1784

Originally assigned to: @oschwartz10612 on GitHub.

Describe the Bug

I've been using the Docker socket functionality and it's really cool! However, it seems that if ANY Docker container has a label that Pangolin doesn't like, it prevents all other Docker services from being parsed by Pangolin.

I was updating a port number on an unrelated service (service-2), and I was puzzled why the port number wouldn't update in Pangolin even though I changed the label and brought the service up again. After a while I found this in the Pangolin log:

```
2025-10-30T18:47:36+00:00 [error]: Failed to update database from config: Error: Validation error: Required at "proxy-resources.service-2.targets[0].port"
```

The problem seemed to be that the `port` label NEEDED to be present on `service-2` (I'm guessing because the container didn't expose a port that Pangolin could pick up automatically?), but this error also prevented `service-1` from updating. Once I added the `port` label to `service-2` and the Pangolin log was error free, `service-1` updated.

Environment

  • Pangolin Version: v1.11.1
  • Gerbil Version:
  • Traefik Version: v3.3.3
  • Newt Version:
  • Olm Version: (if applicable)

To Reproduce

See description

Expected Behavior

Unrelated label configuration errors shouldn't affect other services.

GiteaMirror added the Improvement, config labels 2026-04-20 08:16:27 -05:00

@AstralDestiny commented on GitHub (Nov 4, 2025):

Can you share your exact labels? Redact any secrets, but keep the labels as intact as possible.


@sippeangelo commented on GitHub (Nov 5, 2025):

Here's a minimal repro:

`testservice` is a Docker image that doesn't `EXPOSE` any port. If I leave the Pangolin port label commented out on `testservice`, I receive the error below in the Pangolin logs, and any port update I make to the `whoami` service is ignored by Pangolin until the "error" on `testservice` is fixed.

```
[error]: Failed to update database from config: Error: Validation error: Required at "proxy-resources.testservice.targets[0].port"
```

I haven't tested whether this applies to other error-causing labels, but it feels likely.

./docker-compose.yml

```
services:
  whoami:
    image: containous/whoami
    labels:
      - pangolin.proxy-resources.whoami.name=whoami
      - pangolin.proxy-resources.whoami.full-domain=whoami.example.com
      - pangolin.proxy-resources.whoami.protocol=http
      - pangolin.proxy-resources.whoami.targets[0].method=http
      - pangolin.proxy-resources.whoami.targets[0].port=80
    restart: unless-stopped

  testservice:
    build: ./testservice
    labels:
      - pangolin.proxy-resources.testservice.name=test
      - pangolin.proxy-resources.testservice.full-domain=test.example.com
      - pangolin.proxy-resources.testservice.protocol=http
      - pangolin.proxy-resources.testservice.targets[0].method=http
      # - pangolin.proxy-resources.testservice.targets[0].port=80
    restart: unless-stopped
```

./testservice/Dockerfile

```
FROM alpine:latest
CMD while true; do echo "Hello world"; sleep 1; done
```

@oschwartz10612 commented on GitHub (Nov 8, 2025):

Closing this because it is by design. Perhaps it is confusing as you mention in this post but all of the labels are added together each time it is applied in newt. This means that the whole docker compose is the blueprint and blueprints fail if any one part of it is bad to prevent undefined behavior. It would not be possible to silo containers I think because technically you could define things for another resource on a container because it is just referenced by ID so we have to join them all together.

Open to suggestions if this feels incorrect.
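For context on that merging step, here is a minimal sketch (in TypeScript, with illustrative names only; the real newt code differs) of why labels from every container have to be joined into one blueprint keyed by resource ID:

```typescript
// Sketch: labels from all containers are merged into a single blueprint
// object keyed by resource ID. Because any container may carry labels for
// any resource ID, the maps must be joined before validation.
// mergeBlueprint is a hypothetical helper, not newt's actual API.
type Labels = Record<string, string>;

function mergeBlueprint(containers: { labels: string[] }[]): Record<string, Labels> {
  const blueprint: Record<string, Labels> = {};
  const prefix = "pangolin.proxy-resources.";
  for (const container of containers) {
    for (const label of container.labels) {
      const eq = label.indexOf("=");
      if (eq < 0 || !label.startsWith(prefix)) continue;
      // path is "<resource-id>.<key...>", e.g. "whoami.targets[0].port"
      const path = label.slice(prefix.length, eq);
      const dot = path.indexOf(".");
      if (dot < 0) continue;
      const id = path.slice(0, dot);
      const key = path.slice(dot + 1);
      if (!blueprint[id]) blueprint[id] = {};
      blueprint[id][key] = label.slice(eq + 1);
    }
  }
  return blueprint;
}
```

Since the merged object is then validated as one document, a single invalid entry anywhere in it fails the whole parse, which is the behavior the issue describes.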


@sippeangelo commented on GitHub (Nov 10, 2025):

> Closing this because it is by design. Perhaps it is confusing as you mention in this post but all of the labels are added together each time it is applied in newt. This means that the whole docker compose is the blueprint and blueprints fail if any one part of it is bad to prevent undefined behavior. It would not be possible to silo containers I think because technically you could define things for another resource on a container because it is just referenced by ID so we have to join them all together.
>
> Open to suggestions if this feels incorrect.

The fact that completely unrelated configuration mistakes can bring down ALL unrelated services on the same Docker socket is really unfortunate and prevents this feature from being used in any realistic production scenario.

I get that we can't completely silo them, but I don't see why one configuration mistake in an unrelated, manually namespaced label needs to bring down all the others! The fault is especially insidious because there is no clear way to see these errors without checking the Pangolin logs, so it might not be discovered until much later, when you wonder why some other updated service is suddenly just down.


@oschwartz10612 commented on GitHub (Nov 22, 2025):

I'll take a deeper look.


@LunarECL commented on GitHub (Mar 8, 2026):

Found the root cause. ConfigSchema.safeParse() in applyBlueprint.ts validates all resources at once, so one bad resource (like a missing port on a container with no EXPOSE) takes down the whole blueprint. Everything else that's perfectly fine fails too.

Going to add per-resource validation on the Docker blueprint side so we can just skip the broken ones with a warning instead of blowing up entirely. Cross-resource checks stay as-is. Will throw up a draft PR soon.
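The per-resource approach can be sketched like this (a stand-in validator replaces the real Zod `ConfigSchema`; all names here are illustrative, not the actual Pangolin code):

```typescript
// Sketch: validate each proxy resource independently, so one broken entry is
// skipped with a warning instead of failing the entire blueprint.
interface Target { method?: string; port?: number; }
interface Resource { name?: string; targets: Target[]; }

// Stand-in for a per-resource schema check (the real code would use Zod's
// safeParse on a single-resource schema).
function validateResource(id: string, res: Resource): string[] {
  const errors: string[] = [];
  res.targets.forEach((t, i) => {
    if (t.port === undefined) {
      errors.push(`Required at "proxy-resources.${id}.targets[${i}].port"`);
    }
  });
  return errors;
}

// Keep only the resources that validate; collect a warning per invalid one.
function filterValidResources(
  resources: Record<string, Resource>
): { valid: Record<string, Resource>; warnings: string[] } {
  const valid: Record<string, Resource> = {};
  const warnings: string[] = [];
  for (const [id, res] of Object.entries(resources)) {
    const errors = validateResource(id, res);
    if (errors.length === 0) valid[id] = res;
    else warnings.push(...errors);
  }
  return { valid, warnings };
}
```

With this shape, the `testservice` entry from the repro would be dropped with a logged warning while `whoami` still applies, and cross-resource checks can still run afterwards on the surviving set.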


Reference: github-starred/pangolin#3946