[GH-ISSUE #1645] Pangolin resources are overridden when the full-domain is the base domain via blueprint #1969

Closed
opened 2026-04-16 08:52:39 -05:00 by GiteaMirror · 8 comments

Originally created by @wzsanders on GitHub (Oct 9, 2025).
Original GitHub issue: https://github.com/fosrl/pangolin/issues/1645

Originally assigned to: @oschwartz10612 on GitHub.

Describe the Bug

When setting full-domain to a registered base domain (see below) via Pangolin Docker blueprints, the addresses of other previously applied Docker blueprints are rewritten to the base domain, even if a resource was previously listed as subdomain.base.domain.
If the full-domain below is a subdomain instead, this works fine and the entry is created as expected.

```
labels:
  - pangolin.proxy-resources.basedomain.name=basedomain
  - pangolin.proxy-resources.basedomain.full-domain=base.domain
  - pangolin.proxy-resources.basedomain.protocol=http
  - pangolin.proxy-resources.basedomain.targets[0].method=https
  - pangolin.proxy-resources.basedomain.targets[0].hostname=host.docker.internal
  - pangolin.proxy-resources.basedomain.targets[0].port=3001
```

Adding the resource via the Pangolin Web UI and leaving the subdomain blank results in this configuration working with no issues. It's just the addition of the new resource when coming from blueprints.

When adding via blueprint, the TCP proxy is never started:

```
pangolin  | 2025-10-08T21:41:04.086Z [error]: Failed to update database from config: Error: Resource already exists: sylver.lab.redacted in org redacted
```

When adding via the Web UI:

```
newt      | INFO: 2025/10/08 21:45:07 Blueprint applied successfully!
newt      | INFO: 2025/10/08 21:45:22 Started tcp proxy to host.docker.internal:3001
```

Environment

  • OS Type & Version: Synology DSM 7.3
  • Pangolin Version: 1.10.1
  • Gerbil Version: 1.2.1
  • Traefik Version: 3.4.0
  • Newt Version: 1.5.1

To Reproduce

  1. Deploy a self-hosted node as follows: https://docs.digpangolin.com/self-host/manual/docker-compose
    Follow the guide and add the relevant configuration for your DNS resolver implementation, etc. Ensure that you add the blueprint labels.
```
services:
  pangolin:
    image: fosrl/pangolin:latest # https://github.com/fosrl/pangolin/releases
    container_name: pangolin
    restart: always
    volumes:
      - /volume1/encrypted_share/docker/pangolin/config:/app/config
      - pangolin-data:/var/certificates
      - pangolin-data:/var/dynamic
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3001/api/v1/"]
      interval: "3s"
      timeout: "3s"
      retries: 30
    labels:
      # Proxy Resource Configuration
      - pangolin.proxy-resources.x.name=x
      - pangolin.proxy-resources.x.full-domain=base.domain
      - pangolin.proxy-resources.x.protocol=http
      # Target Configuration - the port and hostname will be auto-detected
      - pangolin.proxy-resources.x.targets[0].method=https
      - pangolin.proxy-resources.x.targets[0].hostname=host.docker.internal
      - pangolin.proxy-resources.x.targets[0].port=3001

  gerbil:
    image: fosrl/gerbil:latest # https://github.com/fosrl/gerbil/releases
    container_name: gerbil
    restart: always
    depends_on:
      pangolin:
        condition: service_healthy
    command:
      - --reachableAt=http://gerbil:3003
      - --generateAndSaveKeyTo=/var/config/key
      - --remoteConfig=http://pangolin:3001/api/v1/
    volumes:
      - ./config/:/var/config
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    ports:
      - 51820:51820/udp
      - 21820:21820/udp
      - 443:443 # Port for traefik because of the network_mode
      - 80:80 # Port for traefik because of the network_mode

  traefik:
    image: traefik:v3.4.0
    container_name: traefik
    restart: always
    network_mode: service:gerbil # Ports appear on the gerbil service
    depends_on:
      pangolin:
        condition: service_healthy
    command:
      - --configFile=/etc/traefik/traefik_config.yml
    environment:
      CLOUDFLARE_DNS_API_TOKEN: "" # REPLACE WITH YOUR CLOUDFLARE API TOKEN. If you're using Cloudflare, make sure your API token has the Zone/Zone/Read and Zone/DNS/Edit permissions and applies to all zones.
    volumes:
      - ./config/traefik:/etc/traefik:ro # Volume to store the Traefik configuration
      - ./config/traefik/logs:/var/log/traefik # Volume to store the Traefik logs
      - ./config/letsencrypt:/letsencrypt # Volume to store the Let's Encrypt certificates
      # Shared volume for certificates and dynamic config in file mode
      - pangolin-data:/var/certificates:ro
      - pangolin-data:/var/dynamic:ro

  newt:
    image: fosrl/newt
    container_name: newt
    restart: unless-stopped
    extra_hosts:
      - "host.docker.internal:host-gateway" # Allow access to host services from containers
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - PANGOLIN_ENDPOINT=https://pangolin.domain
      - NEWT_ID=
      - NEWT_SECRET=
      - DOCKER_SOCKET=/var/run/docker.sock

networks:
  default:
    driver: bridge
    name: pangolin
    ipam:
      config:
        - subnet: 172.18.0.0/26

volumes:
  pangolin-data:
```
  2. Ensure that the host is listening / accepting connections on port 3001, as specified in the resource configuration (this is a web server in my case).
  3. Bring the compose file up.

Expected Behavior

Pangolin should add the resource with the base domain and empty subdomain and not change all other blueprint addresses to the base domain.

i.e., a new resource appears with the base domain as the address to access from, and other resources have the subdomain in front of it.

GiteaMirror added the needs investigating and bug labels 2026-04-16 08:52:39 -05:00

@github-actions[bot] commented on GitHub (Oct 24, 2025):

This issue has been automatically marked as stale due to 14 days of inactivity. It will be closed in 14 days if no further activity occurs.


@omarahmed786 commented on GitHub (Nov 3, 2025):

I applied blueprints through the frontend UI of Pangolin and am also having this issue. The resources sometimes reset to the base domain of example.com instead of service.example.com.

Edit: This occurs whenever the pangolin container is rebuilt!


@tullisar commented on GitHub (Nov 9, 2025):

I am experiencing this as well with my docker label blueprints. When I recreate and re-deploy the Pangolin stack from my compose setup, all of the resources that use blueprints revert to https://example.com instead of https://subdomain.example.com. Restarting the affected containers seems to cause newt/pangolin to fix the issue. I haven't dug into the proxy resources code yet, but since it happens when recreating Pangolin, it seems like there might be a disconnect between what's in the database and what Pangolin gets from a newt client when it applies a new template after detecting a container reboot from an affected container.

Edit: This may be the culprit here:

https://github.com/fosrl/pangolin/blob/d38b321f85ffdf7fc2ea22ed0f4d518d3baed12c/server/setup/copyInConfig.ts#L178-L183

For domains with the wildcard type, I think the subdomain isn't stored in the database. When updating proxy resources, the subdomain ends up being null from this line:

https://github.com/fosrl/pangolin/blob/d38b321f85ffdf7fc2ea22ed0f4d518d3baed12c/server/lib/blueprints/proxyResources.ts#L1073-L1079

As a result, when the setup code runs, it looks like those resources don't have a subdomain stored with them, so the full domain gets replaced with the base domain. I don't have a build environment for Pangolin right now, so I can't explore this further at the moment. If work isn't too busy this week I might see about testing the theory out.
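The theory above can be illustrated with a minimal TypeScript sketch (hypothetical names, not the actual Pangolin code): if the setup path rebuilds each resource's full domain from the stored subdomain plus the org's base domain, a subdomain persisted as `null` collapses every such resource to the bare base domain.

```typescript
// Hypothetical illustration of the suspected code path; `Resource` and
// `rebuildFullDomain` are assumed names, not Pangolin's real API.
type Resource = { subdomain: string | null };

function rebuildFullDomain(resource: Resource, baseDomain: string): string {
  // If the subdomain was never persisted (e.g. for wildcard domain types),
  // this silently collapses "service.example.com" to "example.com".
  return resource.subdomain
    ? `${resource.subdomain}.${baseDomain}`
    : baseDomain;
}

// A resource created from a blueprint whose subdomain ended up null in the DB:
const fromBlueprint: Resource = { subdomain: null };
// A resource whose subdomain was stored normally:
const fromUi: Resource = { subdomain: "service" };

console.log(rebuildFullDomain(fromBlueprint, "example.com")); // "example.com" — subdomain lost
console.log(rebuildFullDomain(fromUi, "example.com"));        // "service.example.com"
```

Under this assumption, the fix would be to keep the stored full domain (or persist the subdomain) for wildcard-type domains rather than reconstructing it from a null subdomain.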


@omarahmed786 commented on GitHub (Nov 10, 2025):

I just updated my pangolin and it happened again. Fortunately I have a blueprint saved with ALL of my resources so I just reapply every time the container is recreated. Would love for this to be fixed!


@michimussato commented on GitHub (Nov 13, 2025):

I also experience issues after adding Resources using Blueprints as well as Docker Compose Labels.

Documented here: https://github.com/fosrl/pangolin/issues/1709#issuecomment-3529004161


@alexandrescieux commented on GitHub (Dec 6, 2025):

Just encountered this after upgrading from pangolin:postgresql-1.12.2 to pangolin:postgresql-1.12.3.

I use a dozen blueprints applied through the UI that have "full-domain: subdomain.mydomain.net", and after upgrading every resource created via blueprint has "https://mydomain.net" instead of "https://subdomain.mydomain.net" in Resources > [Resource] > General > Domain.

Reapplying blueprints fixes the issue.
Resources created manually outside blueprints seem unaffected.
I do not use docker labels for the blueprints; applying only through the UI seems to be important to reproduce.


@omarahmed786 commented on GitHub (Dec 6, 2025):

> Just encountered this after upgrading from pangolin:postgresql-1.12.2 to pangolin:postgresql-1.12.3.
>
> I use a dozen blueprints applied through the UI that have "full-domain: subdomain.mydomain.net", and after upgrading every resource created via blueprint has "https://mydomain.net" instead of "https://subdomain.mydomain.net" in Resources > [Resource] > General > Domain.
>
> Reapplying blueprints fixes the issue. Resources created manually outside blueprints seem unaffected. I do not use docker labels for the blueprints; applying only through the UI seems to be important to reproduce.

Same exact behavior for me. I now have to worry about reapplying my blueprint every time the container is stopped and started. I've had to turn off daily backups for Pangolin (to prevent it from being stopped and started), so I'm really hoping this bug is prioritized, as it currently makes my system much less reliable since I moved from Nginx Proxy Manager. Any system update on my Unraid server, or adding/removing drives, now requires an extra step.


@oschwartz10612 commented on GitHub (Dec 7, 2025):

@tullisar thanks for that; you made this easier to troubleshoot. I think it was indeed the problem, and I think this will be fixed by 2f8f1e263f69187d3ef20908b9f4385a51e7f7c6 in the next release.

Reference: github-starred/pangolin#1969