Pull Stack fails with internal server error #170

Closed
opened 2025-10-31 15:03:52 -05:00 by GiteaMirror · 17 comments
Owner

Originally created by @gorootde on GitHub (Jan 3, 2025).

I just deployed the komodo sqlite compose file without any changes. I added my first stack from a git repository, and when clicking the "Pull Images" button, the action is reported as 'Failed'.

This is the information shown in the UI:

ERROR: request to periphery failed | 500 Internal Server Error

TRACE:
	1: Stopped after repo pull failure

Docker logs for core and periphery container:

core-1       | 2025-01-03T01:35:07.696141Z  INFO ExecuteRequest{req_id=68855522-3a5a-41e7-949c-28f0a804cb20 user_id="6777212d6beee5d378584657" update_id="67773ecb65a9c6770557c183" request="PullStack"}: core::api::execute: /execute request 68855522-3a5a-41e7-949c-28f0a804cb20 | user: kolbm
periphery-1  | 2025-01-03T01:35:07.771063Z  WARN periphery::router: request f67d1b63-9a8b-46b8-bdb1-b23298ec4a7a | type: ComposePull | error: Stopped after repo pull failure
core-1       | 2025-01-03T01:35:07.771362Z  WARN ExecuteRequest{req_id=68855522-3a5a-41e7-949c-28f0a804cb20 user_id="6777212d6beee5d378584657" update_id="67773ecb65a9c6770557c183" request="PullStack"}: core::api::execute: /execute request 68855522-3a5a-41e7-949c-28f0a804cb20 error: request to periphery failed | 500 Internal Server Error: Stopped after repo pull failure
core-1       | 2025-01-03T01:35:07.771395Z  WARN core::api::execute: /execute request 68855522-3a5a-41e7-949c-28f0a804cb20 task error: request to periphery failed | 500 Internal Server Error: Stopped after repo pull failure
GiteaMirror added the bug, seen 👀 labels 2025-10-31 15:03:52 -05:00

@mbecker20 commented on GitHub (Jan 3, 2025):

Hm, does it work after you Deploy? There may be a case of incomplete error handling for https://docs.rs/komodo_client/latest/komodo_client/api/execute/struct.PullStack.html. It is supposed to show the reason in the Update message, rather than just saying "failed".

Does the compose file use relative file mounts? See https://github.com/mbecker20/komodo/discussions/180.

If this is not the case, I can take a look at your compose file to fix the issue in the short term.

@gorootde commented on GitHub (Jan 3, 2025):

No relative file mounts. It also doesn't work after I (re-)deploy it; it immediately fails without any reason given.

All other Git operations seem to be working fine (e.g. I tested editing the compose file in the web UI, and I can see the commit in Git).

compose file:

services:
    redis:
        image: redis:6.0
        restart: always
        networks:
            - paperless
    paperless-ngx:
        restart: always
        image: ghcr.io/paperless-ngx/paperless-ngx:latest
        labels:
            traefik.enable: true
        volumes:
            - /mnt/cache/appdata/paperless-ng/data:/usr/src/paperless/data
            - /mnt/cache/appdata/paperless-ng/scripts:/usr/src/paperless/scripts
            - /mnt/user/Paperless/media/:/usr/src/paperless/media
            - /mnt/cache/paperless-consume/:/usr/src/paperless/consume
            - /mnt/user/Paperless/export/:/usr/src/paperless/export
        env_file: .env
        environment:
            PAPERLESS_REDIS: redis://redis:6379
            PAPERLESS_OCR_LANGUAGE: deu
            PAPERLESS_OCR_LANGUAGES: eng
            PAPERLESS_OCR_USER_ARGS: '{"invalidate_digital_signatures": true}'
            PAPERLESS_FILENAME_FORMAT: '{{ created }}-{{ correspondent }}-{{ title }}'
            PAPERLESS_ENABLE_HTTP_REMOTE_USER: true
            PAPERLESS_CONSUMER_DELETE_DUPLICATES: true
            PAPERLESS_CONSUMER_ENABLE_ASN_BARCODE: true
            PAPERLESS_CONSUMER_BARCODE_SCANNER: ZXING
            PAPERLESS_CONSUMER_POLLING: 0
            PAPERLESS_CONSUMER_RECURSIVE: true
            PAPERLESS_CONSUMER_SUBDIRS_AS_TAGS: true
            PAPERLESS_TIKA_ENABLED: 1
            PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
            PAPERLESS_TIKA_ENDPOINT: http://tika:9998
            PAPERLESS_EMAIL_TASK_CRON: '*/5 * * * *'
            PUID: 99
            PGID: 100
            PAPERLESS_IGNORE_DATES: 
            
        networks:
            - ingress
            - paperless
    gotenberg:
        image: docker.io/gotenberg/gotenberg:8.7
        restart: always

        # The gotenberg chromium route is used to convert .eml files. We do not
        # want to allow external content like tracking pixels or even javascript.
        command:
            - "gotenberg"
            - "--chromium-disable-javascript=true"
            - "--chromium-allow-list=file:///tmp/.*"
        networks:
            - paperless
    tika:
        image: ghcr.io/paperless-ngx/tika:latest
        restart: always
        networks:
            - paperless
networks:
    paperless:
        driver: bridge
        internal: true
    ingress:
        external: true

And this is the stack:

[[stack]]
name = "paperless"
[stack.config]
server = "dockerhost"
poll_for_updates = true
run_directory = "_stacks/paperless"
file_paths = ["docker-compose.yml"]
git_provider = "gitserver"
git_account = "gorootde"
repo = "gorootde/docker-projects"
webhook_enabled = false
environment = """
PAPERLESS_SECRET_KEY="[[PAPERLESS_SECRET_KEY]]"
"""

@mbecker20 commented on GitHub (Jan 3, 2025):

If you click on the "Deploy" update, you see the full logs; can you send these?

[Screenshot 2025-01-03 at 12 41 59 PM]
[Screenshot 2025-01-03 at 12 42 10 PM]

@gorootde commented on GitHub (Jan 3, 2025):

Sure, here is a screenshot:

[screenshot]

@mbecker20 commented on GitHub (Jan 3, 2025):

That looks successful? And the stack appears to be running too. What is the issue?

@mbecker20 commented on GitHub (Jan 3, 2025):

I guess you mentioned it was PullStack that didn't work. What is the Update log showing for PullStack?

@gorootde commented on GitHub (Jan 3, 2025):

Yes, Pull Stack always fails (immediately!). This is all I can see:

[screenshot]

@mbecker20 commented on GitHub (Jan 3, 2025):

Ok, I see that this is pretty much what you said originally. Thanks for clarifying all those points. I believe this is a bug.

@mbecker20 commented on GitHub (Jan 3, 2025):

Just adding here: I cannot reproduce using systemd-managed Periphery (running outside a Docker container).

Basically the same setup, as far as I can see. Private Git-repo-based stack. And Pull is working.

[[stack]]
name = "komodo"
tags = ["komodo"]
[stack.config]
server = "basement"
links = ["https://komodo.bird.int"]
poll_for_updates = true
destroy_before_deploy = true
git_provider = "git.bird.int"
git_account = "mbecker20"
repo = "komodo/core"
webhook_force_deploy = true
extra_args = ["--build"]

[Screenshot 2025-01-03 at 2 31 49 PM]

@gorootde commented on GitHub (Jan 4, 2025):

Meanwhile I see my log getting flooded with messages similar to these:

2025-01-04T22:03:37.418490Z  WARN core::api::write::stack: Failed to pull latest images for Stack baseservices | request to periphery failed | 500 Internal Server Error: Stopped after repo pull failure
2025-01-04T22:08:37.314550Z  WARN core::api::write::stack: Failed to pull latest images for Stack paperless | request to periphery failed | 500 Internal Server Error: Stopped after repo pull failure
2025-01-04T22:08:37.412495Z  WARN core::api::write::stack: Failed to pull latest images for Stack baseservices | request to periphery failed | 500 Internal Server Error: Stopped after repo pull failure
2025-01-04T22:13:37.303158Z  WARN core::api::write::stack: Failed to pull latest images for Stack paperless | request to periphery failed | 500 Internal Server Error: Stopped after repo pull failure

Not sure if it is related, but after a few hours of these messages the web UI also stops responding. After I restart the core container, it works again for another few hours.

@mbecker20 commented on GitHub (Jan 4, 2025):

Komodo Core is actually 2 parts: the backend API and the client-side frontend. There is no server-side rendering, so the backend container doesn't actually dictate responsiveness on the UI side. Maybe with that knowledge you can get a better idea of what is happening there. For example, what happens on page refresh? If you check the frontend (browser) console logs / network tab, you may see API requests that are failing with a better reason.

In terms of this pull issue, can you try the systemd-managed Periphery agent? See here: https://github.com/mbecker20/komodo/tree/main/scripts. Even if you just use it temporarily; like I mentioned, this process is working for me, and the only difference I can see is that I don't run Periphery in a container.
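
To check whether the backend itself is still answering when the UI hangs, one option is to query the API directly. This is only a sketch: the host URL and credentials are placeholders, and the /read route, GetVersion request type, and X-Api-Key/X-Api-Secret headers are my reading of the komodo_client docs, so they may need adjusting.

```shell
# Hedged sketch: hit the Core API directly, bypassing the frontend.
# URL and credentials are placeholders; the request shape is an
# assumption based on the komodo_client documentation.
curl -s -X POST "https://your-komodo-host/read" \
  -H 'Content-Type: application/json' \
  -H "X-Api-Key: $KOMODO_API_KEY" \
  -H "X-Api-Secret: $KOMODO_API_SECRET" \
  -d '{"type": "GetVersion", "params": {}}'
```

If this returns promptly while the UI does not, the problem is likely on the frontend/browser side rather than in the backend container.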

@joehand commented on GitHub (Jan 15, 2025):

I'm getting some Pull Stack errors using systemd Periphery, but only on some stacks. I thought maybe it was the relative path, but I'm not sure changing things is fixing anything. Happy to see if I can debug. It's only happening on ~2 of my ~15 stacks, but I can't quite tell what is unique about them (they are on two separate servers as well; other pulls are fine on those servers).

The error is the same:

ERROR: request to periphery failed | 500 Internal Server Error

TRACE:
	1: Stopped after repo pull failure

The API requests all return 200; this is all I see with the pull failure:

id	"6787c472f872a34ce7c594cf"
operation	"PullStack"
start_ts	1736950898114
success	false
username	"joe"
operator	"676ad2feffb71ad389815400"
target	Object { type: "Stack", id: "677308866a31a349bd2a6e2f" }
type	"Stack"
id	"677308866a31a349bd2a6e2f"
status	"Complete"

This is all I see in journalctl:

Jan 15 06:33:59 host sh[3736722]: 2025-01-15T14:33:59.628516Z  WARN periphery::router: request a7568fe9-74d8-4a6c-9cc3-f58a055fcd09 | type: ComposePull | error: Stopped after repo pull failure

Here is one pull that is failing, Outline. I use $DOCKER_DATA and other similar paths elsewhere, so I'm not sure that is it. I have quite a few other services on this server and they all pull fine.

services:
  outline:
    image: docker.getoutline.com/outlinewiki/outline:latest
    networks:
      - caddy
      - default
    env_file: ./.env
    expose:
      - 3000
    user: "${UID}:${GID}"
    volumes:
      - /mnt/shared/outline/file-storage:/var/lib/outline/data
    depends_on:
      - postgres
      - redis
    labels:
      caddy: "*.${DOMAIN}, ${DOMAIN}"
      caddy.@outline: host docs.${DOMAIN}
      caddy.handle: "@outline"
      caddy.handle.reverse_proxy: "{{ upstreams 3000 }}"

  redis:
    image: redis
    env_file: ./.env
    networks:
      - default
    expose:
      - 6379
    volumes:
      - redis_data:/redis/redis.conf
    command: [ "redis-server", "/redis/redis.conf" ]
    healthcheck:
      test: [ "CMD", "redis-cli", "ping" ]
      interval: 10s
      timeout: 30s
      retries: 3

  postgres:
    image: postgres
    env_file: ./.env
    networks:
      - default
    expose:
      - 5432
    volumes:
      -  $DOCKER_DATA/outline-db:/var/lib/postgresql/data
    healthcheck:
      test: [ "CMD", "pg_isready", "-d", "outline", "-U", "outline" ]
      interval: 30s
      timeout: 20s
      retries: 3
    environment:
      POSTGRES_USER: 'outline'
      POSTGRES_PASSWORD: ${OUTLINE_POSTGRES_PW}
      POSTGRES_DB: 'outline'

volumes:
  redis_data:

networks:
  caddy:
    name: caddy
    driver: overlay
    external: true

Stack config:

[[stack]]
name = "outline"
[stack.config]
server = "my-server"
auto_pull = false
run_build = true
run_directory = "stacks/outline"
git_account = "joehand"
repo = "joehand/my-repo"
environment = """
# Lots of env stuff removed, can add if relevant
"""

I'm able to pull directly fine.

cd /etc/komodo/stacks/outline/stacks/outline && docker compose -p outline -f compose.yaml pull
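
Since the Update log hides the underlying reason, another rough check (a sketch; the clone location is taken from the path above, and that Periphery effectively runs a git pull there is an assumption) is to run the repo pull step by hand and surface its exit code:

```shell
# Hedged sketch: try the repo pull step by hand and print the exit code.
# The path comes from this report; that Periphery effectively runs a
# `git pull` here is an assumption about its behavior.
STACK_DIR=/etc/komodo/stacks/outline
if git -C "$STACK_DIR" pull; then
    echo "repo pull ok"
else
    echo "repo pull failed (exit $?)"
fi
```

If the pull fails here too, the git error message should reveal the actual cause (auth, a diverged branch, or permissions on the clone directory).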

The other stack failing is my main Komodo server stack. It's a bit more complex, so I'm not sure where to start with that. But here are the compose files.

Details

main.compose.yaml

include:
  - caddy/compose.yaml
  - pocket-id/compose.yaml
  - komodo-core/compose.yaml

networks:
  caddy:
    name: caddy
    driver: overlay
    attachable: true

caddy/compose.yaml

services:
  caddy:
    container_name: caddy-core
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - 80:80
      - 443:443
    env_file: $PWD/.env
    networks:
      - caddy
      - default
    volumes:
      - ${MNT_SERV_COMMON:?error}:/data
      - $PWD/caddy:/config/caddy
    restart: unless-stopped
    labels:
      komodo.skip: # Prevent Komodo from stopping with StopAllContainers
      # bunch of caddy labels removed

  oauth2-proxy:
    container_name: oauth2-proxy-core
    image: quay.io/oauth2-proxy/oauth2-proxy:latest
    command: --config /oauth2-proxy.cfg --client-secret ${CADDY_OIDC_CLIENT_SECRET:?error} --cookie-secret ${CADDY_COOKIE_SECRET:?error}
    volumes:
      - $PWD/oauth2-proxy.cfg:/oauth2-proxy.cfg
      - ${MNT_SERV_COMMON:?error}/assets:/assets:ro
    expose:
      - 4180
    restart: unless-stopped
    depends_on:
      - pocketid
    #add health
    networks:
      - default
    labels:
      komodo.skip: # Prevent Komodo from stopping with StopAllContainers

pocket-id/compose.yaml

services:
  pocketid:
    container_name: pocketid
    image: stonith404/pocket-id
    restart: unless-stopped
    networks:
      - caddy
    environment:
      - PUBLIC_APP_URL=https://${POCKETID_SUBDOMAIN:?error}.${DOMAIN:?error}
      - CADDY_PORT=3005
      - TRUST_PROXY=true
      - MAXMIND_LICENSE_KEY=""
      - PUID=1000
      - PGID=1000
    expose:
      - 3005
    depends_on:
      - caddy
    volumes:
      - $DOCKER_DATA/pocket-id/data:/app/backend/data
    # Optional healthcheck  
    healthcheck:
      test: "curl -f http://localhost:3005/health"
      interval: 1m30s
      timeout: 5s
      retries: 2
      start_period: 10s
    labels:
      komodo.skip: # Prevent Komodo from stopping with StopAllContainers
      caddy: ${CADDY_ROOT_LABEL:?error}
      caddy.2_@pocketid: host ${POCKETID_SUBDOMAIN:?error}.${DOMAIN:?error}
      caddy.2_handle: "@pocketid"
      caddy.2_handle.reverse_proxy: "{{ upstreams 3005 }}"
      caddy.2_handle.header: 'Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"'

komodo-core/compose.yaml

services:
  ferretdb:
    container_name: komodo-db
    image: ghcr.io/ferretdb/ferretdb
    labels:
      komodo.skip: # Prevent Komodo from stopping with StopAllContainers
    restart: unless-stopped
    logging:
      driver: ${COMPOSE_LOGGING_DRIVER:-local}
    networks:
      - default
    expose:
      - 27017
    volumes:
      - $DOCKER_DATA/komodo-core/sqllite-data:/state
    environment:
      - FERRETDB_HANDLER=sqlite

  komodo-core:
    container_name: komodo-core
    image: ghcr.io/mbecker20/komodo:${COMPOSE_KOMODO_IMAGE_TAG:-latest}
    restart: unless-stopped
    depends_on:
      - ferretdb
      - pocketid
    logging:
      driver: ${COMPOSE_LOGGING_DRIVER:-local}
    networks:
      - default
      - caddy
    ports:
      - 9120:9120
    environment:
      KOMODO_DATABASE_ADDRESS: ferretdb
    volumes:
      ## Core cache for repos for latest commit hash / contents
      - /etc/komodo/repo-cache:/repo-cache
      ## Store sync files on server
      - /etc/komodo/syncs:/syncs
      ## Optionally mount a custom core.config.toml
      - /etc/komodo/core.config.toml:/config/config.toml
    extra_hosts:
      - host.docker.internal:host-gateway
    labels:
      komodo.skip: # Prevent Komodo from stopping with StopAllContainers
      caddy: ${CADDY_ROOT_LABEL:?error}
      caddy.@komodo: host ${KOMODO_SUBDOMAIN:?error}.${DOMAIN:?error}
      caddy.handle: "@komodo"
      caddy.handle.reverse_proxy: "{{ upstreams 9120 }}"

Stack config:

## main stack

[[stack]]
name = "main-server"
[stack.config]
server = "main-server"
auto_pull = false
run_build = true
run_directory = "stacks"
file_paths = [
  "stack/main.compose.yaml",
  "common/docker-proxy/compose.yaml"
]
git_account = "joehand"
repo = "joehand/my-repo"
environment = """
# env stuff removed
"""

@bpbradley commented on GitHub (Jan 31, 2025):

> Komodo Core is actually 2 parts: Backend API, and client side frontend. There is no server side rendering. So, backend container actually doesn't dictate the responsiveness on the UI side. Maybe with that knowledge you can have better idea of what is happening there. For example, what happens on page refresh? If you check frontend (browser) console logs / network tab, you may see API requests that are failing with a better reason.
>
> In terms of this pull issue, can you try Systemd managed Periphery agent? See here: https://github.com/mbecker20/komodo/tree/main/scripts. Even if you just use it temporarily, like I mentioned this process is working for me and the only difference I can see is I don't run Periphery in container.

FYI, just bumping because I am having the same issue and I am running Periphery under systemd, so I don't think it is related to Docker vs. systemd Periphery.

Logs from Periphery, via `sudo -u komodo journalctl --user -u periphery`:

```
Jan 24 12:28:50 docker sh[4044210]: 2025-01-24T17:28:50.972364Z  INFO periphery: 🔒 Periphery SSL Enabled
Jan 24 12:28:50 docker sh[4044210]: 2025-01-24T17:28:50.972404Z  INFO periphery: Komodo Periphery starting on https://0.0.0.0:8120
Jan 24 21:18:41 docker sh[4044210]: 2025-01-25T02:18:41.768503Z  WARN periphery::router: request 27dca8f4-a01c-4cf3-b034-271622517f29 | type: ComposePull | error: Stopped after repo pull failure
Jan 24 21:20:30 docker sh[4044210]: 2025-01-25T02:20:30.427470Z  WARN periphery::router: request ae9fd8ca-ab1a-4ee8-a7a3-07f298ac2aec | type: ComposePull | error: Stopped after repo pull failure
Jan 24 21:21:18 docker sh[4044210]: 2025-01-25T02:21:18.582259Z  WARN periphery::router: request c485923a-febf-49fa-9c54-fc4a4074d9ed | type: ComposePull | error: Stopped after repo pull failure
Jan 24 21:26:18 docker sh[4044210]: 2025-01-25T02:26:18.511616Z  WARN periphery::router: request 9543c55a-18da-483d-888d-c0832479a362 | type: ComposePull | error: Stopped after repo pull failure
Jan 24 21:31:19 docker sh[4044210]: 2025-01-25T02:31:19.306866Z  WARN periphery::router: request 5ffc4dc3-5fb0-4136-8a6c-8d86741c8f99 | type: ComposePull | error: Stopped after repo pull failure
```
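The repeated ComposePull failures can be isolated from the journal with a simple filter. A minimal sketch, assuming the systemd unit is named `periphery` as above; the `sample` line just mirrors the log format shown:

```shell
#!/bin/sh
# Sample line copied from the journal output above. On a live host you would
# pipe the real journal instead, e.g.:
#   sudo -u komodo journalctl --user -u periphery | grep -c 'ComposePull | error'
sample='Jan 24 21:18:41 docker sh[4044210]: 2025-01-25T02:18:41.768503Z  WARN periphery::router: request 27dca8f4-a01c-4cf3-b034-271622517f29 | type: ComposePull | error: Stopped after repo pull failure'

# Count matching failure lines; the '|' is literal in a basic regex.
# Prints 1 for the single sample line.
printf '%s\n' "$sample" | grep -c 'ComposePull | error'
```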
@bpbradley commented on GitHub (Jan 31, 2025):

More data --

It does appear to be related to periphery and not core, which is probably obvious. It is very unclear what the specific variable is, but this is only happening on one of my periphery deployments (the one on my internal server, i.e. the same host as komodo core). The remote agents, which were installed in the exact same manner (I have an ansible role that does this, so it should be identical on every host) are not showing the issue.

Perhaps it is some weird networking thing when communicating over internal docker networks? Everyone in this thread appears to be using a reverse proxy and thus communicating through some kind of docker network to reach periphery. Not sure if that means anything, but it is a thought.

edit:

Another variable, possibly a trivial data point, but it may offer some insight: the pull does not fail for a stack that doesn't actually have to pull an image (i.e. a compose stack with a build context rather than an image). Not sure whether that is meaningful.
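For reference, the distinction drawn above is between a service that names a registry image and one that only builds locally. A minimal compose sketch with hypothetical service names and paths:

```yaml
services:
  pulls-an-image:
    # "Pull Images" has to contact a registry for this one; the failures
    # reported in this thread showed up in this case.
    image: ghcr.io/example/app:latest
  builds-locally:
    # Build context only, nothing to pull; this case did not fail.
    build: ./app
```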

@plasmus commented on GitHub (Feb 5, 2025):

Same issue here. More info that may be of help:

I'm running Periphery as a systemd service.
Everything works when pulling and deploying from a UI-defined stack.

When linking to a git repo and trying to pull, I noticed that the compose.yaml file was not moved over into the stacks location.
The only place I can see the compose.yaml is in the repo-cache.
Is the compose.yaml file supposed to be moved into the stack location before running the pull?

@mbecker20 commented on GitHub (Aug 26, 2025):

Is anyone still having this issue?

@bpbradley commented on GitHub (Aug 26, 2025):

> Is anyone still having this issue?

Resolved for me. Has been working correctly for the last several releases at least.

Reference: github-starred/komodo#170