Multiple Deployment Failures #357

Closed
opened 2025-10-31 15:09:44 -05:00 by GiteaMirror · 3 comments
Originally created by @MAndersen990 on GitHub (May 12, 2025).

I went to upgrade from version 1.17.2 -> 1.17.5 to use the epic Terminals feature.

Steps:

  1. Go to the komodo-core-1 container
  2. Click "New Deployment" in the top right of the page
  3. Pull the new image in the deployment
  4. Execute "Redeploy"

I receive the following: `Komodo shutdown during execution. If this is a build, the builder may not have been terminated.`

Upon running "Redeploy" the container stops and does not restart. Is this expected behavior? Interestingly, the changes appear to take effect when I restart the container manually, even though the "Deploy" failed. Nothing in the stdout logs appears to be of any use for troubleshooting this, and the documentation doesn't really mention anything about "Deployments" as far as I could find. So perhaps I'm just misunderstanding how "Deployments" work? Does the list disappear upon successful deployment as well?

I see similar behavior when attempting the same steps with the core server's periphery container. The container stops and does not restart. I have a feeling this may be related to my configuration, but I don't see where my error would be. I can connect to the server at the address mentioned below in the GUI and can see all resources and info, but I still get a TCP connect error when doing a redeployment.

```
ERROR: failed at request to periphery

TRACE:
	1: error sending request for url (http://192.168.68.54:8120/)
	2: client error (Connect)
	3: tcp connect error: Connection refused (os error 111)
	4: Connection refused (os error 111)
```
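As a side note (not part of the original report): `Connection refused (os error 111)` means nothing accepted the TCP connection on port 8120 at all, before TLS or auth ever enters the picture. A minimal stdlib sketch for confirming that from another machine, using the host/port from the trace above:

```python
import socket

def tcp_check(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if something accepts a TCP connection on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # "Connection refused (os error 111)" lands here: nothing is
        # listening on the port, or a firewall actively rejects it.
        return False

# Address/port taken from the trace above.
print(tcp_check("192.168.68.54", 8120))
```

If this returns False from the Core host, the problem is at the network/listener level (container not running, wrong port mapping, firewall) rather than in Komodo itself.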

My compose.yaml:

```
services:
  postgres:
    image: postgres:17
    labels:
      komodo.skip: # Prevent Komodo from stopping with StopAllContainers
    restart: unless-stopped
    logging:
      driver: ${COMPOSE_LOGGING_DRIVER:-local}
    # ports:
    #   - 5432:5432
    volumes:
      - pg-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=${KOMODO_DB_USERNAME}
      - POSTGRES_PASSWORD=${KOMODO_DB_PASSWORD}
      - POSTGRES_DB=${KOMODO_DATABASE_DB_NAME:-komodo}

  ferretdb:
    image: ghcr.io/ferretdb/ferretdb:1
    labels:
      komodo.skip: # Prevent Komodo from stopping with StopAllContainers
    restart: unless-stopped
    depends_on:
      - postgres
    logging:
      driver: ${COMPOSE_LOGGING_DRIVER:-local}
    # ports:
    #   - 27017:27017
    environment:
      - FERRETDB_POSTGRESQL_URL=postgres://postgres:5432/${KOMODO_DATABASE_DB_NAME:-komodo}

  core:
    image: ghcr.io/moghtech/komodo-core:${COMPOSE_KOMODO_IMAGE_TAG:-latest}
    labels:
      komodo.skip: # Prevent Komodo from stopping with StopAllContainers
    restart: unless-stopped
    depends_on:
      - ferretdb
    logging:
      driver: ${COMPOSE_LOGGING_DRIVER:-local}
    ports:
      - 9120:9120
    env_file: ./compose.env
    environment:
      KOMODO_DATABASE_URI: mongodb://${KOMODO_DB_USERNAME}:${KOMODO_DB_PASSWORD}@ferretdb:27017/${KOMODO_DATABASE_DB_NAME:-komodo}?authMechanism=PLAIN
    volumes:
      ## Core cache for repos for latest commit hash / contents
      - repo-cache:/repo-cache
      ## Store sync files on server
      # - /path/to/syncs:/syncs
      ## Optionally mount a custom core.config.toml
      # - /path/to/core.config.toml:/config/config.toml
    ## Allows for systemd Periphery connection at
    ## "http://host.docker.internal:8120"
    # extra_hosts:
    #   - host.docker.internal:host-gateway

  ## Deploy Periphery container using this block,
  ## or deploy the Periphery binary with systemd using
  ## https://github.com/moghtech/komodo/tree/main/scripts
  periphery:
    image: ghcr.io/moghtech/komodo-periphery:${COMPOSE_KOMODO_IMAGE_TAG:-latest}
    labels:
      komodo.skip: # Prevent Komodo from stopping with StopAllContainers
    restart: unless-stopped
    logging:
      driver: ${COMPOSE_LOGGING_DRIVER:-local}
    env_file: ./compose.env
    volumes:
      ## Mount external docker socket
      - /var/run/docker.sock:/var/run/docker.sock
      ## Allow Periphery to see processes outside of container
      - /proc:/proc
      ## Specify the Periphery agent root directory.
      ## Must be the same inside and outside the container,
      ## or docker will get confused. See https://github.com/moghtech/komodo/discussions/180.
      ## Default: /etc/komodo.
      - ${PERIPHERY_ROOT_DIRECTORY:-/etc/komodo}:${PERIPHERY_ROOT_DIRECTORY:-/etc/komodo}

volumes:
  # Postgres
  pg-data:
  # Core
  repo-cache:
```

My .env file (with things redacted):

```
COMPOSE_KOMODO_IMAGE_TAG=latest
COMPOSE_LOGGING_DRIVER=local
KOMODO_DB_USERNAME=redacted
KOMODO_DB_PASSWORD=redacted
KOMODO_PASSKEY=redacted
TZ=US/Eastern
KOMODO_HOST=https://192.168.68.54
KOMODO_TITLE=Komodo
KOMODO_FIRST_SERVER=https://192.168.68.54:8120
KOMODO_DISABLE_CONFIRM_DIALOG=false
KOMODO_MONITORING_INTERVAL="15-sec"
KOMODO_RESOURCE_POLL_INTERVAL="5-min"
KOMODO_WEBHOOK_SECRET=redacted
KOMODO_JWT_SECRET=redacted
KOMODO_LOCAL_AUTH=true
KOMODO_DISABLE_USER_REGISTRATION=false
KOMODO_ENABLE_NEW_USERS=false
KOMODO_DISABLE_NON_ADMIN_CREATE=false
KOMODO_TRANSPARENT_MODE=false
KOMODO_JWT_TTL="1-day"
KOMODO_OIDC_ENABLED=false
KOMODO_GITHUB_OAUTH_ENABLED=false
KOMODO_GOOGLE_OAUTH_ENABLED=false
KOMODO_AWS_ACCESS_KEY_ID= # Alt: KOMODO_AWS_ACCESS_KEY_ID_FILE
KOMODO_AWS_SECRET_ACCESS_KEY= # Alt: KOMODO_AWS_SECRET_ACCESS_KEY_FILE
PERIPHERY_ROOT_DIRECTORY=/etc/komodo
PERIPHERY_PASSKEYS=${KOMODO_PASSKEY}
PERIPHERY_DISABLE_TERMINALS=false
PERIPHERY_SSL_ENABLED=true
PERIPHERY_INCLUDE_DISK_MOUNTS=/etc/hostname
```
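An observation of mine, not from the original thread: the DB credentials in this file feed both the Postgres init (`POSTGRES_USER`/`POSTGRES_PASSWORD`) and the core's `KOMODO_DATABASE_URI`, so a mismatch between what Postgres was initialized with and what the URI carries breaks core startup, which is exactly the root cause found later in the thread. A stdlib-only sketch for inspecting what an interpolated URI actually contains (the credential values here are hypothetical placeholders):

```python
from urllib.parse import urlsplit, parse_qs

# Hypothetical stand-in for the interpolated KOMODO_DATABASE_URI.
uri = "mongodb://komodo_user:s3cret@ferretdb:27017/komodo?authMechanism=PLAIN"

parts = urlsplit(uri)
print(parts.username)                 # must match POSTGRES_USER
print(parts.password)                 # must match POSTGRES_PASSWORD
print(parts.hostname, parts.port)     # ferretdb 27017
print(parts.path.lstrip("/"))         # database name
print(parse_qs(parts.query)["authMechanism"][0])
```

If the username/password printed here differ from what the Postgres volume was originally initialized with, Postgres will reject the login no matter what the current `.env` says, because the credentials are baked into the data volume at first init.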

@lukyjay commented on GitHub (May 13, 2025):

Your server is using http but the changelogs say ssl is now enabled by default, so change those hosts to https instead and it may work. This fixed it for me.


@MAndersen990 commented on GitHub (May 13, 2025):

> Your server is using http but the changelogs say ssl is now enabled by default, so change those hosts to https instead and it may work. This fixed it for me.

Well, this didn't fix the problem, but it changed the error for the periphery deployment. Now I get this:

```
ERROR: failed at request to periphery

TRACE:
	1: error sending request for url (https://192.168.68.54:8120/)
	2: client error (SendRequest)
	3: connection error
	4: peer closed connection without sending TLS close_notify: https://docs.rs/rustls/latest/rustls/manual/_03_howto/index.html#unexpected-eof
```
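For context (my addition, not from the thread): unlike the earlier "connection refused", this error means the TCP connection succeeded but the peer dropped it mid-TLS, which typically indicates the two sides disagree about TLS on that port. A rough stdlib probe for that, as a sketch; verification is disabled on the assumption that Periphery serves a self-signed certificate by default:

```python
import socket
import ssl

def tls_handshake_ok(host: str, port: int, timeout: float = 3.0) -> bool:
    """Try a TLS handshake; False means the peer closed the connection
    mid-handshake or is not speaking TLS on that port."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # assumption: self-signed cert
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as raw:
            with ctx.wrap_socket(raw, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        # Covers both "not TLS" and "peer closed without close_notify".
        return False
```

Running `tls_handshake_ok("192.168.68.54", 8120)` from the Core host distinguishes a TLS mismatch (connects but returns False) from the Periphery process crashing on the request.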

The core deployment issue remains the same. I updated the above .env variables to reflect the changes made. Mainly these:

```
PERIPHERY_SSL_ENABLED=true (from false)
KOMODO_FIRST_SERVER=https://192.168.68.54:8120 (replaced http with https)
KOMODO_HOST=https://192.168.68.54 (replaced http with https)
```

Something else I'll add here for clarity. Komodo is up and running and I can access it from my proxy via https. The issues only occur when using the Deployment tab in the GUI. That's when I get the aforementioned errors.


@MAndersen990 commented on GitHub (May 14, 2025):

I think I fixed it. I moved the periphery to its own stack, as per the documentation's recommendation. I then recreated all the containers but specified a different path for the PostgreSQL data.

Turns out ferretdb was getting the incorrect username/password, and that's what was causing the core container to not come back up. I must have specified a certain username/password upon initial setup and changed it to something else during troubleshooting.

I knew it would be something dumb. If anyone else stumbles on this, make sure you delete or drop your PostgreSQL database and have it recreated when re-setting up all the containers, or all the databases will yell at you and the core will be stuck in a restart loop.

The red X still appears and says the build fails, but I can confirm it does in fact make the changes and redeploy appropriately.

Reference: github-starred/komodo#357