[Bug]: Crash loop on Kubernetes in version 25.4.0 #2087

Closed
opened 2026-02-28 20:03:02 -06:00 by GiteaMirror · 6 comments
Owner

Originally created by @scottmckendry on GitHub (May 2, 2025).

Verified issue does not already exist?

  • I have searched and found no existing issue

What happened?

v25.3.1 is the most recent version where this doesn't occur. Upon upgrading to version 25.4.0, I get the following on startup:

/app/node_modules/convict/src/main.js:679
          throw new Error(output)
                ^

Error: port: ports must be within range 0 - 65535
    at Object.validate (/app/node_modules/convict/src/main.js:679:17)
    at file:///app/src/load-config.js:276:14
    at ModuleJob.run (node:internal/modules/esm/module_job:195:25)
    at async ModuleLoader.import (node:internal/modules/esm/loader:337:24)
    at async loadESM (node:internal/process/esm_loader:34:7)
    at async handleMainPromise (node:internal/modules/run_main:106:12)

Node.js v18.20.8

Which sends the pod into a crash loop.

I should note that I can run the same version on Docker just fine, so this is probably something weird with Kubernetes. It could be specific to my cluster.

How can we reproduce the issue?

Kubernetes version: v1.32.3

My deployment manifest:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: actual
  namespace: actual
spec:
  replicas: 1
  selector:
    matchLabels:
      app: actual
  template:
    metadata:
      labels:
        app: actual
    spec:
      containers:
        - name: actual
          image: ghcr.io/actualbudget/actual-server:25.4.0
          ports:
            - containerPort: 5006
          volumeMounts:
            - name: actual-data
              mountPath: /data
          livenessProbe:
            exec:
              command:
                - node
                - src/scripts/health-check.js
            initialDelaySeconds: 20
            periodSeconds: 60
            timeoutSeconds: 10
            failureThreshold: 3
      volumes:
        - name: actual-data
          persistentVolumeClaim:
            claimName: actual

Where are you hosting Actual?

Other

What browsers are you seeing the problem on?

No response

Operating System

Other

GiteaMirror added the bug label 2026-02-28 20:03:02 -06:00

@alecbakholdin commented on GitHub (May 2, 2025):

I think this might be related to #4537. The default used to be a plain 5006, but now it falls back to PORT if present.

  port: {
    doc: 'Port to run the server on.',
    format: 'port',
    default: process.env.PORT ? process.env.PORT : 5006,
    env: 'ACTUAL_PORT',
  },

@scottmckendry could you try two things for me?

  1. Set the PORT environment variable to 5006
  2. Set the ACTUAL_PORT environment variable to 5006
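To illustrate the precedence that snippet implies, here is a hypothetical mimic (plain JavaScript, not the real convict library) of how the value would be resolved:

```javascript
// Hypothetical mimic of the convict-style lookup in the snippet above:
// the `env` mapping (ACTUAL_PORT) overrides the default, and the default
// itself falls back to PORT when that variable is present.
function resolvePort(env) {
  // Schema equivalent: { default: env.PORT ?? 5006, env: 'ACTUAL_PORT' }
  const defaultValue = env.PORT !== undefined ? env.PORT : 5006;
  return env.ACTUAL_PORT !== undefined ? env.ACTUAL_PORT : defaultValue;
}

console.log(resolvePort({}));                                     // 5006
console.log(resolvePort({ PORT: '8080' }));                       // '8080'
console.log(resolvePort({ PORT: '8080', ACTUAL_PORT: '5006' }));  // '5006'
```

This is why setting ACTUAL_PORT explicitly can change the outcome even when PORT is also set.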

@scottmckendry commented on GitHub (May 2, 2025):

Confirming that the PORT env var made no difference. But ACTUAL_PORT resolved the problem.

The config below now works as expected.

    spec:
      containers:
        - name: actual
          image: ghcr.io/actualbudget/actual-server:25.4.0
          ports:
            - containerPort: 5006
          env:
            - name: ACTUAL_PORT
              value: "5006"

Appreciate the help @alecbakholdin!


@alecbakholdin commented on GitHub (May 3, 2025):

Glad your problem is solved @scottmckendry! But I think there is still a latent issue: default values should not cause a crash loop.


@scottmckendry commented on GitHub (May 3, 2025):

@alecbakholdin you're absolutely right. I'll keep this open for now then.


@ikaruswill commented on GitHub (May 7, 2025):

Hi folks, chiming in from my experience with Kubernetes and self-hosted apps. I've previously reported this issue in Uptime-Kuma: https://github.com/louislam/uptime-kuma/issues/741#issuecomment-945854426

Background

This happens pretty often due to a feature of Kubernetes known as Service Links.

This feature is enabled by default and injects the hostnames and ports of all Services in the same namespace into the Pod's environment variables.

Problem

The issue in Actual's case is an environment variable clash between the app and Kubernetes. Specifically, if you name Actual's Service actual, Kubernetes injects the following:

ACTUAL_PORT=tcp://10.43.19.134:5006

Which follows the format: <Service Name>_PORT=<Protocol>://<Service IP>:<Service Port>

This breaks the expectation of Actual, with the error message:

ports must be within range 0 - 65535

since Actual expects an integer within that range.
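The clash can be sketched in plain JavaScript (hypothetical helper names, not Actual's or Kubernetes' actual code):

```javascript
// Kubernetes service links build an env var name from the Service name:
// uppercased, with dashes replaced by underscores, plus a _PORT suffix.
function serviceLinkVar(serviceName) {
  return serviceName.toUpperCase().replace(/-/g, '_') + '_PORT';
}

// A 0-65535 range check in the spirit of the error above.
function isValidPort(value) {
  const n = Number(value);
  return Number.isInteger(n) && n >= 0 && n <= 65535;
}

console.log(serviceLinkVar('actual'));       // 'ACTUAL_PORT' -- same name Actual reads
console.log(serviceLinkVar('actual-svc'));   // 'ACTUAL_SVC_PORT' -- no clash

console.log(isValidPort('5006'));                     // true
console.log(isValidPort('tcp://10.43.19.134:5006'));  // false -> the startup crash
```

So a Service literally named actual produces exactly the variable Actual reads, but with a URL-shaped value the port validator rejects.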

Breaking change

The change was introduced in https://github.com/actualbudget/actual/pull/4537, which explains that it was made to fix ACTUAL_PORT being ignored (with PORT used instead) after config schema changes.

As a result, the new behaviour in 25.4.0 is that ACTUAL_PORT takes precedence and is no longer ignored.

Potential Solutions

  1. Disable the Service Links feature for Actual, under the Pod Spec.
    template:
      spec:
        enableServiceLinks: false
    
    This will not break anything in your application; those variables are not used. Most modern applications rely on DNS-based service discovery anyway.
  2. Change the service name from actual to something else, e.g. actual-svc
  3. Actual's codebase switches from using ACTUAL_PORT to ACTUAL_SERVER_PORT, which would ensure Kubernetes compatibility in perpetuity
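For option 2, a minimal Service manifest sketch (names assumed from the Deployment earlier in this thread):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: actual-svc      # renamed from "actual"; injected vars become ACTUAL_SVC_*
  namespace: actual
spec:
  selector:
    app: actual
  ports:
    - port: 5006
      targetPort: 5006
```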

Recommendation

I personally prefer Option 1 (disabling service links): the injected variables create unnecessary clutter in the Pod's environment and have no impact on DNS-based service discovery.


@rothman857 commented on GitHub (May 8, 2025):

@ikaruswill Excellent write up. I just changed my service name from actual to actual-svc and that worked, leaving service links intact.
