[GH-ISSUE #7517] v26.4.0 self-hosted: budget switch wedges on "Downloading..." with 8 budget files #44759

Open
opened 2026-04-26 06:39:15 -05:00 by GiteaMirror · 2 comments

Originally created by @wake-byte on GitHub (Apr 15, 2026).
Original GitHub issue: https://github.com/actualbudget/actual/issues/7517

Summary

On a self-hosted v26.4.0 deployment with 8 budget files, switching between any
two budgets in the web UI consistently wedges on the "Downloading..." spinner.
The only recovery is clearing all browser site data. This affects both desktop
(Chrome on Windows) and mobile (Chrome on Android). It happens whether or not
the previous budget was closed cleanly first.

This appears to be a regression / edge case in the SharedWorker coordinator
introduced by #7172, possibly related to #7026 and partially addressed by
#7411 (merged but not yet released).

Environment

  • actual-server image: actualbudget/actual-server:edge pulled 2026-04-15
    (sha256:8082f2b…), package.json reports 26.4.0 because no release tag has
    been cut on master since v26.4.0
  • Reverse proxy: nginx in front of the sync server. The community workaround
    from #2793 is applied: /registerSW.js and /sw.js both return 404, all HTML
    carries Cache-Control: no-cache, no-store, must-revalidate, COOP/COEP set
    for SharedArrayBuffer. So service-worker / Workbox staleness is NOT the cause.
  • Number of budgets on this server: 8 (Fillmore, GTF, Hare Family,
    TSB, 412 Court, Billie, Hanner, JJ)
  • Browsers tested: Chrome 132 on Windows 11, Chrome 132 on Android 14,
    Safari 18 on iOS 18 — all three reproduce
  • Network: access via Tailscale (HTTPS), TLS terminated upstream of nginx
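
For completeness, the applied #2793 workaround looks roughly like this in the
nginx vhost. This is a sketch reconstructed from the description above, not the
exact community config; the upstream name is a placeholder.

```nginx
# Kill the service worker entry points so Workbox can never register.
location = /registerSW.js { return 404; }
location = /sw.js         { return 404; }

location / {
    proxy_pass http://actual-server:5006;  # placeholder upstream name
    # Force HTML revalidation and enable SharedArrayBuffer isolation.
    add_header Cache-Control "no-cache, no-store, must-revalidate" always;
    add_header Cross-Origin-Opener-Policy   "same-origin"  always;
    add_header Cross-Origin-Embedder-Policy "require-corp" always;
}
```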

Server-side state (ruling out CRDT message accumulation)

group-fillmore.sqlite        msgs=794
group-gtf.sqlite             msgs=1737
group-hare-family.sqlite     msgs=3048
group-tsb.sqlite             msgs=243
group-412-court.sqlite       msgs=319
group-billie.sqlite          msgs=632
group-hanner.sqlite          msgs=654
group-jj.sqlite              msgs=347

The largest single group is 3048 messages, well below the 222,764 from #6904
for which the "Reset Sync from desktop" workaround was prescribed. Reset Sync
will not help here, and prior maintainer advice in #6223 / #6574 indicates
that running it on a healthy CRDT can wedge clients further. This confirms the
failure mode is purely client-side.

Repro

  1. Fresh browser. Clear all site data for the Actual origin.
  2. Navigate to the Actual home, log in. The "Manage Files" screen lists all 8 budgets.
  3. Click any one budget, e.g. "GTF". It loads correctly — accounts, transactions,
    reports all functional.
  4. From inside that budget, click the budget name in the sidebar -> "Switch File"
    -> click any other budget, e.g. "Hare Family".
  5. Expected: the new budget loads.
  6. Actual: UI shows "Downloading..." indefinitely. Page hangs. Console shows
    no obvious error (will attach a full console dump).
  7. The only recovery is chrome://settings/content/all -> clear site data for the
    origin. Hard reload (Ctrl+Shift+R) does not recover. Closing the tab does not
    recover. New incognito window does not recover (clean state, but the next
    switch wedges again).

The wedge happens on the very FIRST switch after a fresh load. It is not a
gradual accumulation of switches.

What I've tried

  1. The nginx workaround from #2793. Already applied (see Environment). Helps
    eliminate stale service worker as a cause. Does not fix the switching wedge.

  2. Switching from :latest (v26.4.0) to :edge (today's master). Wanted
    to pick up #7411. The issue persists after the upgrade, even after a fresh
    clear-site-data + re-login + re-test cycle.

  3. Reset sync ID from desktop. Did NOT try, per the warnings in #6223 and
    #6574. Server-side message counts above show this is not the cause anyway.

  4. Selectively deleting only Actual's IndexedDB databases (not all site data).
    Used a bookmarklet that calls indexedDB.databases() then
    indexedDB.deleteDatabase() per match. After running, switching works for
    exactly one cycle (open A, switch to B). The next switch wedges again.

    This points strongly at IndexedDB state being the failure surface, not just
    stale JS or service worker registration.
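A sketch of that selective-delete approach. The name patterns are assumptions,
not Actual's confirmed database names — inspect `indexedDB.databases()` on your
own origin before trusting the filter.

```javascript
// Assumed name filter: matches databases whose names start with "actual"
// or "absurd-sql", or end in ".sqlite". Adjust to what your origin shows.
function looksLikeActualDb(name) {
  return /^(actual|absurd-sql)|\.sqlite$/i.test(name);
}

// Browser-only: delete every matching origin database, waiting for each
// delete to finish (or to be reported as blocked by another open tab).
async function deleteActualDatabases() {
  const dbs = await indexedDB.databases();
  for (const { name } of dbs) {
    if (name && looksLikeActualDb(name)) {
      await new Promise((resolve, reject) => {
        const req = indexedDB.deleteDatabase(name);
        req.onsuccess = resolve;
        req.onblocked = resolve; // another tab may still hold it open
        req.onerror = () => reject(req.error);
      });
    }
  }
}
```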

Hypothesis

The SharedWorker coordinator from #7172 manages a BudgetGroup per opened
budget, and closeBudget() in
packages/loot-core/src/server/budgetfiles/app.ts:257 does not call
indexedDB.deleteDatabase() for the closed budget's per-budget SQLite database.
With 8 budgets, the coordinator state and the residual IndexedDB databases
multiply, and the BudgetGroup election or message-routing logic enters a state
it can't recover from.

#7411 ("prevent messages being dropped after closing last budget") is in master
and likely fixes one such failure mode, but the symptom persists even after
that patch is applied, suggesting at least one more lifecycle bug in the
SharedWorker coordinator with 4+ budgets.
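The suspected leak reduces to a toy model. Everything here is hypothetical —
none of these names match the real loot-core coordinator; it only illustrates
how a close path that skips storage cleanup accumulates residue per budget.

```javascript
// Toy model: coordinator tracks open budget groups in a Map; close() cleans
// the in-memory entry but, unless told otherwise, never releases the backing
// store — the stand-in here for the per-budget IndexedDB database.
class ToyCoordinator {
  constructor() {
    this.groups = new Map();      // budgetId -> in-memory group state
    this.residualDbs = new Set(); // stand-in for per-budget IndexedDB dbs
  }
  open(id) {
    this.groups.set(id, { id });
    this.residualDbs.add(`db-${id}`); // opening creates the backing store
  }
  close(id, { releaseStorage = false } = {}) {
    this.groups.delete(id); // in-memory state IS cleaned up...
    if (releaseStorage) {
      this.residualDbs.delete(`db-${id}`); // ...but this step is suspected missing
    }
  }
}
```

Eight open/close cycles without the cleanup step leave eight residual stores,
which is exactly the axis this report varies.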

What might help isolate

I'm happy to run any of the following if a maintainer wants the data:

  • Full browser console dump from the moment of the wedge
  • indexedDB.databases() snapshot showing the per-origin database list and
    versions, before and after the wedge
  • localStorage contents for the origin (with any sensitive keys redacted)
  • The output of a BroadcastChannel listener attached during the switch
  • Network HAR file of the wedge
  • Server access logs filtered to the wedge timestamp
  • Full test against a clean v26.5.0 release whenever it ships
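
The BroadcastChannel tap from the list above could be as small as this. The
channel name is a placeholder — the real name(s) would have to come from
grepping the loot-core source for `new BroadcastChannel(`.

```javascript
// Format a captured event with its channel name for the console dump.
function formatEvent(channelName, data) {
  return `[${channelName}] ${JSON.stringify(data)}`;
}

// Passive tap: logs every message seen on the named channel during the
// budget switch. Returns the channel so the caller can close() it later.
function tapChannel(name, log = console.log) {
  const ch = new BroadcastChannel(name);
  ch.onmessage = (ev) => log(formatEvent(name, ev.data));
  return ch;
}
```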

The 8-budget repro is the unusual axis — I'd be surprised if existing CI tests
exceed 2 budgets — and I can spin up a minimal docker-compose with
representative dummy budgets if a maintainer wants a reproducible standalone.
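
That standalone could start from something like the following, assuming the
stock image layout (port 5006, data under /data); the budgets themselves would
still have to be created or imported by hand after first start.

```yaml
# Hypothetical minimal repro stack for the 8-budget scenario.
services:
  actual-server:
    image: actualbudget/actual-server:edge
    ports:
      - "5006:5006"
    volumes:
      - ./actual-data:/data
```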

Related issues / PRs

  • #7172 (multi-tab SharedWorker, v26.4.0) — introduces the surface area
  • #7026 (open) — closest matching open issue, "iOS/Safari loses connection,
    cannot restore without removing local website data"
  • #7411 (merged, unreleased) — fixes one SharedWorker lifecycle bug, did not
    fix this symptom for me
  • #6904 (closed) — "Downloading…" hang on mobile, root-caused to a different
    thing (server-side message accumulation), but symptom is identical
  • #6574 (closed) — "Unable to open file after upgrade", users finding clearing
    site data was the only fix
  • #2793 (closed) — service worker community workaround, applied here
  • #4549 (closed) — feature request for URL-scheme deep links to bypass internal
    router state, never implemented
  • #6267 (closed) — feature request for unique URLs per budget, never implemented

@Juulz commented on GitHub (Apr 16, 2026):

I can't import a backup either! I'll try edge.


@Juulz commented on GitHub (Apr 16, 2026):

Edge seems fine today. 😁

Reference: github-starred/actual#44759