[Bug]: Recover data from server when client is missing data and throws timestamp counter overflow #2067

Closed
opened 2026-02-28 20:02:29 -06:00 by GiteaMirror · 0 comments
Owner

Originally created by @alclary on GitHub (Apr 21, 2025).

Verified issue does not already exist?

  • I have searched and found no existing issue

What happened?

FYI only – no action needed. Posting for future reference.

Summary:
After testing and debugging a custom data importer I restarted a local browser session to find local budget data gone. Since I had subsequently conducted a lot of manual categorizing on top of this, I wanted to recover the data. On attempting to re-sync it via the Actual web interface (post-reboot), the client failed to sync and threw a timestamp counter overflow error.

Context:

  • Repeated clearing and importing via API during development led to huge messages tables in sqlite db (i.e. messages_crdt in backup zip or messages_binary in group file on server). Note: This likely could have been avoided by utilizing runImport in my custom importer as explained in the writing data importers docs.
  • Client-side data was lost. Only data on the server remained.
  • On reboot, with no local data, the client attempted a full sync from the server but was unsuccessful, receiving a huge number of 'messages' from the Actual sync server (note: these live in the group*.sqlite file under user-files on the server, or in the messages_crdt table of the db in an Actual backup .zip)

Issue Details:

  • Client sync failed with thrown OverflowError. Message: timestamp counter overflow.
  • Console log before failure: Got messages from server 178059.
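The failure mode can be illustrated with a minimal sketch. This is illustrative logic only, not Actual's real CRDT code (the real check lives in packages/crdt/src/crdt/timestamp.ts): a clock counter that throws once it exceeds a 16-bit maximum, as the client did while applying ~178k server messages.

```typescript
// Illustrative sketch (not Actual's exact implementation): a timestamp
// counter capped at a 16-bit maximum, mirroring MAX_COUNTER in timestamp.ts.
const MAX_COUNTER = 0xffff; // 65535

class OverflowError extends Error {
  constructor() {
    super('timestamp counter overflow');
  }
}

function nextCounter(counter: number): number {
  const next = counter + 1;
  if (next > MAX_COUNTER) {
    throw new OverflowError();
  }
  return next;
}

// Applying 178059 messages, as in the console log above, overflows:
let c = 0;
let failedAt = -1;
try {
  for (let i = 0; i < 178059; i++) {
    c = nextCounter(c);
  }
} catch (e) {
  failedAt = c; // counter stopped at 65535 before the throw
}
console.log(failedAt);
```

The counter never reaches the full message count; it stops at the 65535 cap and the sync aborts with the error reported above.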

Solution:
Use the 'Reset sync' feature under 'Advanced Settings' in the Actual web interface; it clears the oversized message tables that cause the OverflowError.


Additional Considerations: If you somehow were unable to access the 'Reset sync' function in the web interface, there is a more laborious way to recover the data from the server, bypassing the OverflowError above:

  • The OverflowError occurs because there is a limit to the number of messages permitted during sync. This limit is set by the constant MAX_COUNTER in /actual/packages/crdt/src/crdt/timestamp.ts, with an int value of 65535.
  • You can clone the actual repo, increase the MAX_COUNTER limit in /actual/packages/crdt/src/crdt/timestamp.ts by adding another digit to the supplied hex number, build from source, copy the server-files and user-files directories from your original server to your local temporary development server, and finally run the development server.
  • This would permit you to load the budget with sync messages exceeding the normal constant, access the 'Reset sync' function in the web interface settings, and then create a backup of the recovered budget (now with the message tables cleared by 'Reset sync').
  • Finally the backup .zip files can be imported into your permanent server.
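Numerically, the widening step above is just one extra hex digit. The exact declaration in timestamp.ts may be written differently, so treat this as a sketch of the magnitudes involved:

```typescript
// Magnitudes involved in the workaround (values only; the real constant
// declaration in packages/crdt/src/crdt/timestamp.ts may look different).
const ORIGINAL_MAX = 0xffff;   // 65535: the shipped MAX_COUNTER limit
const WIDENED_MAX = 0xfffff;   // one extra hex digit: 1048575
const serverMessages = 178059; // count reported in the console log above

console.log(serverMessages > ORIGINAL_MAX);  // sync overflows the shipped limit
console.log(serverMessages <= WIDENED_MAX);  // the widened limit has headroom
```

With the widened limit, the ~178k messages fit with room to spare, which is why the patched development server can complete the sync.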

How can we reproduce the issue?

Perform enough transactions via the API to push the number of rows in the messages_binary table past 65535.
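As a rough back-of-the-envelope model of how repeated clear-and-reimport cycles get there (the per-field message count and batch sizes below are assumptions for illustration, not measured figures):

```typescript
// Back-of-the-envelope: why repeated API imports blow past the limit.
// Assumption (not measured): one CRDT message per field set on a transaction.
const MAX_COUNTER = 65535;          // limit from timestamp.ts
const fieldsPerTransaction = 6;     // hypothetical: date, amount, payee, notes, category, cleared
const transactionsPerImport = 2000; // hypothetical importer batch size
const importCycles = 15;            // repeated clear-and-reimport during debugging

const totalMessages = fieldsPerTransaction * transactionsPerImport * importCycles;
console.log(totalMessages, totalMessages > MAX_COUNTER); // 180000 true
```

Even modest batches, re-imported a dozen or so times during development, land in the same ~180k range as the message count reported in this issue.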

Where are you hosting Actual?

Self-hosted instance, version v25.4.0.

What browsers are you seeing the problem on?

LibreWolf - Potentially responsible for wiping local data after reboot due to strict privacy settings

Operating System

NA

GiteaMirror added the bug label 2026-02-28 20:02:29 -06:00

Reference: github-starred/actual#2067