[PR #3133] [MERGED] Optimize CipherSyncData for very large vaults #6867

Closed
opened 2026-03-07 21:07:07 -06:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/dani-garcia/vaultwarden/pull/3133
Author: @BlackDex
Created: 1/11/2023
Status: Merged
Merged: 1/12/2023
Merged by: @dani-garcia

Base: main ← Head: optimize-ciphersync


📝 Commits (1)

  • 3181e4e Optimize CipherSyncData for very large vaults

📊 Changes

4 files changed (+35 additions, -26 deletions)

View changed files

📝 src/api/core/ciphers.rs (+24 -19)
📝 src/api/core/emergency_access.rs (+2 -3)
📝 src/api/core/organizations.rs (+2 -2)
📝 src/db/models/attachment.rs (+7 -2)

📄 Description

As mentioned in #3111, a very large vault causes some issues, mainly because of a SQLite bound-parameter limit, but it could also cause issues on MariaDB/MySQL or PostgreSQL. It also uses a lot of memory and triggers many memory allocations.

This PR solves this by removing the need to collect all the cipher_uuids just to gather the correct attachments.

It now uses the user_uuid and the org_uuids to get all attachments linked to either, whether the user has access to them or not. This isn't an issue, since the matching is done per cipher, and attachment data is only returned when there is a matching cipher the user has access to.
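The fetch-then-filter idea above can be sketched in plain Rust. This is a hypothetical simplification, not Vaultwarden's actual Diesel code: `Attachment` and `group_attachments` are stand-ins, and the "query" is just a `Vec` of everything linked to the user/orgs.

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical minimal stand-in for Vaultwarden's attachment model.
struct Attachment {
    cipher_uuid: String,
}

// Instead of a huge `WHERE cipher_uuid IN (...)` list (which can exceed
// SQLite's bound-parameter limit), fetch every attachment linked to the
// user/orgs and group it per cipher. Attachments whose cipher the user
// cannot access are dropped before anything is returned.
fn group_attachments(
    attachments: Vec<Attachment>,
    accessible_ciphers: &HashSet<String>,
) -> HashMap<String, Vec<Attachment>> {
    let mut per_cipher: HashMap<String, Vec<Attachment>> = HashMap::new();
    for att in attachments {
        per_cipher.entry(att.cipher_uuid.clone()).or_default().push(att);
    }
    // Only ciphers the user can access survive the grouping.
    per_cipher.retain(|uuid, _| accessible_ciphers.contains(uuid));
    per_cipher
}
```

The over-fetch is safe precisely because this per-cipher matching step gates what is returned to the client.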

I also modified some code to use `::with_capacity(n)` where possible. This prevents re-allocations when the `Vec` grows, which happens a lot when there are many ciphers.
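As a minimal sketch (not the PR's actual code), pre-sizing with `Vec::with_capacity` avoids the grow-and-copy reallocations that a plain `Vec::new()` incurs as it fills up; the `collect_uuids` helper and its contents are illustrative only.

```rust
// Hypothetical helper: build a Vec whose final size is known up front,
// so a single allocation covers all pushes.
fn collect_uuids(cipher_count: usize) -> Vec<String> {
    let mut uuids: Vec<String> = Vec::with_capacity(cipher_count);
    for i in 0..cipher_count {
        uuids.push(format!("uuid-{i}"));
    }
    uuids
}
```

Since the number of ciphers is already known when the sync data is assembled, the capacity hint is essentially free.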

According to my tests measuring sync time, this lowered the duration a bit more.

Fixes #3111


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-03-07 21:07:07 -06:00

Reference: github-starred/vaultwarden#6867