[GH-ISSUE #97] Vikunja eating memory with large swap enabled on Raspberry Pi #5963

Closed
opened 2026-04-20 16:25:24 -05:00 by GiteaMirror · 9 comments
Owner

Originally created by @DaCHack on GitHub (Nov 27, 2023).
Original GitHub issue: https://github.com/go-vikunja/vikunja/issues/97

Description

Running Raspberry Pi OS on a RPi4 2GB

Vikunja runs fine under default settings, using less than 5% of memory.
After enabling a 2GB swap file for another app, I noticed that Vikunja suddenly took 40% of physical memory and eventually filled it entirely. The swap file itself was not yet in use because swappiness is set to 0. Still, the system became extremely slow, so I had to revert to the default settings and disable swap to make all services available to users again.
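For anyone diagnosing a similar report, a quick way to snapshot the host's swap configuration while reproducing (assuming a Linux host such as Raspberry Pi OS):

```shell
# Snapshot the host's swap configuration while reproducing (Linux host assumed)
cat /proc/sys/vm/swappiness   # 0 = kernel avoids swap until memory pressure is severe
swapon --show                 # active swap files/devices; prints nothing when swap is off
```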

Vikunja Frontend Version

0.21.0

Vikunja API Version

0.21.0

Browser and version

Edge

Can you reproduce the bug on the Vikunja demo site?

No

Screenshots

No response


@kolaente commented on GitHub (Nov 28, 2023):

Is this about the frontend or API? How are you running Vikunja? Did it use the memory as cache or as actual memory?
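One way to answer the cache-vs-actual-memory question is to read the kernel's own accounting (Linux only; the PID to inspect is whatever htop shows for the Vikunja process):

```shell
# Reclaimable page cache vs a process's resident set, both in kB (Linux)
awk '/^Cached:/ {print "page cache:", $2, "kB"}' /proc/meminfo
awk '/^VmRSS:/  {print "process RSS:", $2, "kB"}' /proc/$$/status   # $$ = this shell; substitute the Vikunja PID
```

A large "Cached" figure is normal and reclaimable on demand; only a steadily growing RSS for the Vikunja process would indicate it is actually holding memory.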


@DaCHack commented on GitHub (Nov 28, 2023):

I am not sure whether it is the frontend or the API: both are running as Docker containers on my RPi.
I think in htop it was /app/vikunja/vikunja with multiple threads eating the memory - I cannot reproduce it right now without breaking the system again.
I run both from the latest Docker images for arm64, together with linuxserver/mariadb:10.6.13 in the same stack.
It was using actual memory.
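Since the Vikunja API is a Go binary, one mitigation worth trying on a 2GB device is a soft heap cap via the Go runtime's `GOMEMLIMIT` environment variable (available since Go 1.19), combined with a hard container limit. A sketch, assuming a docker-compose setup like the one described; the service name, image tag, and values are illustrative:

```yaml
services:
  vikunja:
    image: vikunja/vikunja     # placeholder; use whatever image the stack already runs
    environment:
      GOMEMLIMIT: 256MiB       # soft cap: the Go GC collects more aggressively instead of growing the heap
    mem_limit: 512m            # hard cap enforced by the container runtime
```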


@kolaente commented on GitHub (Nov 28, 2023):

Sounds like a problem with the API. Were you doing anything at the time? Did it happen at a specific time? Anything in the logs when it happened?


@DaCHack commented on GitHub (Nov 28, 2023):

No, I was not doing anything yet. It occurred right after the reboot to enable swap and did not stop until I stopped the Vikunja containers.

The logs show a couple of timeouts from database connections, which I assume stem from the large latency while the system's memory was flooded (though I do not understand why it was slow even though no swap was used...).
The following also appeared as the first output after boot, which might have the same root cause, since Vikunja was eventually able to connect:

2023-11-27T21:57:04.02580685Z: CRITICAL	▶ migration/Migrate 002 Migration failed: dial tcp: lookup db on 127.0.0.11:53: no such host
2023-11-27T21:57:05.653843555Z: CRITICAL	▶ migration/Migrate 002 Migration failed: dial tcp 172.22.0.4:3306: connect: connection refused
2023-11-27T21:57:07.051648955Z: CRITICAL	▶ migration/Migrate 002 Migration failed: dial tcp 172.22.0.4:3306: connect: connection refused
2023-11-27T21:57:08.666376493Z: INFO	▶ migration/Migrate 052 Ran all migrations successfully.
2023-11-27T21:57:08.670324654Z: INFO	▶ cmd/func25 054 Vikunja version v0.21.0
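The "connection refused" lines at boot are the API starting before MariaDB accepts connections; as the log shows, Vikunja retries and recovers on its own. If the noise is unwanted, a compose healthcheck can gate startup. A sketch with placeholder service names; the exact ping command available depends on the image:

```yaml
services:
  db:
    image: linuxserver/mariadb:10.6.13
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "127.0.0.1"]
      interval: 5s
      retries: 10
  vikunja:
    depends_on:
      db:
        condition: service_healthy   # wait until the ping succeeds before starting the API
```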

@kolaente commented on GitHub (Nov 29, 2023):

Does it always happen or only sometimes?


@DaCHack commented on GitHub (Nov 29, 2023):

It happened on two consecutive boots.


@kolaente commented on GitHub (Dec 1, 2023):

Not sure how to reproduce this - I've never noticed this on any of the instances I run myself. Please report back if you can pin it down to something more clearly. I'll keep an eye on it as well.


@kolaente commented on GitHub (Sep 13, 2024):

Is this still reproducible?


@kolaente commented on GitHub (Jan 21, 2025):

Closing as inactive, please ping or open a new issue with relevant information if you still have this problem.


Reference: github-starred/vikunja#5963