Gitea depletes RAM + CPU #5823

Closed
opened 2025-11-02 06:36:53 -06:00 by GiteaMirror · 18 comments

Originally created by @JoeyWrk on GitHub (Aug 3, 2020).

  • Gitea version: 1.12.2

  • Git version: 2.28-rc1

  • Operating system: Ubuntu 20.04.1 LTS (on a Hyper-V virtual machine)

  • Database:

    • [x] MySQL (MariaDB 10.3.22)

  • Can you reproduce the bug at https://try.gitea.io:

    • [x] Not relevant

Description

Gitea uses up all system resources, even though all system requirements are met.

I had already read this slightly outdated issue -> https://github.com/go-gitea/gitea/issues/4450 , but it did not help me solve my problem.

I checked and changed my Gitea config (app.ini) against the config cheat sheet -> https://docs.gitea.io/en-us/config-cheat-sheet/ , hoping this would solve my problem. The changes are applied in my config, but all resources are still being used up.
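
For reference, these are the kinds of app.ini knobs from the cheat sheet that commonly affect resource usage; the values below are illustrative assumptions, not recommendations:

```
[database]
; Limit the connection pool so the database cannot be overwhelmed
MAX_OPEN_CONNS = 10
MAX_IDLE_CONNS = 5
CONN_MAX_LIFETIME = 3m

[git.timeout]
; Cap long-running git operations (values are in seconds)
DEFAULT = 360
MIGRATE = 600
GC = 60
```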

Apart from this, the issue above is about two years old and several Gitea versions have been released since then, so I would have assumed that this RAM/CPU problem would be fixed by now, since I am not the only one experiencing it.

I am grateful for any tips and hints.

If you need more detailed/further information, please let me know.

Thank you all in advance!

...

Screenshots

![Gitea](https://user-images.githubusercontent.com/69146876/89182114-59b37d00-d595-11ea-8b9c-badf19068a9f.PNG)

@ashimokawa commented on GitHub (Aug 5, 2020):

@JoeyWrk

This happens when migrating to 1.12 from a previous version; the language statistics generation causes it.
We faced the same problem, see here for the discussion:

https://codeberg.org/Codeberg/Community/issues/198


@Ahaus314 commented on GitHub (Aug 6, 2020):

We have had the same issue since v1.11.5. I thought upgrading to 1.12.1 (at that time) would fix it. Our server is monitored by Prometheus/Grafana, and we get a warning stating that CPU usage has been higher than 80% for more than five minutes… I receive this alert all day long. I tried to monitor the services and checked the logs, but I can't find what causes this. We are on Windows Server 2016.

If the cause @ashimokawa describes is right, I hope https://github.com/go-gitea/gitea/pull/11975 can fix this. I'm tired of planning service restarts after business hours.


@zeripath commented on GitHub (Aug 6, 2020):

You need to differentiate between load that occurs during migration and base load.


@stale[bot] commented on GitHub (Oct 12, 2020):

This issue has been automatically marked as stale because it has not had recent activity. I am here to help clear issues left open even if solved or waiting for more insight. This issue will be closed if no further activity occurs during the next 2 weeks. If the issue is still valid just add a comment to keep it alive. Thank you for your contributions.


@6543 commented on GitHub (Oct 12, 2020):

@JoeyWrk how many users and how many repos does this instance have?


@maxsnts commented on GitHub (Dec 9, 2020):

Hi.
Going from 1.12.2 to 1.13.0, I faced the same problem.
The server was unusable, with all RAM (and swap) used.
I downgraded, and the problem went away.
It's not a huge instance; it had been working fine with 4 GB. I increased it to 8 GB, but that did not help.

Thanks


@zeripath commented on GitHub (Dec 10, 2020):

> Hi.
> Going from 1.12.2 to 1.13.0 i faced the same problem.
> Server unusable with all RAM (and swap) used.
> Downgraded, problem went away.
> Its not a huge instance, been working fine with 4GB, increased to 8GB but did not help.
>
> Thanks

At the risk of repeating myself...

When did this happen?

Had Gitea finished its migration to 1.13, or was it already up and running?

What caused the memory to spike? Was it a push? Are you pushing LFS or just normal files? How big?

Etc.

You're not giving us any information to help you.


@6543 commented on GitHub (Dec 10, 2020):

> Its not a huge instance

1 repo, 10 repos, 1000?


@maxsnts commented on GitHub (Dec 11, 2020):

47 repos. No one was doing any operation.
I'm running Gitea as a daemon; maybe that is why I'm not seeing the upgrade process?
Next time I will run it in the shell first so that I can see the output.


@maxsnts commented on GitHub (Dec 11, 2020):

It's something to do with many pulls at the same time.

We have a repo that gets checked periodically as part of a continuous delivery system.
About 150 VMs pull the repo to see if there is anything new.
Sometimes many of them happen to run at the same time.
In 1.12.5, these pulls go by with no problem at all.
In 1.13.0, Gitea immediately exhausts all resources (RAM+CPU+swap) and never recovers by itself.

Tested multiple times: the concurrent pulls are what kills Gitea.


@zeripath commented on GitHub (Dec 12, 2020):

OK, so this is very likely related to go-git reading objects into memory - you might want to try the no-go-git PR (#13673).


@maxsnts commented on GitHub (Dec 12, 2020):

I'm sorry, I'm a bit lost.
Thank you for the help, but when you can, could you point me to what the "no-go-git PR" is?
It may be obvious, but I'm not getting there.


@zeripath commented on GitHub (Dec 12, 2020):

#13673


@maxsnts commented on GitHub (Dec 12, 2020):

Got it.
I will try to get a build going.
Thanks.


@zeripath commented on GitHub (Dec 12, 2020):

One other thing: it might be helpful to set in your app.ini:

```
[server]
...
ENABLE_PPROF = true
```

Then use:

```
go tool pprof http://localhost:6060/debug/pprof/heap
```

and either type `top` to get the top memory users and/or `web` to generate an SVG of memory use.

Then at least we'll be able to trace where the memory is being used.


@dragospe commented on GitHub (Dec 16, 2020):

Just wanted to chime in and say that I was having a similar issue that was fixed by building from #13673.
Thanks zeripath!


@maxsnts commented on GitHub (Dec 17, 2020):

Great to hear, thank you.


@techknowlogick commented on GitHub (Dec 18, 2020):

Closing as linked PR is now merged.

Reference: github-starred/gitea#5823