"data/sessions" folder grew and used 100% of inodes #3044

Closed
opened 2025-11-02 04:58:42 -06:00 by GiteaMirror · 19 comments
Owner

Originally created by @Perflyst on GitHub (Mar 13, 2019).

  • Gitea version (or commit ref): 1.7.3
  • Git version: 2.11.0
  • Operating system: Debian 9
  • Database (use [x]):
    • [x] PostgreSQL
    • [ ] MySQL
    • [ ] MSSQL
    • [ ] SQLite
  • Can you reproduce the bug at https://try.gitea.io:
    • [ ] Yes (provide example URL)
    • [ ] No
    • [x] Not relevant
  • Log gist:

Description

Today I got a warning from my monitoring that the Gitea machine has no inodes left. Gitea crashed and nothing could be written to the disk anymore. I removed data/sessions/*, which used an incredible number of inodes.

Is there any workaround? Is this a bug? Why does it grow so much?

GiteaMirror added the type/bug label 2025-11-02 04:58:42 -06:00

@lunny commented on GitHub (Mar 14, 2019):

If too many users are visiting your Gitea site, it will create many session files, which may use many inodes. You can change your session provider to memory, Redis, MySQL, etc. For now, you can delete all the session files so that Gitea works again, but then everyone will have to log in to Gitea again.
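Following the [session] config style shown elsewhere in this thread, switching to the Redis provider would look something like this (a sketch; the connection values are illustrative and not taken from this thread):

```ini
; Example app.ini fragment: Redis-backed sessions instead of file-backed.
; Adjust addr/db/pool values for your own setup.
[session]
PROVIDER        = redis
PROVIDER_CONFIG = network=tcp,addr=127.0.0.1:6379,db=0,pool_size=100,idle_timeout=180
```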


@Perflyst commented on GitHub (Mar 14, 2019):

If I use memory, then probably all users will be logged out when I restart Gitea.

It just filled up to 80% inode usage again. Why do you not delete the session files automatically when a user just visits the site without logging in?


@FloThinksPi commented on GitHub (Mar 15, 2019):

@Perflyst @lunny We also see this problem. We have approx. 200 users, but Gitea seems to create a session file for each request! The save-sessions-to-file function is obviously broken in 1.7.4. We see around 100 to 1000 files per second being created. Our system crashed at 20 GB and 5 million files in the sessions folder. This is because Docker tries to relabel the SELinux labels of all the files (the :Z option in docker-compose), which causes the Docker daemon to hang/crash.

Our Configuration was:

[session]
PROVIDER_CONFIG = /data/gitea/sessions
PROVIDER        = file

Because we thought maybe the defaults were wrong, we tried:

[session]
PROVIDER_CONFIG = /data/gitea/sessions
PROVIDER        = file
GC_INTERVAL_TIME = 3600
SESSION_LIFE_TIME = 86400
COOKIE_NAME = i_like_gitea
COOKIE_SECURE = true
ENABLE_SET_COOKIE = true

But no change whatsoever.

@Perflyst We solved it by changing the method from file to memory, which is only suitable on a single-instance server. If you have multiple Gitea servers in a load-balancing configuration, you would need to set up Redis to work around it.
Every time you clear the sessions folder, all users have to log in again; so next time you can just change to memory (where all users have to log in again as well) and afterwards delete the sessions folder, which is no longer needed.

Interesting also for @inxonic


@Perflyst commented on GitHub (Mar 15, 2019):

Just as a side note, I do not use Docker (if this is relevant).
I will change to memory, but this still can't be right. Before 1.7.3/.4 it worked without this many session files.


@FloThinksPi commented on GitHub (Mar 15, 2019):

Yours crashed because you reached your inode limit; mine crashed earlier because the Docker daemon refused to work :) but that is not relevant to the problem. 5 million session files would be OK with 5 million users logging in within a one-day timeframe. However, given our user count there should normally be around 100 session files. Something is fishy with session handling when using files.

The last commit to /sessions/file.go was 9de871a0f8, which added the following lines:

9de871a0f8/vendor/github.com/go-macaron/session/file.go (L84-L87)

But I cannot see how this could cause such issues.


@FloThinksPi commented on GitHub (Mar 15, 2019):

@techknowlogick as you seem familiar with the session handling and the file method, could you have a look at this?


@lunny commented on GitHub (Mar 15, 2019):

@Perflyst @FloThinksPi could you confirm that v1.7.2 works for you?


@FloThinksPi commented on GitHub (Mar 15, 2019):

@lunny our last version was commit 20c54f88b2. We updated straight from that "non-release" version to 1.7.4. With 20c54f88b2 we did not see this issue, though we had other crashes, so our instance might have crashed before we could observe the session issue. But I think we would have noticed that many files, so I would say it was not present in 20c54f88b2.


@Perflyst commented on GitHub (Mar 15, 2019):

I can confirm that my monitoring never informed me about 80% inode usage before 1.7.3.


@FloThinksPi commented on GitHub (Mar 18, 2019):

@Perflyst @lunny today our instance ran out of memory at 12+ GB (8 GB RAM + 4 GB swap). So the bug is unfortunately also present when using memory, though it takes much longer to fill 12 GB of RAM + swap than to fill 12 GB of disk space when using file.


@techknowlogick commented on GitHub (Mar 18, 2019):

@FloThinksPi have you tried using Redis as the session provider?


@zeripath commented on GitHub (Mar 19, 2019):

@techknowlogick I think there is probably a real issue here. Drive-by visitors should not have server sessions created. This never used to happen; something is writing to these sessions and we need to stop it.

@Perflyst is there anything in the session files suggestive of the cause, so we can work out what is adding this data?


@Perflyst commented on GitHub (Mar 19, 2019):

Well, this just kills my terminal

$ cat 1156652f319d4a45
#⎽├⎼☃┼±                                                                                                                                                                           _⎺┌d_┤☃d⎽├⎼☃┼±▮±☃├e▒@±☃├e▒:·/d▒├▒/⎽e⎽⎽☃⎺┼⎽/1/1

I noticed that the sessions folder also grows when there are no visitors. It seems the sync from a mirror also fills the sessions folder.

I don't know if this is relevant, but I see a lot of 2019/03/19 08:40:28 [...ules/context/repo.go:625 func1()] [E] GetCommitsCount: Unsupported cached value type: <nil> in the gitea.log file.


@FloThinksPi commented on GitHub (Mar 19, 2019):

@techknowlogick we only tried memory and file, which seem to behave the same.

I browsed a little through the recent commits and found something interesting.

820e28c904/vendor/github.com/go-macaron/session/file.go (L126-L164)

The Read function in the manager's providers creates a file if none is found yet. So it does not only read, it also writes new sessions. Maybe one should split this functionality, but never mind for this issue.

So in:

9de871a0f8

A validator function was introduced, so now every call should go through it.
However, in the same commit in which the validator function was introduced, the call to the Read function changed from m.Read to m.provider.Read:

(screenshot: diff showing the call changed from m.Read to m.provider.Read)

So in my opinion this bypasses the manager's Read function and directly calls the provider's Read function, which does not validate the session any further.

f7f2f12b68/vendor/github.com/go-macaron/session/session.go (L311-L319)

I don't know if this could cause it, but maybe invalid sessions are passed to the Read function here and it creates them without further checks? Just a guess.
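The suspected failure mode can be modelled with a tiny sketch (all names hypothetical, a drastic simplification of the go-macaron/session code discussed above): a provider whose Read creates a missing session, wrapped by a manager whose Read validates the ID first. Calling the provider directly skips the validation, so any request carrying a bogus cookie mints a fresh session:

```go
package main

import "fmt"

// fileProvider models a provider whose Read is really read-or-create:
// a "read" for an unknown ID silently writes a new session.
type fileProvider struct{ store map[string]bool }

func (p *fileProvider) Read(sid string) string {
	if !p.store[sid] {
		p.store[sid] = true // unknown ID: create a new session on read
	}
	return sid
}

// manager wraps the provider and validates IDs before delegating.
type manager struct {
	provider *fileProvider
	isValid  func(string) bool
}

func (m *manager) Read(sid string) (string, error) {
	if !m.isValid(sid) {
		return "", fmt.Errorf("invalid session id %q", sid)
	}
	return m.provider.Read(sid), nil
}

func main() {
	m := &manager{
		provider: &fileProvider{store: map[string]bool{}},
		isValid:  func(s string) bool { return len(s) == 16 },
	}
	if _, err := m.Read("bogus"); err != nil {
		fmt.Println("manager rejects it:", err)
	}
	// Calling the provider directly, as the commit appears to do,
	// bypasses the check and materialises a session anyway:
	m.provider.Read("bogus")
	fmt.Println("sessions stored:", len(m.provider.store))
}
```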


@lunny commented on GitHub (Mar 19, 2019):

I found that v1.7.2 still has this problem, not only v1.7.4. When I refresh the home page, /manifest.json gets a new session id every time.

[Macaron] 2019-03-19 20:04:22: Started GET /manifest.json for [::1]
2019/03/19 20:04:22 [D] Session ID: 1f35a522846589cc
2019/03/19 20:04:22 [D] CSRF Token: z440K8BT9R6lqI0k2b5TL3crBV06MTU1Mjk5NzA2MjMwNzAwNzAwMA==
2019/03/19 20:04:22 [D] Template: pwa/manifest_json

@Perflyst commented on GitHub (Mar 19, 2019):

It is also present in 1.8rc1.



@lunny commented on GitHub (Mar 19, 2019):

I think the session on this route could be disabled, since it's public and only used for the PWA.


@lunny commented on GitHub (Mar 19, 2019):

@Perflyst @FloThinksPi could you confirm that #6372 fixes your problem? I will backport it to v1.8 and v1.7.5 after it's merged into master.


@Perflyst commented on GitHub (Mar 19, 2019):

Can you share a binary? linux-amd64, if possible.

Reference: github-starred/gitea#3044