Index page incredibly slow after update to 1.21.8 #12638
Closed
opened 2025-11-02 10:16:41 -06:00 by GiteaMirror · 44 comments
Originally created by @SoulSeekkor on GitHub (Mar 13, 2024).
Description
Updated from 1.21.7 to 1.21.8, and now the index page takes almost 10 seconds to load; it's insanely slow. It seems to be the heat map that is holding everything up?
Gitea Version
1.21.8
Can you reproduce the bug on the Gitea demo site?
No
Log Gist
No response
Screenshots
Git Version
2.42.0, Wire Protocol Version 2 Enabled
Operating System
Windows Server 2019
How are you running Gitea?
As a Windows service using the provided binary.
Database
MSSQL
@lunny commented on GitHub (Mar 13, 2024):
Can you force-reload this page and try again?
@SoulSeekkor commented on GitHub (Mar 13, 2024):
A force reload (in Chrome) didn't have any effect. I also loaded it in Firefox (which I don't use for anything except testing, so it had never hit the site before) and it had the same performance problem.
@SoulSeekkor commented on GitHub (Mar 13, 2024):
Quick update, my coworker appears to not have this issue even after hard refreshes. I STILL have the issue however, even after hitting it locally on the server itself with Edge. I have a feeling this is specific to administrative users.
@lunny commented on GitHub (Mar 13, 2024):
Can you provide some backend logs and web browser console -> network screenshots?
@SoulSeekkor commented on GitHub (Mar 13, 2024):
Sure, here are the contents of the log after clearing it and refreshing the main page a few times:
@SoulSeekkor commented on GitHub (Mar 13, 2024):
Here are the screenshots.
@rwalravens commented on GitHub (Mar 14, 2024):
We're experiencing the same, although for some users it already started in 1.21.1.
We suspect it has something to do with the action table.
After running the cron task 'Delete all old actions from database', performance is much better (for our company, the table shrank from ~2 million to 500k rows).
Of course this only works around the cause, but it can help while waiting for a fix.
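For anyone wanting to automate that workaround: the same cleanup can also be scheduled from app.ini. A minimal sketch, assuming the [cron.delete_old_actions] section from the Gitea config cheat sheet; the schedule and retention values below are illustrative, not taken from this thread:

```ini
; Schedule the "Delete all old actions from database" task instead of triggering it manually.
; Option names per the Gitea config cheat sheet; values are examples only.
[cron.delete_old_actions]
ENABLED = true
RUN_AT_START = false
; run the cleanup once a week
SCHEDULE = @every 168h
; delete action rows older than roughly one year
OLDER_THAN = 8760h
```

The same task can also be run on demand from the site administration area, which is what the comment above describes.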
@SoulSeekkor commented on GitHub (Mar 14, 2024):
I know that in one particular older version (not sure if it was 1.21.1 offhand) the main page performance dropped drastically, but I believe it was the org/repos listing that took forever at that time.
@lunny commented on GitHub (Mar 20, 2024):
#29905 & #29932 maybe helpful.
@SoulSeekkor commented on GitHub (Mar 20, 2024):
Looks promising!
@SoulSeekkor commented on GitHub (Mar 22, 2024):
Just noting that after updating to 1.21.9 this morning, it didn't make any difference to this page load issue for me.
@yp05327 commented on GitHub (Mar 26, 2024):
If the heatmap is involved, this issue may be related: #21045
@SoulSeekkor commented on GitHub (Mar 26, 2024):
Possibly, but I doubt it. The heatmap (despite a fair number of repos on our instance) wasn't an issue until 1.21.8; something changed in that release (and it seems to affect only admins) that made the main page almost unusable at this point. Also noting (probably expected) that performance on the latest 1.21.10 is unchanged.
@yp05327 commented on GitHub (Mar 27, 2024):
Maybe we need a trace-level log to detect where handling the request takes so long.
@SoulSeekkor commented on GitHub (Mar 27, 2024):
I stopped the service, cleared the log, set logging to trace level, then started the service, refreshed the main page, and stopped it again. Hopefully this helps!
@lunny commented on GitHub (Mar 27, 2024):
Thank you. It's very helpful.
@SoulSeekkor commented on GitHub (May 28, 2024):
Just a quick update on this: I've updated our instance to 1.22.0 and this is still an issue. Before the update I also converted the entire database collation to the recommended Latin1_General_CS_AS (to no effect).
@lunny commented on GitHub (May 29, 2024):
Can you post the backend logs from when you visit the dashboard, after enabling database logging? https://docs.gitea.com/next/administration/logging-config?_highlight=log_sql#the-xorm-logger
And if you can access the database, please run a count on the action table so that we know how many records are in it.
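For reference, a minimal app.ini sketch for capturing those logs while reproducing the slow dashboard (LOG_SQL is the documented switch; see the xorm logger documentation linked above for routing the output):

```ini
; Log the SQL statements xorm executes so slow queries show up in gitea.log.
; Disable again once the logs have been captured.
[database]
LOG_SQL = true
```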
@SoulSeekkor commented on GitHub (May 29, 2024):
I've attached the requested log; I cleared it first, started the service, refreshed the dashboard, and stopped the service. The count of the action table is 139,460.
gitea.log
@SoulSeekkor commented on GitHub (Jun 14, 2024):
Interesting bit to add to this: yesterday, when I had filled the entire feed with commits that were only mine, I noticed the index page loaded in a second or two (normal). Today, now that the feed has other commits (synced commits for mirrors in orgs), it's back to being super slow to refresh.
@hawkbee commented on GitHub (Jul 5, 2024):
The slow dashboard happened on Gitea 1.21.9; some logs may be useful.
@SoulSeekkor commented on GitHub (Jul 8, 2024):
Just adding that this is still an issue with 1.22.1.
@somera commented on GitHub (Aug 1, 2024):
I see this too in my Gitea 1.22.1 on Ubuntu with PostgreSQL.
@somera commented on GitHub (Aug 1, 2024):
The "cache context is expired, may be misused..." log entry I see in #31752.
@lixfel commented on GitHub (Aug 12, 2024):
I have the same issue with the slow home page / user page on my Gitea instance. Since the heatmap was suspected, I tested the performance with ENABLE_USER_HEATMAP = false. This resulted in no noticeable change in performance, so I suspect that not the heatmap but the user/home/activity feed is the culprit.
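(For context, that switch lives in the [service] section of app.ini; a minimal sketch of the test configuration described above:)

```ini
; Disable the user contribution heatmap on dashboard/profile pages, as in the test above.
[service]
ENABLE_USER_HEATMAP = false
```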
@somera commented on GitHub (Aug 12, 2024):
@lixfel do you see slow queries in gitea.log?
@lixfel commented on GitHub (Aug 12, 2024):
Yes (MariaDB backend). The query time roughly matches the reported page load time for the (logged in) / and /user?tab=activity pages. I have also found these queries (just once, not for every slow page load):
@somera commented on GitHub (Aug 12, 2024):
I'm using Gitea with PostgreSQL, so I can't check the statements on my setup.
Just try to analyse the statements with ANALYZE (https://mariadb.com/kb/en/analyze-statement/) and post the output here.
@lixfel commented on GitHub (Aug 12, 2024):
Context: ~122k actions (I have in the meantime set up the recommended old-action cleanup, hence the slightly shorter times).
1. Statement
2. Statement
3. Statement
Keys in the action table
@KimonHoffmann commented on GitHub (Sep 13, 2024):
We're running a relatively small instance in Docker, backed by SQLite, and have also been observing periodic, extremely long delays for quite some time (I don't remember the exact version, but I do remember it was the one where GoGit was replaced in favor of the git binary, which is why back then I chalked the added delay up to further optimizations being needed for interacting with the git binary).
Having enabled trace logging now, on 1.22.2 I can confirm the pattern observed by others above.
Please note that the following log entries are complete and have only been annotated with comments and anonymized.
They implicitly show that while this is going on the whole instance is blocked, as indicated by the upload-pack requests suffering as a result. What is common is that there is always a dashboard request in the mix, either explicitly in the form of a GET / request, or implicitly in the form of a GET /user/login request, and these appear to be the requests actually causing the delays/slowdowns.
@KimonHoffmann commented on GitHub (Sep 25, 2024):
One more data point to add to this:
My gut feeling tells me that the SQL queries themselves are probably not slow as such (the same behavior is consistently reported by many people in this discussion, spanning a variety of DB backends), but rather are the "victims" of some other activity slowing down the instance (such as filesystem access to update recently changed aspects of the git repositories when showing the dashboard). So I tried massively increasing the RAM allocated to the VM running Gitea, and I can report that I have not observed slowdowns as bad as before since then.
Before the increase the VM had 16GB of RAM to work with, with never more than 2GB active and the rest used as FS cache. With this configuration we had regular delays exceeding 35 seconds, sometimes even exceeding 1 minute(!).
Now the VM has 64GB of RAM to work with. The active size remains at less than 2GB and the rest is used as FS cache. With this configuration we've had only very few reports of slowness in general; those that were reported normally remained below 10 seconds, only a select few (3 to be exact) took longer, and not a single one exceeded 13 seconds.
@somera commented on GitHub (Sep 25, 2024):
But in this case you could switch to PostgreSQL. It will be better when Gitea is used concurrently.
@KimonHoffmann commented on GitHub (Sep 25, 2024):
True, but I doubt it would make a difference in the context of this specific issue, as others who have reported the same behavior are using PostgreSQL.
This is one of the reasons why I suspect the root cause of this issue to be something other than the DB backend. The change in behavior with extra FS cache alone partially supports this assumption.
@lixfel commented on GitHub (Sep 25, 2024):
I come to a different conclusion than you: the data presented supports the hypothesis that the SQL queries themselves are the problem. My reports on this issue were from a dedicated server with typically 16+ GiB of free/file-cache memory, with the slow times persisting across multiple page loads (when all data should be in the filesystem cache; the database is significantly smaller). A single load of a frequently displayed page should never require reading multiple gigabytes of data.
Edit: I suspect the reduction in slow page load times after the memory increase might have more to do with larger caches (due to more available memory) in the database software.
@KimonHoffmann commented on GitHub (Sep 25, 2024):
And this is a major difference from the situation I observed, which is why I came to a different conclusion.
In our case the slowdowns were regular, but reloading the dashboard immediately after a slow load did not yield any slowdowns at all (page times in the low millisecond range). The correlation appeared to be with the amount of updates to various repositories that have accumulated since the last view of the dashboard.
Edit: But it could of course very well be that we are actually observing two different issues that just happen to surface under similar circumstances.
@SoulSeekkor commented on GitHub (Sep 25, 2024):
In our case it made no difference reloading immediately after the slow dashboard load; it ALWAYS loaded incredibly slowly. The frustrating part is that after many months of problematic dashboard loading (at least all of this year), out of nowhere the slow dashboard loading seems to have stopped. This was a week or two before upgrading to 1.22.2 when that came out, so unfortunately I don't have a way to repro this now.
@lunny commented on GitHub (Sep 25, 2024):
#32127 should be a step toward resolving the problem.
@yp05327 commented on GitHub (Dec 3, 2024):
Any updates?
#32127 and the backport have been merged and released.
@lixfel commented on GitHub (Dec 3, 2024):
Updated to 1.22.4. It's better but not perfect yet, with a 4.4 s page load time instead of 15 s.
@lunny commented on GitHub (Dec 3, 2024):
Please upload some logs and pprof diagnostic reports.
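For reference, a minimal app.ini sketch for gathering such reports (ENABLE_PPROF is the documented option; by default the profiler should listen on localhost:6060 on the machine running Gitea):

```ini
; Enable the built-in Go profiler; profiles should then be reachable under
; http://localhost:6060/debug/pprof/ on the server running Gitea.
[server]
ENABLE_PPROF = true
```

Note that the root of port 6060 is expected to return 404; the data lives under /debug/pprof/, which might explain the 404 mentioned in the next comment.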
@lixfel commented on GitHub (Dec 26, 2024):
I did not manage to get pprof running (port 6060 used by ENABLE_PPROF just returned 404 to me), but I got a relevant log excerpt:
Context: 108k Actions total, 29.5k Undeleted actions from my user
Using analyze select on the query:
Removing the is_deleted clause accelerates the query to instantaneous.
@lunny commented on GitHub (Dec 26, 2024):
This problem should be fixed by #31821 and will be released in v1.23.
@SoulSeekkor commented on GitHub (Jan 10, 2025):
Can confirm that the issue seems to be resolved all around for me.
@lunny commented on GitHub (Jan 10, 2025):
Closing for now. Please file a new issue if a similar problem occurs.