Transfer the ownership (转移仓库所有权, "transfer repository ownership") does not work in v1.9.0~v1.9.2 #3826
Closed · opened 2025-11-02 05:26:39 -06:00 by GiteaMirror · 69 comments
Labels: type/bug
Originally created by @pt2go on GitHub (Aug 23, 2019).
If I roll back to v1.8.3, it works.
@bwenrich commented on GitHub (Sep 1, 2019):
(Please forgive me if I read 转移仓库所有权 ("transfer repository ownership") incorrectly. I am using Gitea with English language settings.)
Do you get a specific error when trying to use "Transfer Ownership" from the repository settings?
I just tested in https://try.gitea.io/demoorg1/iss7947 (which is actually v1.10.0 dev build), and was able to transfer this repo from myself, to be owned by a different organization.
In my personal Gitea instance running v1.9.2 I was also able to Transfer Ownership of a repository to a different username.
In the gitea.log, I think it looks something like this:
I didn't encounter a problem when testing it. Hopefully your log will show some interesting messages.
@pt2go commented on GitHub (Sep 6, 2019):
Log on console:
Log file:
The login ID is admin, which has rights to all of the repos, so why is access denied?
System: Windows Server 2008 R2
Git: 2.22.0.windows.1
Gitea: 1.9.2
@lunny commented on GitHub (Sep 6, 2019):
Maybe you should not store your Gitea data on C:\ on Windows, since it is the system disk.
@loup-brun commented on GitHub (Sep 7, 2019):
@lunny I'm experiencing frequent 500 errors after transferring repos to an organization.
This is recent, as I upgraded to Gitea 1.9.2 just last week, and it coincides with the 500s. I'm currently resolving this by stopping and restarting my gitea service installed on Raspbian (Raspberry Pi, armv7).
@guillep2k commented on GitHub (Sep 7, 2019):
@loup-brun A "500 error" is a generic code meaning "something unexpected happened". It would be very useful if you could paste here a relevant part of your gitea.log from the moment the error happened.
@loup-brun commented on GitHub (Sep 7, 2019):
Here is a log right after getting my 500 errors:
@lafriks commented on GitHub (Sep 7, 2019):
Seems like you are running out of memory
@loup-brun commented on GitHub (Sep 7, 2019):
@lafriks any suggestion on how to resolve this? That's what I thought, so I searched the Gitea documentation for options to configure memory limit on my gitea installation, but I found no option.
I am experiencing this only on repos recently transferred to an organization (old repos are fine).
@guillep2k commented on GitHub (Sep 8, 2019):
@loup-brun Perhaps those repos have other issues besides having been recently transferred. Are they big? Have many files? Many commits? You could try running some git commands on them yourself directly from the shell and see what happens. For instance, there is git-fsck that does some sanity checks. Gitea can do git-fsck itself, but if you doubt Gitea it could be nice to have a "second opinion" directly from git.
Also, make sure to use the same user as Gitea for those tests. The root user, for instance, will not have the same restrictions in resource usage.
@loup-brun commented on GitHub (Sep 8, 2019):
@guillep2k Small repos (max 4 MB), few files, max 10 commits. Only new projects (created since I updated to Gitea 1.9.2 from 1.8), which is why I suspect it has something to do with the update. (Old repos created before the update are not affected by this bug.)
@guillep2k commented on GitHub (Sep 8, 2019):
@loup-brun It wouldn't hurt to check them anyway. Could they have become corrupted somehow?
@guillep2k commented on GitHub (Sep 8, 2019):
If you are interested, the standard procedure in my company is for the dev to create the repository, have some structural work done and then transfer it to the company org. We are using 1.9.2 without problems so far.
@lunny commented on GitHub (Sep 9, 2019):
@loup-brun what's your git version and how did you install it?
@loup-brun commented on GitHub (Sep 10, 2019):
@guillep2k I ran git fsck without any problem (and have this feature turned on in Gitea).
@lunny I have git version 2.11.0 installed. I simply update my Gitea by replacing the binary /usr/local/bin/gitea (with appropriate group/file permissions) and restarting the gitea service.
I rolled back to Gitea 1.8.3 (just like @gwnpeter), which works fine: no error whatsoever on the repos whose ownership I transferred (I have tested them all several times a day for 2-3 days now).
@nikybiasion commented on GitHub (Oct 1, 2019):
On Windows, it seems that some file is still locked during os.Rename.
@nikybiasion commented on GitHub (Oct 4, 2019):
The file object\XXXXXXXXXXXXXXXXXXXXXX.pack is still locked.
@guillep2k commented on GitHub (Oct 4, 2019):
This looks like a concurrency problem, and IMHO difficult to solve in a bulletproof way. You have more than one process accessing the repo at the time of the migration, so the migration code fails.
If you think your users are not accessing it, please check whether any automated tool is either running things on it or has left processes behind locking the file. On Windows I've found that Process Explorer (an official Microsoft utility) has a search-handle function that tells you which processes have a particular file open. It may be of help.
Another related and very useful utility (you can check that out) is Process Monitor, although it's a bit more difficult to use. You need to set up the filters to log only the paths you're interested in.
@nikybiasion commented on GitHub (Oct 7, 2019):
I've tried with Process Monitor; the pack file is locked by gitea.exe. I put a 30-second pause before the rename, closed the locked file with Process Monitor, and after the pause the repository transferred correctly.
@guillep2k commented on GitHub (Oct 7, 2019):
I think that some git operation was moved from git to a Go library (go-git?) in 1.9.0. Perhaps that library does some caching, or needs some garbage collection in order to free the files?
@nikybiasion Please try the following: remove the delay from the code, but wait 30 seconds in the UI before pressing the last confirmation button. If the problem goes away, it means that the lock is produced by some operation from the page rendered before.
@guillep2k commented on GitHub (Oct 7, 2019):
Another interesting test is placing the 30s delay in different parts of the code, from the earliest point to the latest (you know that one works), until we find the point that is introducing the lock. BTW good catch!
@nikybiasion commented on GitHub (Oct 8, 2019):
If I go to the transfer page, wait until the lock disappears, and click transfer, I get a 500 error and the pack file becomes locked.
@guillep2k commented on GitHub (Oct 8, 2019):
Good to know; if you can confirm that no other accesses were being made to that repo, that tells us that Gitea is locking itself within the transfer procedure. That certainly shortens the search.
@philibe commented on GitHub (Oct 17, 2019):
Hello,
Same issue for me.
Windows 7 Pro SP1
SQLite version 3.30.1 2019-10-10 20:19:45
gitea-1.9.4-windows-4.0-amd64
Log Extract
I worked around it with a git push to admin1 (from a Linux command line).
Edit: restarting the service and deleting the session do not unlock it.
Edit 2: In fact I wanted to transfer 3 repos. For the 1st it worked, but it didn't for the 2nd and 3rd (except via the delete/recreate workaround).
@lunny commented on GitHub (Oct 18, 2019):
Have you installed antivirus software and enabled it? If so, could you try disabling it and trying again?
@realslacker commented on GitHub (Oct 18, 2019):
@lunny I'm experiencing the same issue (see #8565), I'm not running any AV software.
@guillep2k commented on GitHub (Oct 18, 2019):
@realslacker Perhaps Tortoise or another local git helper that is integrated with the Windows shell?
@guillep2k commented on GitHub (Oct 18, 2019):
You can use Process Monitor from Microsoft's Sysinternals to try to find which process is holding up the files.
I'd use a filter like this:
This should tell us which other program might be locking the directory, or whether it is Gitea itself.
@realslacker commented on GitHub (Oct 18, 2019):
The only service on the server is Gitea. This server is a brand new VM spun up for Gitea and I experienced this issue when moving a repo that was freshly imported into the wrong account.
@guillep2k commented on GitHub (Oct 18, 2019):
@realslacker I was hoping to check whether the locker was Gitea's service process or a child of it (e.g. some lingering git that should not be there).
@hiteshnayak305 commented on GitHub (Oct 21, 2019):
I think it's a problem with Windows permissions granted to the Gitea executable. I tried running it as a service and also manually as administrator, but got the same access denied.
@guillep2k commented on GitHub (Oct 21, 2019):
The error message is misleading (although it comes from Windows itself). It's almost certainly a file-locking problem. We just haven't found out yet in which step the directory becomes locked or fails to be released. Most UNIX-like filesystems don't care about file/dir locking for renaming/moving, but Windows does.
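To make that platform difference concrete, here is a minimal, self-contained Go demo (mine, not from the thread): it holds a file open inside a directory and then tries to rename the directory, which succeeds on most UNIX-like systems but fails on Windows.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Create a temp directory with one file in it, and keep the handle open.
	dir, err := os.MkdirTemp("", "lockdemo")
	if err != nil {
		panic(err)
	}
	f, err := os.Create(filepath.Join(dir, "dummy.pack"))
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Renaming the parent directory while the handle is open: on Linux
	// this succeeds; on Windows it fails with an access-denied error,
	// which is exactly the symptom reported in this issue.
	err = os.Rename(dir, dir+"-moved")
	fmt.Println("rename while a handle is open:", err)
}
```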
@Sebazzz commented on GitHub (Nov 1, 2019):
Is there a way to do it manually or a workaround? (Like forking, then deleting the original?)
@ketilnil commented on GitHub (Nov 1, 2019):
I followed the tip from gwnpeter at the top, downgrading to v1.8.3. I then did all the transferring I needed to do, and upgraded back to v1.9.4.
@guillep2k commented on GitHub (Nov 2, 2019):
First of all I must say that I couldn't reproduce the problem (Windows 10, 1.11.0+dev-141-g484edb758). However, I'm just noticing this important part of @nikybiasion's comment: "pack file".
Now I'm thinking:
- The lock is on git pack files (although @nikybiasion is clear about this, we don't know if this is true for all cases).
- models.TransferOwnership() performs no git operation whatsoever!
So I think the cause is completely unrelated to the transfer itself. It only happens to affect repo ownership transfer because that operation requires moving the repository folder. Possible causes:
For example, I follow this path:
GetActivityStats() is just the first function I've found calling OpenRepository(); there may be many more. Is it possible that operations like this keep the repo files open due to lazy garbage collection? Linux users won't notice, but Windows users will. And it will not be easy to reproduce in a simple test installation with only a couple of repositories and a single user.
@zeripath commented on GitHub (Nov 2, 2019):
Hmm, might it make sense to run a git gc just before moving the git repository? I'm not sure what gc does when a pack file is locked, but I think it waits. If so, after the gc is done all locks should have been released, and you should be able to move the repo.
The only other option I can think of is to fully clone the repo on Windows and do a deferred delete, perhaps using the git gc trick above to ensure that when it finishes you're hopefully ready to delete.
Because of our file-system architecture with a move, it's possible on all systems that a transfer of a busy repo could lead to failures of pushes, be those from web edits, merges or external. On Linux that could lead to dangling objects; on Windows they should just disappear into the ether.
The only true way to prevent this from ever happening is to store repos by their immutable ID. Cloning would prevent the inconsistent data but would not allow ongoing actions to succeed.
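A short sketch of the "git gc before the move" idea above, under the assumption that it shells out to the git binary (the helper name is mine):

```go
import "os/exec"

// gcBeforeMove (hypothetical helper) runs `git gc` in the repository
// directory; git takes or waits on its own pack locks, so when it
// returns, the pack files should be released and the move can proceed.
func gcBeforeMove(repoPath string) error {
	cmd := exec.Command("git", "gc")
	cmd.Dir = repoPath
	return cmd.Run()
}
```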
@zeripath commented on GitHub (Nov 2, 2019):
As an aside, if we did move to the ID approach described above, then to help users recover from inevitable crashes the .git/config file should be updated to contain the real name and owner of the repository.
I think we could generally (ab)use our git repositories further, to store more of our basic information to help people restore from failures.
Even more crazy: the git object db and references are just a hash object store; there's nothing that says you have to store only source files in there. Issues and comments could all be there. There's a surprising number of people who manage to hose their DBs.
@Sebazzz commented on GitHub (Nov 2, 2019):
My input based on my situation:
I think that is ruled out, because I was the only one using the system (we are in the preliminary steps of migrating from SVN to git). Of course, I was on the settings page of the repository in question 😃
No commits besides the initial import (git push --mirror <path to git svn cloned repo>). No hooks have been created either.
That, or the code-indexing operation, I think. But it always happens. I haven't checked whether I can do it via the API; perhaps it would work then.
@ketilnil commented on GitHub (Nov 2, 2019):
I'm also the only user of my Gitea installation. The repositories in question were moved from SVN the same day. I decided right away I had to partition the repos into 2 organizations. About half the repos transferred ownership without problems. I could not identify any similarities among the ones that didn't, but it was consistently the same repos that had the problem. I could transfer the other ones back and forth multiple times without issue. Restarting the Gitea service didn't help. I used Process Explorer on a couple of the attempted transfers, and all of them created a lock on a .pack file the exact moment I pressed the "transfer ownership" button.
@guillep2k commented on GitHub (Nov 2, 2019):
In light of the new information, I've found out that my claim that "the repo transfer doesn't access the repo files" was... incomplete. It turns out that, as part of Macaron routing, two functions are executed right before routers/repo/setting.go:SettingsPost() (i.e. before processing the transfer request):
- RepoAssignment() performs a lot of queries, all of them to repository refs.
- RepoRefByType() calls GetBranchCommit() and GetCommitsCount(); they both open the repository objects to do their work.
If someone is interested, these are the system calls captured by Process Monitor (I still can't reproduce the bug locally):
RepoAssignment.txt
RepoRefByType.txt
@guillep2k commented on GitHub (Nov 2, 2019):
I thought about that, but indexing is only performed after actual changes (i.e. commits, repo imports). From what you guys said, I think indexing should be long complete by the time of your tests. Transferring the ownership doesn't change the repo ID in the database, so the index doesn't need to be updated (and it doesn't).
@Sebazzz commented on GitHub (Nov 2, 2019):
In my case I verified with pshandle and Process Monitor that there were no open handles on the directory or any file below it. Still, I thought it might be relevant 👍
@guillep2k commented on GitHub (Nov 2, 2019):
@Sebazzz And it still failed?
@Sebazzz commented on GitHub (Nov 2, 2019):
Yes, transferring the repository failed. As you mentioned, some actions involving reading the repository happen when clicking the button. Then it failed when moving.
@guillep2k commented on GitHub (Nov 2, 2019):
@zeripath Rather than doing very complicated maneuvers to perform a task that works very efficiently 99% of the time, what about this plan?:
@nikybiasion was successful by "brutally" inserting a 30s pause before the rename.
Maybe only for Windows?
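The plan reads as "retry the rename after a short delay" (guillep2k clarifies below that he means a one-second retry, not a fixed 30-second wait). A minimal sketch of that idea, with a hypothetical helper name:

```go
import (
	"os"
	"time"
)

// renameWithRetry retries os.Rename a few times, sleeping between
// attempts so lingering pack-file handles get a chance to be released.
func renameWithRetry(oldPath, newPath string, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = os.Rename(oldPath, newPath); err == nil {
			return nil
		}
		time.Sleep(delay)
	}
	return err
}
```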
@Sebazzz commented on GitHub (Nov 2, 2019):
I could live with that; it's not like I make a habit of moving repositories around. If it takes 30 seconds longer, it is at least a temporary fix.
@guillep2k commented on GitHub (Nov 2, 2019):
I don't mean implementing a 30-second delay! 😆 Just one second, and then try again. It's only that this has been tested with 30s.
@Sebazzz commented on GitHub (Nov 7, 2019):
I'm hitting this when renaming a repo in an org:
This might be related.
@guillep2k commented on GitHub (Nov 8, 2019):
@Sebazzz I totally think it's related, but I was never able to reproduce it. Are you able to reproduce this with a new repo (e.g. one that has no traffic or complex history)?
@Sebazzz commented on GitHub (Nov 8, 2019):
@guillep2k Yes, I can reproduce the ownership-transfer issue and the rename issue on a new repository with only 1 commit.
Can I work around this by moving/renaming the directory manually and renaming the references in the database?
@guillep2k commented on GitHub (Nov 8, 2019):
@Sebazzz Yes, for just a rename you can; for a transfer it's more complicated because of the team permissions, and I'm not sure about the correct procedure. If you're pressed to transfer the repo and you don't care about issues/PRs, you could just import/migrate it into the org and delete it from the user's account. You need to enable local file migration in app.ini for that.
However, I'm more interested in finding the root cause. 😁
Can you collect the following information for me about the repo and its folder?
Thank you!
@lunny commented on GitHub (Nov 8, 2019):
Maybe the indexer or some other task is running.
@Sebazzz commented on GitHub (Nov 8, 2019):
@lunny No, I can rename it from Windows Explorer or PowerShell just fine.
Standard NTFS:
Those are fine; I ran an extra recursion to be sure. The gitea user is the owner as well.
"C:\Users\[service account]\.ssh\authorized_keys" is an empty file (I assume you are looking for that file). I actually have SSH disabled.
I ran a dir /b/s gitea.exe in both partitions; there is only one gitea.exe.
No, just the default Gitea-installed hooks.
Thank you 😊
@Sebazzz commented on GitHub (Nov 8, 2019):
This is the failing operation:
After the operation has failed I can still rename the dir from cmd/Explorer, so I don't think any handles are kept open.
However, based on this issue it appears Windows does not like it when a rename occurs while a handle is still open.
Edit: I think that might be the case; just before writing the log file, a handle to the directory is closed:
Edit 2: It does not happen via the API (PATCH /repos/{owner}/{repo}), which means it is an issue in the edit page. Since I cannot transfer ownership via the API (hello, feature request? 👍) I cannot confirm that this is also the issue for the ownership transfer.
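For reference, the API-side rename that reportedly avoids the lock might look like this in Go (a hedged sketch; the host, token and repo names are placeholders; the endpoint is Gitea's edit-repository API):

```go
import (
	"net/http"
	"strings"
)

// renameViaAPI renames a repository through Gitea's v1 API, the path
// that reportedly does not trigger the lock. Host, token and repo
// names below are placeholders.
func renameViaAPI() error {
	body := strings.NewReader(`{"name": "newname"}`)
	req, err := http.NewRequest(http.MethodPatch,
		"https://gitea.example.com/api/v1/repos/someorg/oldname", body)
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "token <api-token>")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	return resp.Body.Close()
}
```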
@guillep2k commented on GitHub (Nov 8, 2019):
Thank you for all this useful information. Yes, a lock on the directory due to dangling open handles is the main suspect; it's also a noticeable difference in how Windows handles file systems compared with Linux.
I'll try my best again to reproduce this problem with this information in mind.
@Sebazzz commented on GitHub (Nov 9, 2019):
Good luck; I have made several attempts just now and I can't put my finger on it.
I have tried:
using mssql like the prod instance does;
I still cannot reproduce it, while it still goes wrong on the prod instance.
I'm not familiar with Golang, but perhaps I will try using Delve to debug it.
I debugged it; it happens somewhere in os.Rename. There the handle to the directory is opened, which subsequently causes the issue to occur.
@zeripath commented on GitHub (Nov 9, 2019):
This sounds incredibly frustrating! @guillep2k, you know it could be TestPullRequests... pr.testPatch doesn't create a temporary repo, just a clean index.
If transfer ownership doesn't do:
c15d371939/models/pull.go (L609-L610)
then it wouldn't know that the repo is supposed to be locked.
(Because of the way transfer ownership is written, we probably need to add all of those locks back in everywhere.)
@Sebazzz commented on GitHub (Nov 9, 2019):
Somehow a handle to a pack file is kept open:
I observed it two times: once one handle was kept open, once three handles were kept open. It is not os.Rename after all. If I break the handle, the process goes fine, although the second time it managed to crash the Gitea process.
The handles are opened before SettingsPost, and they are soon, but not immediately, cleaned up after the function returns.
After this line, the handle is cleaned up:
Edit: that is not true; it was simply that GC happened to trigger then...
@Sebazzz commented on GitHub (Nov 9, 2019):
I somehow got a test case. I'm not sure why, but it always reproduces. I suspect it has something to do with the git pack files. The handle is opened in the same thread that processes the request, so it is not a parallel operation that causes it.
This is a complete test case; in my case I ran it at Z:\Dev\Gitea, but the paths shouldn't matter.
Download link: https://1drv.ms/u/s!AuWWgEGGFWmIpOEXxslyffIglnw_6w?e=C9OKTQ
Username: testuser
Password: testcase
@Sebazzz commented on GitHub (Nov 9, 2019):
This is the function that leaks handles:
In this function the leaking handle is created.
@zeripath commented on GitHub (Nov 9, 2019):
Aha.
Should we be closing the git repo?
1f3ba6919d/modules/context/repo.go (L591)
@Sebazzz commented on GitHub (Nov 9, 2019):
I'm beginning to suspect that #6478 might be the cause. It is the only large change in this code path. RepoRefByType hasn't changed in the last two years except for a name change. In addition, 1.8.3 is the last version known to work, and 1.9.0 is the first version to include #6478, which essentially uses a different library to read the repositories, if I read the pull request correctly.
Edit: @guillep2k Reading the code, I believe we eventually come to go-git's getFromPackFile:
If we then zoom back out to where the repository is opened, this option is given:
This means the handles to the pack files are not closed explicitly, which is exactly what I observed. You then rely on GC to close the handle. The handle is kept open, an attempt to rename or move the repository folder is made, and then "computer says no".
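A tiny illustration (mine, not from the thread) of why relying on GC is fragile here: Go's *os.File closes its descriptor in a finalizer, so a handle that merely goes out of scope stays open until a collection happens to run.

```go
import (
	"os"
	"runtime"
)

// openAndDrop opens a file and drops the reference without Close. The
// OS descriptor stays open until the garbage collector runs the file's
// finalizer, which on an idle server can be arbitrarily later.
func openAndDrop(path string) {
	f, _ := os.Open(path)
	_ = f // no Close: the descriptor outlives this function

	// Forcing a collection (roughly) releases it; in normal operation
	// nothing guarantees when that happens.
	runtime.GC()
}
```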
@zeripath commented on GitHub (Nov 9, 2019):
@Sebazzz #6478 is almost certainly the cause.
@lunny commented on GitHub (Nov 9, 2019):
@zeripath ctx.Repo.GitRepo, err = git.OpenRepository(repoPath). There is no Close for the repository.
@Sebazzz commented on GitHub (Nov 9, 2019):
No, but there is in storage.dir. That holds the descriptors/handles for the opened pack files. It should, at least for Windows, be closed prior to rename/TransferOwnership, or have KeepDescriptors set to false. I don't know whether you want to keep this leak for Linux.
See also Storage Options:
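To make the trade-off concrete, here is a hedged sketch using the go-git v4 calls quoted in this thread: open with KeepDescriptors for speed, but hand the storage back so the caller can Close() it (via the embedded ObjectStorage, per the comments below) before the directory is renamed. The helper name is mine:

```go
import (
	"gopkg.in/src-d/go-billy.v4/osfs"
	git "gopkg.in/src-d/go-git.v4"
	"gopkg.in/src-d/go-git.v4/plumbing/cache"
	"gopkg.in/src-d/go-git.v4/storage/filesystem"
)

// openKeepingDescriptors opens a bare repository with cached pack-file
// descriptors and returns the storage so the caller can close it
// (releasing the descriptors) before any rename of the directory.
func openKeepingDescriptors(gitDir string) (*git.Repository, *filesystem.Storage, error) {
	fs := osfs.New(gitDir)
	st := filesystem.NewStorageWithOptions(fs, cache.NewObjectLRUDefault(),
		filesystem.Options{KeepDescriptors: true})
	repo, err := git.Open(st, nil) // nil worktree for a bare repo
	return repo, st, err
}
```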
@guillep2k commented on GitHub (Nov 9, 2019):
@lunny @zeripath I think we should destroy ctx.Repo.Repository before the rename/migration, as it will be invalid afterwards anyway. We may need to explicitly call gc after that, at least as a workaround. In the meantime, perhaps we could ask for a Close() method upstream?
@lunny commented on GitHub (Nov 9, 2019):
@guillep2k Just like @Sebazzz said, it has Close on Storage, but we don't hold on to it. And we use KeepDescriptors as true in storage := filesystem.NewStorageWithOptions(fs, cache.NewObjectLRUDefault(), filesystem.Options{KeepDescriptors: true}).
@filipnavara Do you remember why KeepDescriptors is true here? Could we simply change it to false?
@zeripath commented on GitHub (Nov 9, 2019):
OK, filesystem.Storage has ObjectStorage as a field, which does have a Close method.
We set this as a private field in git.Repository here:
a647a54a08/modules/git/repo.go (L111)
So we could add a Close() method to git.Repository.
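A sketch of that suggestion, assuming Gitea's git.Repository keeps the go-git storage in a private field (the field name here is hypothetical):

```go
// Close releases the pack-file descriptors cached by the underlying
// go-git filesystem storage so the repository directory can safely be
// renamed or moved afterwards. The gogitStorage field is an assumption.
func (repo *Repository) Close() error {
	if repo.gogitStorage == nil {
		return nil
	}
	return repo.gogitStorage.Close()
}
```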
@filipnavara commented on GitHub (Nov 9, 2019):
It is a performance optimization to avoid constantly re-opening the files [for the duration of one page load]. As @zeripath pointed out, there's a Close method that should be called at some point. It probably got lost during one of my rebases when adjusting the code.
@Sebazzz commented on GitHub (Nov 9, 2019):
Is that Close method also worth calling at the end of every request, instead of relying on the GC in general (besides the solution of calling it early for these two bugs)?
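For illustration, the end-of-request variant could look roughly like this (a sketch with stand-in types, not Gitea's actual middleware):

```go
// Minimal stand-in types for the sketch (not Gitea's real ones).
type GitRepo struct{}

func (r *GitRepo) Close() error { return nil }

type RepoContext struct{ GitRepo *GitRepo }
type Context struct{ Repo *RepoContext }

// withRepoClose runs the handler and closes the per-request repository
// handle when it returns, instead of relying on the garbage collector.
func withRepoClose(ctx *Context, handler func(*Context)) {
	defer func() {
		if ctx.Repo != nil && ctx.Repo.GitRepo != nil {
			ctx.Repo.GitRepo.Close()
		}
	}()
	handler(ctx)
}
```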