[Proposal][Discuss] Gitea Cluster #6426

Open
opened 2025-11-02 06:55:25 -06:00 by GiteaMirror · 22 comments

Originally created by @lunny on GitHub (Dec 2, 2020).

How does a Gitea deployment scale? A Gitea cluster should resolve part of it.

Currently, when running several Gitea instances that share a database and git storage, there are still some things to resolve:

  • [ ] Crons: Currently every Gitea instance runs all the cron tasks. That is duplicated work and wastes CPU and disk. The idea is that the cron tasks should be split across the Gitea instances.
  • [ ] Migrating: You cannot stop a running migration task, because you don't know which Gitea instance is running it.
  • [x] Git Storage: A shared/copied git storage is required. Currently, you can use NFS or an RWX file system (K8s). Alternatively, every Gitea instance could store only part of the repositories, with incoming requests routed to the right instance (see the sketch below). A better solution would be to integrate Gitaly, but that is not implemented yet.

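A minimal sketch of the routing idea from the Git Storage item above, assuming a hypothetical static list of instance base URLs: each repository is deterministically mapped to one instance by hashing its owner/name path, so any node (or a thin proxy in front) can forward a request to the node that holds the repository.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// instances is a hypothetical static list of Gitea instance base URLs;
// a real deployment would discover these from configuration or the DB.
var instances = []string{
	"http://gitea-0:3000",
	"http://gitea-1:3000",
	"http://gitea-2:3000",
}

// instanceFor maps a repository path (e.g. "org/repo") to the instance that
// stores it. FNV-1a is an arbitrary choice; note that plain modulo hashing
// reshuffles repositories whenever the instance count changes, so a real
// implementation would likely want consistent hashing instead.
func instanceFor(repoPath string) string {
	h := fnv.New32a()
	h.Write([]byte(repoPath))
	return instances[h.Sum32()%uint32(len(instances))]
}

func main() {
	fmt.Println(instanceFor("go-gitea/gitea")) // stable routing target
}
```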
comment by @wxiaoguang

  • [ ] If there is no session or IP stickiness, I guess it would trigger database deadlocks more frequently due to transaction conflicts.
  • [ ] Actions Crons: It conflicts with the Actions cron; there will be multiple duplicate tasks.
  • [x] Packages: It would trigger Docker's duplicate-insert bug again, because there is only a workaround (an in-process mutex) at the moment.
  • [ ] UI notifications: I guess "event source" doesn't work with a cluster either.
  • [x] Locks: Some packages depend on the ExclusivePool pool, which is also in-process now.
    #31813 (based on #31908)
GiteaMirror added the type/proposal and type/summary labels 2025-11-02 06:55:25 -06:00

@6543 commented on GitHub (Dec 2, 2020):

  • for cron I propose: cron only creates tasks, which are represented in the DB (like it's done with migration tasks)

  • for tasks: each instance should have a unique ID (GUID); an instance fetches tasks from the DB and alters their state by setting the status to running and adding its GUID & PID to the table

  • there must be some way Gitea instances can speak to each other, with the GUID as identifier, to:

    • send cancel/pause/continue "signals"
  • propose a heartbeat to recover & clean up the tasks of crashed Gitea instances:

```go
import (
	"time"

	"xorm.io/xorm"
)

// Heartbeat row, written periodically by every instance; a stale beat
// means that instance crashed and its tasks must be recovered.
type Heartbeat struct {
	GUID        int64 `xorm:"'guid' pk"`
	Beat        int64 `xorm:"'beat'"`         // unix timestamp of the last beat
	RecoverGUID int64 `xorm:"'recover_guid'"` // zero if nothing crashed
}
```

On process.Manager creation, start heartbeatFunc():

```go
func heartbeatFunc(x *xorm.Engine, myGUID int64) {
	for {
		// refresh our own beat
		x.Where("guid = ?", myGUID).Cols("beat").
			Update(&Heartbeat{Beat: time.Now().Unix()})

		// find instances whose beat timed out and that nobody recovers yet
		var crashed []Heartbeat
		cutoff := time.Now().Add(-time.Minute).Unix()
		x.Where("beat < ? AND recover_guid = 0", cutoff).Find(&crashed)

		for _, c := range crashed {
			// claim the recovery; the WHERE clause makes sure no other
			// instance has taken the recover step in the meantime
			affected, _ := x.Where("guid = ? AND recover_guid = 0", c.GUID).
				Cols("recover_guid").Update(&Heartbeat{RecoverGUID: myGUID})
			if affected == 0 {
				continue // another instance won the race
			}
			// now reset all tasks that belonged to c.GUID ...
		}

		time.Sleep(20 * time.Second)
	}
}
```

The modules/task code will need to be refactored to have an easy interface:

```go
task.Signal(task.CANCEL, guid, pid) // if guid is not this instance's, forward the signal to the owning instance ...
task.Run(t *task.Task)
...
```

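A rough sketch of how that signal routing could look, assuming a hypothetical task_signal table that every instance polls; getGUID() and signalLocal() stand in for the instance-ID helper and the local process-manager dispatch:

```go
import "xorm.io/xorm"

// TaskSignal is a hypothetical table used as a mailbox between instances.
type TaskSignal struct {
	ID         int64  `xorm:"'id' pk autoincr"`
	TargetGUID int64  `xorm:"'target_guid'"` // instance that owns the task
	PID        int64  `xorm:"'pid'"`         // process ID of the task there
	Signal     string `xorm:"'signal'"`      // "cancel", "pause", "continue"
}

// SendSignal delivers a signal to a task wherever it runs: directly if the
// task is local, otherwise by inserting a row the owning instance polls.
func SendSignal(x *xorm.Engine, sig string, guid, pid int64) error {
	if guid == getGUID() { // getGUID(): this instance's own GUID (assumed helper)
		return signalLocal(sig, pid) // signalLocal: assumed local dispatch
	}
	_, err := x.Insert(&TaskSignal{TargetGUID: guid, PID: pid, Signal: sig})
	return err
}
```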

@lafriks commented on GitHub (Dec 2, 2020):

Some kind of git storage layer would be needed IMHO (something like GitLab has).


@6543 commented on GitHub (Dec 2, 2020):

I would focus on tasks, since git data via shared storage works quite well at the moment.


@lunny commented on GitHub (Dec 3, 2020):

> I would focus on tasks, since git data via shared storage works quite well at the moment.

It does, but it's in fact expensive. So a distributed git data storage layer will still be a necessary feature of Gitea in the future.


@Codeberg-org commented on GitHub (Dec 3, 2020):

> I would focus on tasks, since git data via shared storage works quite well at the moment.

+1

Safe distributed/concurrent Gitea is surely the highest priority from a user point of view, as off-the-shelf options for distributed SQL databases and distributed file systems are readily available.


@6543 commented on GitHub (Feb 3, 2021):

Roadmap:

  1. master election
  2. logging & communication for the processManager
  3. tasks

master election

done by the DBMS: whoever gets the SQL select-and-update query in first wins (see the sketch after the list below)

  • need a heartbeat table in the DB
  • GUID creation in process.GetManager()
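A minimal sketch of that DBMS-side election, assuming a hypothetical one-row leader table with guid and expires columns: every instance periodically fires the same UPDATE, and only the instance whose statement actually matches holds (or keeps) the master role until the lease expires.

```go
import (
	"time"

	"xorm.io/xorm"
)

// ClaimLeadership returns true if this instance is (still) the master.
// The single UPDATE is atomic, so the DBMS arbitrates the election: the
// statement only matches when the previous lease expired or we hold it.
func ClaimLeadership(x *xorm.Engine, myGUID int64, ttl time.Duration) (bool, error) {
	now := time.Now().Unix()
	res, err := x.Exec(
		"UPDATE leader SET guid = ?, expires = ? WHERE expires < ? OR guid = ?",
		myGUID, now+int64(ttl.Seconds()), now, myGUID)
	if err != nil {
		return false, err
	}
	n, err := res.RowsAffected()
	return n == 1, err
}
```

An instance would call this on a timer well shorter than the TTL, so a crashed master's lease simply expires and the next caller takes over.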

~7 message types

  • CANCEL - cancel processes
  • ACK
  • LIST - get running processes
  • IMRUNNING
  • PING - are you alive?
  • STATUS - get the status of a specific process
  • CREATE - optionally create a process on another instance

message communication

some sort of https://nats.io/, https://activemq.apache.org/cross-language-clients, ... over DB, Redis, ... ?

sidenotes

  • only the master should start cron jobs
  • trigger webhooks? (open question)

@gary-mazz commented on GitHub (Mar 18, 2021):

Interesting discussion. I think this started back in 2017 with #2959.

There needs to be recognition of two cluster use cases, Load Balancing and High Availability (HA), with two types of location configuration: local and remote.

The more distant the cluster participants, the more data handling shifts from synchronous (near real-time) to delayed, creating a spectrum of data-synchronization quality levels from highly consistent to eventually consistent.

Technologies picked should be able to operate at a distance as well as on local premises without reconfiguration. Secure communication via tunneling and certificate-based authentication between nodes should also be considered.

The "tricky part" is figuring out where to put the replication. Since Gitea supports multiple databases, and each employs a different and incompatible replication mechanism, a formalized middleware layer is likely required to replicate data. Middle-layer replication also allows different DB backends (e.g. PostgreSQL and MySQL) to provide transparent replication.

Replication will need some type of lockout strategy for check-in/check-out and zip operations during replication activity. The options are:

  1. Lock out client access until replication activity is completed.
  2. Lock out replication updates (cache operations) until client activity is complete.
  3. Fail client operations if replication updates touch files involved in client operations.
  4. Delay/pause client operations until replication activities and status checks are complete (for remote-site failover and load balancing).

With remote-site load balancing, it is possible to have check-in collisions causing inconsistencies. The use cases that cause these conditions:

  1. system clocks fall out of sync between servers (both local and remote locations)
  2. remote-site load balancing loses its replication network connection(s) (two-headed monster)
  3. normal networking and load delays cause race conditions between servers (occurs in both local and remote configurations)

I hope this helps some of your design decisions.

PS: don't forget config-file change pushes.


@lafriks commented on GitHub (Mar 22, 2021):

We will probably also need some kind of git repository access layer, so that repositories can be distributed across the cluster with local storage.


@imacks commented on GitHub (Jun 3, 2021):

Just want to contribute my own experience using Gitea for the last couple of years.

Our first attempt was to run dockerized Gitea in kube, with the storage back end provided by NFS. We rely on kube healthchecks to restart an unresponsive Gitea instance, which can run on any tainted host managed by kube. This somewhat solves the reliability issue, though there is a period of unavailability while the container restarts.

Our v2 setup swaps out NFS for Ceph CSI in kube. R/W performance improved dramatically. We also use the S3 compatibility layer in Ceph to store LFS data.

My most pressing desire for v3 is HA. We can be less ambitious and work on a single local cluster first. There could be a dedicated pod for running cron tasks, so Gitea can concentrate on doing git and webserver stuff. We could also use S3 exclusively for storage, for its sync capabilities.


@viceice commented on GitHub (Jan 4, 2023):

> Just want to contribute my own experience using Gitea for the last couple of years.
>
> Our first attempt was to run dockerized Gitea in kube, with the storage back end provided by NFS. We rely on kube healthchecks to restart an unresponsive Gitea instance, which can run on any tainted host managed by kube. This somewhat solves the reliability issue, though there is a period of unavailability while the container restarts.
>
> Our v2 setup swaps out NFS for Ceph CSI in kube. R/W performance improved dramatically. We also use the S3 compatibility layer in Ceph to store LFS data.
>
> My most pressing desire for v3 is HA. We can be less ambitious and work on a single local cluster first. There could be a dedicated pod for running cron tasks, so Gitea can concentrate on doing git and webserver stuff. We could also use S3 exclusively for storage, for its sync capabilities.

Do you have any hints for moving from NFS to Ceph CSI? I'd like to test out the performance. I already use S3 (MinIO) for all other Gitea storage.


@piamo commented on GitHub (Mar 10, 2023):

> Just want to contribute my own experience using Gitea for the last couple of years.
>
> Our first attempt was to run dockerized Gitea in kube, with the storage back end provided by NFS. We rely on kube healthchecks to restart an unresponsive Gitea instance, which can run on any tainted host managed by kube. This somewhat solves the reliability issue, though there is a period of unavailability while the container restarts.
>
> Our v2 setup swaps out NFS for Ceph CSI in kube. R/W performance improved dramatically. We also use the S3 compatibility layer in Ceph to store LFS data.
>
> My most pressing desire for v3 is HA. We can be less ambitious and work on a single local cluster first. There could be a dedicated pod for running cron tasks, so Gitea can concentrate on doing git and webserver stuff. We could also use S3 exclusively for storage, for its sync capabilities.

Will there be concurrency problems when using Ceph CSI, since there is no file-lock protection?


@imacks commented on GitHub (Mar 11, 2023):

@piamo no. Only a single instance of Gitea runs at any one time, so no locking is necessary. The appropriate Ceph volume is auto-mounted on whichever host the Gitea container runs on. So yeah, my setup is not HA, just resilient to host failure.


@piamo commented on GitHub (Apr 21, 2023):

> @piamo no. Only a single instance of Gitea runs at any one time, so no locking is necessary. The appropriate Ceph volume is auto-mounted on whichever host the Gitea container runs on. So yeah, my setup is not HA, just resilient to host failure.

@imacks But if two or more concurrent requests try to change the same repo, a lock is still necessary.


@harryzcy commented on GitHub (May 2, 2023):

I think one immediate step for Gitea would be to enable limiting instances to read-only operations and disabling cron, to somewhat achieve high availability. Many parts can already be deployed in an HA way:

  • database: depends on the replication of the database itself, e.g. when using Postgres replication
  • git storage: NFS, Ceph, or Longhorn are possible solutions (but ReadWriteMany and ReadWriteOnce may have drastically different performance)
  • session: a Redis cluster already handles that

What we need right now is to allow disabling cron jobs; then Gitea can be deployed in a cluster with ReadWriteMany storage for git objects. To support ReadWriteOnce storage, the files need to be replicated by Gitea instead of the storage provider. Then Gitea must have a read-only mode, and those replicas need to pull changes from the master instance. In this case, read-only operations should be identifiable so that a load balancer can route traffic properly (see the sketch at the end of this comment).

After we have done the above step, we could try to find a leader-election protocol so that a replica can be promoted to master if the master is down. This would be the second step.

Only after we have done that can we start to split cron jobs across multiple instances. I think this is more complicated than the first two steps above.
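A minimal sketch of such a read-only mode as plain net/http middleware (not an existing Gitea feature): replicas let safe methods through and refuse everything else, which also gives the load balancer a simple rule for sending writes to the master. It is only a first approximation; a git push over smart HTTP arrives as POST, but a few GET endpoints may still write state.

```go
import "net/http"

// readOnlyGuard lets safe (read-only) HTTP methods through and rejects
// writes, so a replica can serve reads while the master handles writes.
func readOnlyGuard(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		switch r.Method {
		case http.MethodGet, http.MethodHead, http.MethodOptions:
			next.ServeHTTP(w, r)
		default:
			http.Error(w, "this instance is read-only", http.StatusServiceUnavailable)
		}
	})
}
```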


@pat-s commented on GitHub (May 2, 2023):

Just FYI, we have an active WIP for a Gitea-HA setup in the helm-chart going on right now: https://gitea.com/gitea/helm-chart/pulls/437

It is based on Postgres-HA, an RWX file system, and redis-cluster.
I think that using RWX storage solves part of the leader-election logic with respect to tasks and communication.

The only remaining true issue is the duplicated cron executions. The biggest problem would be both instances doing the same thing at the exact same moment and crashing as a result.
I haven't tested it in practice yet, though.

Maybe implementing a random offset/sleep could help in the first place, to at least ensure proper functionality (see the sketch below)? All jobs would still be executed redundantly, but it would at least allow us to make some initial progress.
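A minimal sketch of that random offset, assuming the cron scheduler start can simply be delayed; note this only shrinks the collision window, it does not eliminate duplicate runs:

```go
import (
	"math/rand"
	"time"
)

// startCronWithJitter delays this instance's cron scheduler by a random
// offset so that instances rarely fire the same job at the same moment.
func startCronWithJitter(maxJitter time.Duration, startScheduler func()) {
	time.Sleep(time.Duration(rand.Int63n(int64(maxJitter))))
	startScheduler()
}
```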


@lunny commented on GitHub (May 3, 2023):

Apart from cron, there are in fact still some locks that need to be refactored, see #22176.


@wxiaoguang commented on GitHub (May 15, 2023):

  • If there is no session or IP stickiness, I guess it would trigger database deadlocks more frequently due to transaction conflicts.
  • It conflicts with the Actions cron; there will be multiple duplicate tasks (related to the cron/task problem above).
  • It would trigger Docker's duplicate-insert bug again, because there is only a workaround (an in-process mutex) at the moment.
  • I guess "eventsource" doesn't work with a cluster either.
  • Some packages depend on the ExclusivePool pool, which is also in-process now (mentioned above).

@pat-s commented on GitHub (May 15, 2023):

Idk what the "docker's duplicate insert bug" is here, and all the other points are also somewhat unclear in terms of severity. I think we need to check and find out in the end.

And to test all of them, we need a (functional) HA cluster first to test on.

I can provide an instance for testing if needed. Are you interested, @wxiaoguang @lunny? I could also give you access to the k8s namespace so you can explore the pods yourself.

On the other hand, I wonder if this could also be set up and tested using the project funds? A Terraform setup which destroys everything again after testing is not a big deal. And the helm-chart logic for an HA setup is ready.


@lunny commented on GitHub (May 16, 2023):

I think most problems here are obvious at the code level. Maybe we will find more when we start testing. Thank you for your idea about the testing infrastructure; when we need it, we can discuss it. But for now, there are so many problems that maybe we should begin by starting some discussions or sending some PRs.


@wxiaoguang commented on GitHub (May 16, 2023):

> Idk what the "docker's duplicate insert bug" is here and all the other points are also somewhat unclear in terms of severity. I think we need to check and find out in the end.

Context:

  • https://github.com/go-gitea/gitea/pull/21862
  • https://github.com/go-gitea/gitea/pull/21862/files#diff-239d7f1fa93717ce75c12a91af1cc9f9d585993f7a1da9b5a507606689df4994R39-R41

> I can provide an instance for testing if needed. Are you interested? I could also give you access to the k8s namespace so you can explore the pods yourself.

I am interested; however, I have a quite long TODO list and many new PRs:

  • https://github.com/go-gitea/gitea/issues/created_by/wxiaoguang
  • https://github.com/go-gitea/gitea/pulls?q=is%3Apr+author%3Awxiaoguang

So I don't think I have the bandwidth at the moment.


@prskr commented on GitHub (Nov 9, 2023):

I didn't check everything in the code so far, but I think something like https://github.com/hibiken/asynq could help with the cron issues?

For shared repo access, I was actually wondering why not try to abstract it, e.g. with S3-compatible storage, and use something like Redlock to synchronize access to repositories (a sketch follows below). I'd even assume concurrent reads should be fine? It's only about consistency when writing to a repository (presumably)?
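A sketch of the Redlock idea using the go-redsync/redsync library; the "repo-lock:" key naming and the withRepoLock helper are assumptions for illustration, not an existing Gitea API:

```go
import (
	"github.com/go-redsync/redsync/v4"
	"github.com/go-redsync/redsync/v4/redis/goredis/v9"
	goredislib "github.com/redis/go-redis/v9"
)

// newRedsync builds one Redsync instance to be shared by the whole process.
func newRedsync(addr string) *redsync.Redsync {
	client := goredislib.NewClient(&goredislib.Options{Addr: addr})
	return redsync.New(goredis.NewPool(client))
}

// withRepoLock serializes writers of one repository across all instances;
// concurrent reads stay lock-free, matching the assumption above.
func withRepoLock(rs *redsync.Redsync, repoPath string, write func() error) error {
	mutex := rs.NewMutex("repo-lock:" + repoPath)
	if err := mutex.Lock(); err != nil {
		return err // could not acquire the distributed lock
	}
	defer mutex.Unlock() // best effort; the lock's TTL still expires it
	return write()
}
```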


@anbraten commented on GitHub (Oct 7, 2024):

In #28958 I've started a distributed implementation for the internal notifier. With it, events such as "issue was deleted" would be broadcast across all nodes.

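For illustration only (not the actual #28958 implementation), such a broadcast could ride on Redis pub/sub, which clustered Gitea deployments typically already have; the channel name and raw []byte payload are made-up placeholders:

```go
import (
	"context"

	"github.com/redis/go-redis/v9"
)

// publishEvent fans an internal notifier event out to every node.
func publishEvent(ctx context.Context, rdb *redis.Client, payload []byte) error {
	return rdb.Publish(ctx, "gitea.notify", payload).Err()
}

// subscribeEvents feeds events from other nodes into the local notifier.
func subscribeEvents(ctx context.Context, rdb *redis.Client, handle func([]byte)) {
	sub := rdb.Subscribe(ctx, "gitea.notify")
	defer sub.Close()
	for msg := range sub.Channel() {
		handle([]byte(msg.Payload))
	}
}
```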
Reference: github-starred/gitea#6426