Gitea/swag (nginx/fail2ban) not allowing pulls sometimes when using docker compose #12192

Closed
opened 2025-11-02 10:01:36 -06:00 by GiteaMirror · 7 comments
Owner

Originally created by @jessielw on GitHub (Dec 12, 2023).

Description

Since v1.21 I can no longer pull from my package repository. The pull just hangs, and the logs show the information below. I have no issues pulling from a normal repository or pushing to either of them.

I changed nothing in my configuration other than updating gitea.

Gitea Version

Current nightly build

Can you reproduce the bug on the Gitea demo site?

No

Log Gist

https://gist.github.com/jlw4049/80563ff5c54f578001cb32a0a6568421

Screenshots

No response

Git Version

nightly

Operating System

Docker/UnRaid

How are you running Gitea?

In docker on UnRaid

Database

MySQL/MariaDB

GiteaMirror added the topic/packages and issue/needs-feedback labels 2025-11-02 10:01:36 -06:00
Author
Owner

@KN4CK3R commented on GitHub (Dec 13, 2023):

The client just hangs? There are 6 successful requests to /v2 and /v2/token. I don't see how Gitea would be responsible at the moment.

Author
Owner

@jessielw commented on GitHub (Dec 13, 2023):

That's what has me confused. I haven't changed anything at all.
I just cleared my docker login.json, re-logged into gitea, and was able to do 1 successful pull. When I tried again, I got errors on the client.

Error response from daemon: Get "https://jessielw.com/v2/": dial tcp IP:443: connect: connection refused
root@bhds01-ubuntu-4gb:~/services/example# sudo docker compose -f docker-compose-prod.yml up
[+] Running 6/6
 ✘ celery_irc Error                                                                     0.1s
 ✘ create_databases Error                                                               0.1s
 ✘ start_log_rotator Error                                                              0.1s
 ✘ celery_worker Error                                                                  0.1s
 ✘ celery_beat Error                                                                    0.1s
Error response from daemon: Get "https://jessielw.com/v2/": dial tcp IP:443: connect: connection refused
root@bhds01-ubuntu-4gb:~/services/example# sudo micro docker-compose-prod.yml
root@bhds01-ubuntu-4gb:~/services/example# docker pull jessielw.com/jlw_4049/example:latest
Error response from daemon: Get "https://jessielw.com/v2/": dial tcp IP:443: connect: connection refused
root@bhds01-ubuntu-4gb:~/services/example# sudo docker pull jessielw.com/jlw_4049/example:latest
Error response from daemon: Get "https://jessielw.com/v2/": dial tcp IP:443: connect: connection refused

On the host, I sometimes (not every time) see this in the gitea logs when trying to pull:

2023/12/13 14:24:50 ...eb/routing/logger.go:102:func1() [I] router: completed GET /v2/ for IP:60726, 401 Unauthorized in 0.2ms @ container/container.go:118(container.ReqContainerAccess)
2023/12/13 14:24:50 ...eb/routing/logger.go:102:func1() [I] router: completed GET /v2/ for IP:60710, 401 Unauthorized in 2.2ms @ container/container.go:118(container.ReqContainerAccess)
2023/12/13 14:24:50 ...eb/routing/logger.go:102:func1() [I] router: completed GET /v2/ for IP:60734, 401 Unauthorized in 0.1ms @ container/container.go:118(container.ReqContainerAccess)
2023/12/13 14:24:50 ...eb/routing/logger.go:102:func1() [I] router: completed GET /v2/ for IP:60746, 401 Unauthorized in 0.1ms @ container/container.go:118(container.ReqContainerAccess)
2023/12/13 14:24:50 ...eb/routing/logger.go:102:func1() [I] router: completed GET /v2/ for IP:60762, 401 Unauthorized in 0.1ms @ container/container.go:118(container.ReqContainerAccess)
2023/12/13 14:24:50 ...eb/routing/logger.go:102:func1() [I] router: completed GET /v2/ for IP:60770, 401 Unauthorized in 0.1ms @ container/container.go:118(container.ReqContainerAccess)

I've tested this against 2 clients, but my config didn't change. It's been the same throughout.

I'm hosting gitea behind nginx on a server, using a subfolder setup like the one below. Perhaps that's the cause, but I'm not sure why it would suddenly break after working for about 4 months since the initial setup.

## Version 2023/02/05
# make sure that your gitea container is named gitea
# make sure that gitea is set to work with the base url /gitea/
# The following parameters in /data/gitea/conf/app.ini should be edited to match your setup
# [server]
# SSH_DOMAIN       = example.com:2222
# ROOT_URL         = https://example.com/gitea/
# DOMAIN           = example.com

location /gitea {
    return 301 $scheme://$host/gitea/;
}

location ^~ /gitea/ {
    client_max_body_size 10G;
    include /config/nginx/proxy.conf;
    include /config/nginx/resolver.conf;
    set $upstream_app gitea;
    set $upstream_port 3000;
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    rewrite /gitea(.*) $1 break;
}

# This forwards docker traffic to gitea
location /v2/ {
    client_max_body_size 10G;
    proxy_pass http://gitea:3000/v2/;
}
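As a quick illustration (not part of the original report), the `rewrite /gitea(.*) $1 break;` line strips the subfolder prefix before proxying, so Gitea receives root-relative paths. The sample path below is made up for demonstration:

```python
import re

# Illustrative only: emulate nginx's `rewrite /gitea(.*) $1 break;`.
# The example request path is hypothetical, not taken from the issue.
def strip_subfolder(path: str) -> str:
    return re.sub(r"^/gitea(.*)$", r"\1", path)

print(strip_subfolder("/gitea/jlw_4049/example.git/info/refs"))
# → /jlw_4049/example.git/info/refs

# A path like /v2/ does not match the prefix and passes through unchanged,
# which is why the separate `location /v2/` block exists: Docker clients
# always request /v2/... at the domain root, bypassing the /gitea/ prefix.
```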
Author
Owner

@KN4CK3R commented on GitHub (Dec 13, 2023):

> On the host, I sometimes (not every time) see this in the gitea logs when trying to pull

That's OK because the /v2 request is just an "is there a container registry?" test, which expects a 401 response.

Do you see follow up requests on nginx?

Author
Owner

@jessielw commented on GitHub (Dec 13, 2023):

So, to reproduce the bug: a plain docker pull site/jlw_4049/package:latest works 100% of the time, over and over. However, when I run it via a docker compose up command, it hangs and then fails with the errors I showed above.

Once this happens I can no longer pull with docker pull for quite some time.

I then get errors like this on the client: Error response from daemon: Get "https://website.com/v2/": dial tcp IP:443: connect: connection refused

Eventually it allows the connection again and docker pull works, but docker compose never does.

On the nginx side, when it fails, nothing appears in the nginx access.log. It seems the request never reaches it.

When it works, it shows up in both the nginx logs and the gitea logs. When it fails, nothing is logged and the client just reports connection refused.

I appreciate the responses so far!

Author
Owner

@jessielw commented on GitHub (Dec 14, 2023):

I wanted to say I figured out why this issue was happening. I'm not really sure why it started happening all of a sudden when it had been working just fine for about 4 months.

I was looking through my swag logs, trying to trace where the remote IP was failing. I checked my firewall and the request was passing through it, but it was being denied somewhere between the firewall and gitea after the initial request.

When it was failing, I found these logs produced by the built-in fail2ban inside swag (my reverse proxy container):

2023-12-14 12:45:36,957 fail2ban.filter         [619]: INFO    [nginx-unauthorized] Found remoteIP - 2023-12-14 12:45:36
2023-12-14 12:45:36,957 fail2ban.filter         [619]: INFO    [nginx-unauthorized] Found remoteIP - 2023-12-14 12:45:36
2023-12-14 12:45:36,958 fail2ban.filter         [619]: INFO    [nginx-unauthorized] Found remoteIP - 2023-12-14 12:45:36
2023-12-14 12:45:36,958 fail2ban.filter         [619]: INFO    [nginx-unauthorized] Found remoteIP - 2023-12-14 12:45:36
2023-12-14 12:45:36,958 fail2ban.filter         [619]: INFO    [nginx-unauthorized] Found remoteIP - 2023-12-14 12:45:36
2023-12-14 12:45:36,958 fail2ban.filter         [619]: INFO    [nginx-unauthorized] Found remoteIP - 2023-12-14 12:45:36

This sent me down the path of figuring out why it was sometimes blocking me and locking me out for quite some time. I found this thread (https://discourse.linuxserver.io/t/swag-fail2ban-unauthorized-access/7893/3), which let me figure out what was going on.

So back to the error I was having. With docker compose I was essentially pulling the same image from the host 6 times (I only want to pull it once and reuse it across 6 services in the compose file), which causes 6 401s:

2023/12/13 14:24:50 ...eb/routing/logger.go:102:func1() [I] router: completed GET /v2/ for IP:60726, 401 Unauthorized in 0.2ms @ container/container.go:118(container.ReqContainerAccess)
2023/12/13 14:24:50 ...eb/routing/logger.go:102:func1() [I] router: completed GET /v2/ for IP:60710, 401 Unauthorized in 2.2ms @ container/container.go:118(container.ReqContainerAccess)
2023/12/13 14:24:50 ...eb/routing/logger.go:102:func1() [I] router: completed GET /v2/ for IP:60734, 401 Unauthorized in 0.1ms @ container/container.go:118(container.ReqContainerAccess)
2023/12/13 14:24:50 ...eb/routing/logger.go:102:func1() [I] router: completed GET /v2/ for IP:60746, 401 Unauthorized in 0.1ms @ container/container.go:118(container.ReqContainerAccess)
2023/12/13 14:24:50 ...eb/routing/logger.go:102:func1() [I] router: completed GET /v2/ for IP:60762, 401 Unauthorized in 0.1ms @ container/container.go:118(container.ReqContainerAccess)
2023/12/13 14:24:50 ...eb/routing/logger.go:102:func1() [I] router: completed GET /v2/ for IP:60770, 401 Unauthorized in 0.1ms @ container/container.go:118(container.ReqContainerAccess)
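The compose file that triggers this is shaped roughly like the following sketch. The service names are taken from the earlier docker compose output in this thread; the image reference matches the pull commands shown above, and the remaining services are elided:

```yaml
# Illustrative shape of the compose file, not the actual one from the issue.
# Each service references the same image, so `docker compose up` probes the
# registry (GET /v2/) once per service -- six unauthenticated 401s at once.
services:
  celery_worker:
    image: jessielw.com/jlw_4049/example:latest
  celery_beat:
    image: jessielw.com/jlw_4049/example:latest
  celery_irc:
    image: jessielw.com/jlw_4049/example:latest
  # ...remaining services use the same image
```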

fail2ban (at least in swag) is configured by default to allow only 5 failures. The workaround was to modify the fail2ban config; I bumped this up to 12 for now and it had no issues pulling the compose down.
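The workaround corresponds to an override along these lines. The file path and jail name are assumptions based on swag's layout and the linked thread, so verify them against your own fail2ban configuration:

```ini
; /config/fail2ban/jail.local (path and jail name assumed for swag)
[nginx-unauthorized]
enabled  = true
maxretry = 12
```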

Is this advised?
Is there a reason why there are 6 "tests" that need to be sent expecting a 401 for the same image?
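The failure mode above can be sketched as follows: a fail2ban-style filter counts 401 responses per client IP and bans the IP once the count exceeds maxretry. The regex, log lines, and IP are illustrative (the real logs redact the IP, and swag's actual nginx-unauthorized filter matches nginx logs, not Gitea's router log):

```python
import re
from collections import Counter

# Illustrative log lines mimicking the Gitea router output quoted above;
# 203.0.113.5 is a documentation placeholder for the redacted client IP.
LOG = """\
router: completed GET /v2/ for 203.0.113.5:60726, 401 Unauthorized in 0.2ms
router: completed GET /v2/ for 203.0.113.5:60710, 401 Unauthorized in 2.2ms
router: completed GET /v2/ for 203.0.113.5:60734, 401 Unauthorized in 0.1ms
router: completed GET /v2/ for 203.0.113.5:60746, 401 Unauthorized in 0.1ms
router: completed GET /v2/ for 203.0.113.5:60762, 401 Unauthorized in 0.1ms
router: completed GET /v2/ for 203.0.113.5:60770, 401 Unauthorized in 0.1ms
"""
MAXRETRY = 5  # swag's default per the linked thread

# Count 401s per source IP, then "ban" any IP over the threshold.
failregex = re.compile(r"GET /v2/ for (?P<ip>[0-9.]+):\d+, 401 Unauthorized")
hits = Counter(m.group("ip") for m in failregex.finditer(LOG))
banned = [ip for ip, n in hits.items() if n > MAXRETRY]
print(banned)  # → ['203.0.113.5']: six parallel probes from one compose up
```

This is why a lone docker pull (one or two probes) always worked while docker compose up with six services tripped the ban every time.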

Author
Owner

@KN4CK3R commented on GitHub (Dec 15, 2023):

Great that you found the reason. The 6 requests come from the Docker client; Gitea just answers them.

Author
Owner

@jessielw commented on GitHub (Dec 15, 2023):

Thanks, I'll adjust the title in case someone else comes across the same issue, and close this. Thanks for the responses!


Reference: github-starred/gitea#12192