EOF error when pushing Docker images to Gitea 1.24.1 registry (works fine in 1.24.0) #14629

Closed
opened 2025-11-02 11:18:09 -06:00 by GiteaMirror · 30 comments
Owner

Originally created by @thethink3r on GitHub (Jun 20, 2025).

Description

EOF error when pushing Docker images to Gitea 1.24.1 registry (works fine in 1.24.0)

Description

When pushing Docker images to the Gitea registry on version 1.24.1, the push fails with an unexpected EOF error. The same workflow works without issues on version 1.24.0.

Steps to reproduce

  1. Build Docker image with docker build
  2. Log in to the Gitea registry with docker login
  3. Push the image using docker push
  4. Push fails with ERROR: EOF

Expected behavior

Docker image should push successfully, as it does in Gitea 1.24.0.

Actual behavior

Push is interrupted by an unexpected EOF, resulting in incomplete upload.

Environment

  • Gitea version 1.24.1 (failing)
  • Gitea version 1.24.0 (working)
  • Docker version 28.2.2
  • Local network and storage, no MTU or network issues

Please investigate and fix this regression as it breaks CI/CD workflows relying on Docker image pushes.

Gitea Version

1.24.1

Can you reproduce the bug on the Gitea demo site?

No

Log Gist

No response

Screenshots

Version 1.24.1

(screenshot)

Version 1.24.0

(screenshot)

Git Version

No response

Operating System

Docker Linux x86

How are you running Gitea?

Docker Compose

Database

PostgreSQL

GiteaMirror added the type/bug label 2025-11-02 11:18:09 -06:00

@navanshu commented on GitHub (Jun 20, 2025):

Same issue here; even pushing busybox doesn't work. I tried modifying Traefik.

@luke-else commented on GitHub (Jun 20, 2025):

+1, I have also encountered the same issue. Can't seem to track down the reason why; going to do a bit more investigation now.

@wxiaoguang commented on GitHub (Jun 20, 2025):

The only container-related change is "Fix container range bug (#34725) (#34732)".

In theory it should fix some incorrect behaviors and resolve some problems reported by users.

@hiifong commented on GitHub (Jun 20, 2025):

Please paste your nginx configurations

@thethink3r commented on GitHub (Jun 20, 2025):

Tested NGINX config (works for 1.24.0)

proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;

Tried for v1.24.1 (doesn't work)

proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
client_max_body_size 0; # also tried client_max_body_size 512m
proxy_read_timeout 600;
proxy_connect_timeout 600;
proxy_send_timeout 600;
send_timeout 600;

Do these snippets cover what's needed, or do you require the full NGINX server configuration block? (I use Nginx Proxy Manager.)

@Bevolus commented on GitHub (Jun 20, 2025):

I have the same issue and have already tried several changes to Traefik. For testing, I also set up a separate container registry with the same Traefik settings, and it works fine.

@luke-else commented on GitHub (Jun 20, 2025):

Traefik Config

Traefik

version: "3.8"
services:
  traefik:
    image: "traefik:latest"
    container_name: "traefik"
    command:
      - "--api.dashboard=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--entrypoints.kafka.address=:9093"
      - "--entrypoints.mongo.address=:27017"
      - "--certificatesresolvers.myresolver.acme.tlschallenge=true"
      - "--certificatesresolvers.myresolver.acme.email=contact@luke-else.co.uk"
      - "--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json"
    ports:
      - "80:80"
      - "443:443"
      - "27017:27017"
    volumes:
      - "./letsencrypt:/letsencrypt"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    networks:
      - proxy
    labels:
      - "traefik.enable=true"

      - "traefik.http.middlewares.redirect-web-secure.redirectscheme.scheme=https"
      - "traefik.http.routers.traefik-insecure.middlewares=redirect-web-secure"
      - "traefik.http.routers.traefik-insecure.rule=Host(`traefik.luke-else.co.uk`)"
      - "traefik.http.routers.traefik-insecure.entrypoints=web"

      - "traefik.http.routers.traefik.rule=Host(`traefik.luke-else.co.uk`)"
      - "traefik.http.routers.traefik.entrypoints=websecure"
      - "traefik.http.routers.traefik.service=api@internal"
      - "traefik.http.routers.traefik.tls.certresolver=myresolver" 
      - "traefik.http.routers.traefik.middlewares=traefik-auth"

    restart: unless-stopped

networks:
  proxy:
    name: proxy

And then exposing Gitea through that proxy with the config below:

version: '3.8'

services:
  #gitea (222)
  gitea:
    image: gitea/gitea:latest
    container_name: gitea
    volumes:
      - ./gitea:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    networks:
      - proxy
    ports:
      - "222:22"
    environment:
      - APP_NAME="gitea"
      - USER_UID=1000
      - USER_GID=1000
      - USER=git
      - RUN_MODE=prod
      - DOMAIN=git.luke-else.co.uk
      - SSH_DOMAIN=git.luke-else.co.uk
      - HTTP_PORT=3000
      - ROOT_URL=https://git.luke-else.co.uk
      - SSH_PORT=222
      - SSH_LISTEN_PORT=22
      - DB_TYPE=sqlite3
      - GITEA_service_DISABLE_REGISTRATION=true
      - GITEA_server_LANDING_PAGE=/luke-else
    labels:
      ## Expose Gitea Through Traefik ##
      - "traefik.enable=true" # <== Enable traefik to proxy this container

      - "traefik.http.middlewares.redirect-web-secure.redirectscheme.scheme=https"
      - "traefik.http.routers.gitea-insecure.middlewares=redirect-web-secure"
      - "traefik.http.routers.gitea-insecure.rule=Host(`git.luke-else.co.uk`)"
      - "traefik.http.routers.gitea-insecure.entrypoints=web"

      - "traefik.http.services.gitea.loadbalancer.server.port=3000"
      - "traefik.http.routers.gitea.rule=Host(`git.luke-else.co.uk`)"
      - "traefik.http.routers.gitea.entrypoints=websecure"
      - "traefik.http.routers.gitea.tls.certresolver=myresolver"
      - "traefik.http.routers.gitea.middlewares=cors-gitea"
    restart: unless-stopped

networks:
  proxy:
    external: true

@wxiaoguang commented on GitHub (Jun 20, 2025):

Same issue here; even pushing busybox doesn't work. I tried modifying Traefik.

IMO it's not related to the reverse proxy. Regarding "even pushing busybox doesn't work": do you have reproducible steps? I need to reproduce it on my side.

@thethink3r commented on GitHub (Jun 20, 2025):

Steps to Reproduce

docker login <your-gitea-domain>/<user>/<repo>
docker pull busybox:latest
docker tag busybox:latest <your-gitea-domain>/<user>/<repo>/busybox:test
docker push <your-gitea-domain>/<user>/<repo>/busybox:test

@wxiaoguang commented on GitHub (Jun 20, 2025):

Really strange ....

Image

@luke-else commented on GitHub (Jun 20, 2025):

I found it happens particularly on layers which do not already exist.

Really strange ....

Image

So in my case, when part of the image being built has changed, pushing it fails.

@thethink3r commented on GitHub (Jun 20, 2025):

Image

It does not matter if the image already exists on the server.

@wxiaoguang commented on GitHub (Jun 20, 2025):

Sorry for the inconvenience, it's still unclear to me how to reproduce or test .....

Are you able to build your own instance? If yes, could you try to revert some part of "Fix container range bug (https://github.com/go-gitea/gitea/pull/34725) (https://github.com/go-gitea/gitea/pull/34732)" to see which change caused the EOF?

Or, could you try the main nightly (1.25), https://hub.docker.com/r/gitea/gitea/tags?name=nightly ? It contains more container-related fixes.

@wxiaoguang commented on GitHub (Jun 20, 2025):

Feedback from: https://github.com/go-gitea/gitea/issues/34724#issuecomment-2991850421

I believe the issue had to do with the reverse proxies I had in front of my Gitea instance, including Cloudflare for public web access. I have since changed my setup to push Docker images internally without passing through the reverse proxies, which by the way I believe is a more correct approach regardless of the EOF issue.
Since removing reverse proxies in the middle solved the issue for me, I did not spend much time trying to pinpoint the issue.

It seems related to the reverse proxy... maybe the changed header causes some unclear problems. (Still unsure.)

@thethink3r commented on GitHub (Jun 20, 2025):

The nightly build has the same issue.

@wxiaoguang commented on GitHub (Jun 20, 2025):

Are you able to collect logs from the different Gitea versions? One for the working 1.24.0 busybox case, and one for the failing 1.24.1 case. Maybe the logs can tell us something.

@hiifong commented on GitHub (Jun 20, 2025):

Image

I'm having the same problem.

@bytedream commented on GitHub (Jun 20, 2025):

I'm encountering this issue as well with docker. When using podman, it works without any error.

After removing the following if statement and always using respHeaders.Range = fmt.Sprintf("0-%d", uploader.Size()-1) instead, docker also works.
4f32d32812/routers/api/packages/container/container.go (L389-L391)

@wxiaoguang commented on GitHub (Jun 20, 2025):

Thank you very much. Let's revert it.

It seems to be docker client related ..... maybe some version doesn't follow the standard.

@hiifong commented on GitHub (Jun 20, 2025):

Image

~~I suspect it has something to do with here.~~

@hiifong commented on GitHub (Jun 20, 2025):

Image

This screenshot shows a good request. There seems to be a missing request to close the session (PUT); see pushing-a-blob-in-chunks in the OCI distribution spec.

@wxiaoguang commented on GitHub (Jun 20, 2025):

I proposed a fix: Fix container range bug #34795

  1. Only send "Range" to the client when size > 0 (otherwise a malformed "0--1" would be sent)
  2. Add missing "Location" header in GET request

@hiifong commented on GitHub (Jun 20, 2025):

I'm encountering this issue as well with docker. When using podman, it works without any error.

After removing the following if statement and always using respHeaders.Range = fmt.Sprintf("0-%d", uploader.Size()-1) instead, docker also works.

gitea/routers/api/packages/container/container.go

Lines 389 to 391 in 4f32d32

if contentRange != "" {
	respHeaders.Range = fmt.Sprintf("0-%d", uploader.Size()-1)
}

According to the specification this request must return Range: 0-<end-of-range>.

@wxiaoguang commented on GitHub (Jun 20, 2025):

Sorry for bothering again.

Although I think "Fix container range bug #34795" will fix the bug, I am still curious about some details (why the error could happen). My guess is that the problem is possibly related to docker's version. I have tried docker (desktop) client 27.x/28.x, and they all succeed in pushing.

So if you have time, could you take a look at the docker version? And if the client is old, could you try upgrading to a newer/latest version? Thank you all.

@thethink3r commented on GitHub (Jun 20, 2025):

28.2.2 (30 May 2025) is the latest stable version

@luke-else commented on GitHub (Jun 20, 2025):

28.1.1 for me, very peculiar

@wxiaoguang commented on GitHub (Jun 21, 2025):

The 1.24.2 binary & container image are ready. Could you give it a try?

@thethink3r commented on GitHub (Jun 21, 2025):

Works

@dakennguyen commented on GitHub (Jun 30, 2025):

I'm still having the issue. I can push if I'm on the node running Gitea and use localhost:3000 as the registry. But when I'm on another machine and use my own domain name (configured with a cloudflared tunnel), I still get the EOF error.

The weird thing is that last week things were working fine for me on 1.24.0-rootless; then suddenly there was a bad-range error. Now I've upgraded to 1.24.2-rootless and I still get EOF.

Does anyone have an idea?

Here is my docker compose file

services:
  server:
    image: docker.gitea.com/gitea:1.24.2-rootless
    restart: always
    volumes:
      - ./data:/var/lib/gitea
      - ./config:/etc/gitea
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3000:3000"
      - "2222:2222"

@wxiaoguang commented on GitHub (Jun 30, 2025):

But when i'm on another machine and using my own domain name (configured with cloudflared tunnel), I still got the EOF error.

That is a different problem. Please check your reverse proxy configuration and the ROOT_URL in Gitea's app.ini.


Reference: github-starred/gitea#14629