Registry: Error response from daemon: missing signature key after upgrade to 1.21.0 #12046

Closed
opened 2025-11-02 09:55:59 -06:00 by GiteaMirror · 24 comments
Owner

Originally created by @RogerSik on GitHub (Nov 19, 2023).

Description

Since the upgrade to 1.21.0 the docker image build succeeds, but the docker pull fails with

docker pull gitea.sikorski.cloud/rogersik/ansible:development
Error response from daemon: missing signature key

I suspect this is because of 1.21.0, because before the upgrade I didn't have this problem. I can't

  • Gitea 1.21.0
  • using Act Runner on Kubernetes (with root rights)
  • Minio as S3 storage

Test build locally with the docker client:

$ docker build . -t gitea.sikorski.cloud/rogersik/ansible:development
failed to fetch metadata: fork/exec /home/rsikorski/.docker/cli-plugins/docker-buildx: no such file or directory

DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
            Install the buildx component to build images with BuildKit:
            https://docs.docker.com/go/buildx/

Sending build context to Docker daemon   10.1MB
Step 1/7 : FROM python:3.11-alpine3.18
 ---> 270f1e4a1f16

## shortened ##

Successfully built fa533b1a9606
Successfully tagged gitea.sikorski.cloud/rogersik/ansible:development
[rsikorski@ALIENWARE Ansible (feature/gitea-workflow-update)]$ docker push gitea.sikorski.cloud/rogersik/ansible:development
The push refers to repository [gitea.sikorski.cloud/rogersik/ansible]
fc95f885118b: Pushed 
23e5afdd0f5b: Pushed 
83a9d089d007: Pushed 
086ab54fdc47: Pushed 
d85eefc84d77: Pushed 
880f9dc6c21c: Pushed 
47818e695d36: Pushed 
a4aa75b591c8: Pushed 
e9e9555ceaa8: Pushed 
6f25d7d19389: Pushed 
cc2447e1835a: Layer already exists 
development: digest: sha256:5648a9fde44c7a13075e23e77c7d0ca56db8d43ad727857d8f5f08a4abb867d4 size: 2621
[rsikorski@ALIENWARE Ansible (feature/gitea-workflow-update)]$ docker push gitea.sikorski.cloud/rogersik/ansible:development
The push refers to repository [gitea.sikorski.cloud/rogersik/ansible]
fc95f885118b: Layer already exists 
23e5afdd0f5b: Layer already exists 
83a9d089d007: Layer already exists 
086ab54fdc47: Layer already exists 
d85eefc84d77: Layer already exists 
880f9dc6c21c: Layer already exists 
47818e695d36: Layer already exists 
a4aa75b591c8: Layer already exists 
e9e9555ceaa8: Layer already exists 
6f25d7d19389: Layer already exists 
cc2447e1835a: Layer already exists 
development: digest: sha256:5648a9fde44c7a13075e23e77c7d0ca56db8d43ad727857d8f5f08a4abb867d4 size: 2621
[rsikorski@ALIENWARE Ansible (feature/gitea-workflow-update)]$ docker push gitea.sikorski.cloud/rogersik/ansible:development
The push refers to repository [gitea.sikorski.cloud/rogersik/ansible]
fc95f885118b: Layer already exists 
23e5afdd0f5b: Layer already exists 
83a9d089d007: Layer already exists 
086ab54fdc47: Layer already exists 
d85eefc84d77: Layer already exists 
880f9dc6c21c: Layer already exists 
47818e695d36: Layer already exists 
a4aa75b591c8: Layer already exists 
e9e9555ceaa8: Layer already exists 
6f25d7d19389: Layer already exists 
cc2447e1835a: Layer already exists 
development: digest: sha256:5648a9fde44c7a13075e23e77c7d0ca56db8d43ad727857d8f5f08a4abb867d4 size: 2621

[rsikorski@ALIENWARE Ansible (feature/gitea-workflow-update)]$ docker pull gitea.sikorski.cloud/rogersik/ansible:development
Error response from daemon: missing signature key

I have now built the image in three different ways:

  • Gitea Act Runner: Kaniko
  • Gitea Act Runner: Docker root
  • locally on Ubuntu

I deleted the :development image and rebuilt it; the same error happens. When using a fresh, unused tag (e.g. test1), the same error occurs.

When pulling a ready-made image like alpine:latest and re-pushing it, the same thing happens:

docker pull alpine:latest
docker tag alpine:latest gitea.sikorski.cloud/rogersik/ansible:development
docker push gitea.sikorski.cloud/rogersik/ansible:development
docker pull gitea.sikorski.cloud/rogersik/ansible:development
Error response from daemon: missing signature key

Gitea Version

1.21.0

Can you reproduce the bug on the Gitea demo site?

No

Log Gist

No response

Screenshots

No response

Git Version

1.21.0

Operating System

Ubuntu 22.04 / K3s

How are you running Gitea?

Gitea with the official docker container running on K3S.

Database

PostgreSQL

GiteaMirror added the topic/packages, type/bug, issue/workaround labels 2025-11-02 09:55:59 -06:00
Author
Owner

@KN4CK3R commented on GitHub (Nov 20, 2023):

What is a signature key in the docker context? There wasn't a significant change to the package code for 1.21. Could you show the router log for the failing docker command?

Author
Owner

@RogerSik commented on GitHub (Nov 21, 2023):

I'm also curious, especially because the push works from Gitea CI and the local docker client, but the pull does not. On the demo site it works fine.

Could you show the router log for the failing docker command?

What is the router log? How do I get it?

I created a test pipeline which can also be visited: https://gitea.sikorski.cloud/RogerSik/registry-test/actions/runs/1/jobs/0

Could it be the client somehow? Because now I got the same error, but I wasn't fast enough to assign the image to this repository, so the repository was/is empty and the pull still returned that error message.

Now checking my reverse proxy, Traefik. I get the following messages when using the pull command:

2023/11/21 05:41:47 reverseproxy.go:666: httputil: ReverseProxy read error during body copy: unexpected EOF
2023/11/21 05:44:32 reverseproxy.go:666: httputil: ReverseProxy read error during body copy: unexpected EOF
2023/11/21 05:46:48 reverseproxy.go:666: httputil: ReverseProxy read error during body copy: unexpected EOF

Will continue troubleshooting in this direction.
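For reference, the router log asked about above is Gitea's per-request log; it produces the "router: completed ..." lines quoted later in this thread. A minimal sketch of ensuring it is enabled in app.ini, assuming the 1.21+ logger key names from the logging docs:

[log]
LEVEL = info
; the router logger is on by default; "," keeps the default output mode
logger.router.MODE = ,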

Author
Owner

@EternalDeiwos commented on GitHub (Nov 24, 2023):

@KN4CK3R I am experiencing the same problem with a very similar environment setup to Roger's; please see my logs below:

Logs
2023/11/24 11:15:23 ...eb/routing/logger.go:102:func1() [I] router: completed POST /login/oauth/access_token for xxx.yyy.68.38:48514, 200 OK in 85.6ms @ auth/oauth.go:618(auth.AccessTokenOAuth)
2023/11/24 11:15:23 ...eb/routing/logger.go:102:func1() [I] router: completed POST /api/v1/repos/example-org/example-repo/statuses/7de13c723b94eededa4d5b2c9f7e1e729dd51a11 for xxx.yyy.68.38:57028, 201 Created in 19.3ms @ repo/status.go:20(repo.NewCommitStatus)
2023/11/24 11:15:23 ...eb/routing/logger.go:102:func1() [I] router: completed GET /example-org/example-repo.git/info/refs?service=git-upload-pack for xxx.zzz.aaa.bbb:33872, 401 Unauthorized in 1.3ms @ web/githttp.go:16(web.requireSignIn)
2023/11/24 11:15:23 ...eb/routing/logger.go:102:func1() [I] router: completed GET /example-org/example-repo.git/info/refs?service=git-upload-pack for xxx.zzz.aaa.bbb:33872, 200 OK in 19.1ms @ repo/githttp.go:532(repo.GetInfoRefs)
2023/11/24 11:15:23 ...eb/routing/logger.go:102:func1() [I] router: completed POST /example-org/example-repo.git/git-upload-pack for xxx.zzz.aaa.bbb:33872, 200 OK in 16.4ms @ repo/githttp.go:492(repo.ServiceUploadPack)
2023/11/24 11:15:23 ...eb/routing/logger.go:102:func1() [I] router: completed POST /example-org/example-repo.git/git-upload-pack for xxx.zzz.aaa.bbb:33872, 200 OK in 27.9ms @ repo/githttp.go:492(repo.ServiceUploadPack)
2023/11/24 11:15:29 ...eb/routing/logger.go:102:func1() [I] router: completed GET /v2/ for xxx.zzz.aaa.bbb:59908, 401 Unauthorized in 0.1ms @ container/container.go:118(container.ReqContainerAccess)
2023/11/24 11:15:29 ...eb/routing/logger.go:102:func1() [I] router: completed GET /v2/token?account=example-account-name&client_id=docker&offline_token=true&service=container_registry for xxx.zzz.aaa.bbb:59912, 200 OK in 13.0ms @ container/container.go:142(container.Authenticate)
2023/11/24 11:15:29 ...eb/routing/logger.go:102:func1() [I] router: completed GET /v2/ for xxx.zzz.aaa.bbb:59924, 200 OK in 3.0ms @ container/container.go:134(container.DetermineSupport)
2023/11/24 11:15:29 ...eb/routing/logger.go:102:func1() [I] router: completed GET /v2/ for xxx.zzz.aaa.bbb:59930, 401 Unauthorized in 0.1ms @ container/container.go:118(container.ReqContainerAccess)
2023/11/24 11:15:29 ...eb/routing/logger.go:102:func1() [I] router: completed GET /v2/token?account=example-account-name&scope=repository%3Aother-org%2Fimage-name%3Apull&service=container_registry for xxx.zzz.aaa.bbb:59936, 200 OK in 14.9ms @ container/container.go:142(container.Authenticate)
2023/11/24 11:15:30 ...eb/routing/logger.go:102:func1() [I] router: completed HEAD /v2/other-org/image-name/manifests/image-tag for xxx.zzz.aaa.bbb:59950, 200 OK in 1049.1ms @ container/container.go:601(container.HeadManifest)
2023/11/24 11:15:31 ...eb/routing/logger.go:102:func1() [I] router: completed GET /v2/other-org/image-name/manifests/sha256:6e86129c475a777a5a9acc9beacc995d379ea4f8f6d938c058629cb6349ee82b for xxx.zzz.aaa.bbb:59956, 307 Temporary Redirect in 173.0ms @ container/container.go:621(container.GetManifest)
2023/11/24 11:15:33 ...eb/routing/logger.go:102:func1() [I] router: completed POST /api/v1/repos/example-org/example-repo/statuses/7de13c723b94eededa4d5b2c9f7e1e729dd51a11 for xxx.yyy.68.38:57028, 201 Created in 26.3ms @ repo/status.go:20(repo.NewCommitStatus)

In this case, I am attempting to run a CI pipeline for example-org/example-repo that pulls a base image from other-org/image-name.

Author
Owner

@RogerSik commented on GitHub (Nov 25, 2023):

I have now excluded Traefik and connected directly to Gitea. The "interesting" part is that the problem is still there.

$ docker tag alpine:latest localhost:3000/rogersik/gitea-act-runner:development-test

$ docker push localhost:3000/rogersik/gitea-act-runner:development-test
The push refers to repository [localhost:3000/rogersik/gitea-act-runner]
cc2447e1835a: Layer already exists
development-test: digest: sha256:48d9183eb12a05c99bcc0bf44a003607b8e941e1d4f41f9ad12bdcc4b5672f86 size: 528

$ docker pull  localhost:3000/rogersik/gitea-act-runner:development-test
development-test: Pulling from rogersik/gitea-act-runner
Digest: sha256:48d9183eb12a05c99bcc0bf44a003607b8e941e1d4f41f9ad12bdcc4b5672f86
Status: Image is up to date for localhost:3000/rogersik/gitea-act-runner:development-test
localhost:3000/rogersik/gitea-act-runner:development-test

$ docker image rm  localhost:3000/rogersik/gitea-act-runner:development-test
Untagged: localhost:3000/rogersik/gitea-act-runner:development-test
Untagged: localhost:3000/rogersik/gitea-act-runner@sha256:48d9183eb12a05c99bcc0bf44a003607b8e941e1d4f41f9ad12bdcc4b5672f86

$ docker pull  localhost:3000/rogersik/gitea-act-runner:development-test
development-test: Pulling from rogersik/gitea-act-runner
Digest: sha256:48d9183eb12a05c99bcc0bf44a003607b8e941e1d4f41f9ad12bdcc4b5672f86
Status: Downloaded newer image for localhost:3000/rogersik/gitea-act-runner:development-test
localhost:3000/rogersik/gitea-act-runner:development-test

$ docker system prune -af
Deleted Images:

$ docker pull  localhost:3000/rogersik/gitea-act-runner:development-test
Error response from daemon: missing signature key

So it's something Gitea-related, but tied to my configuration, because on try.gitea.io it did work.

When executing

$ docker pull localhost:3000/rogersik/gitea-act-runner:development-test

the following appears in the container log:

2023/11/25 22:44:35 ...eb/routing/logger.go:102:func1() [I] router: completed GET /v2/ for 127.0.0.1:44664, 401 Unauthorized in 0.1ms @ container/container.go:118(container.ReqContainerAccess)

But I had authenticated before with docker login (the push was also successful). @EternalDeiwos gets the same message.
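A hedged way to replay that handshake without docker, using the endpoints visible in the router log (credentials, host, and repository names are placeholders):

# request a pull token the way the docker client does (basic auth exchanged for a bearer token)
TOKEN=$(curl -s -u rogersik:<application-token> \
  "http://localhost:3000/v2/token?service=container_registry&scope=repository:rogersik/gitea-act-runner:pull" \
  | jq -r .token)

# fetch the manifest directly and inspect the status code and response headers
curl -sI -H "Authorization: Bearer $TOKEN" \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "http://localhost:3000/v2/rogersik/gitea-act-runner/manifests/development-test"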

Author
Owner

@evanreichard commented on GitHub (Nov 25, 2023):

I think this is related at least in some regard, as it started happening after upgrading to 1.21.0 as well.

docker login <server>
[...]
Error response from daemon: Get "https://<server>/v2/": unauthorized: authGroup.Verify

Previously I was using my user's credentials. After generating an application-specific token, I was able to log in with that token and all was good again.

Profile -> Settings -> Applications -> Generate Token

Not sure if it's relevant, but my account uses 2FA.


Edit: Yes, it was relevant:

https://github.com/go-gitea/gitea/issues/27819
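For anyone hitting this 2FA variant: with 2FA enabled, docker login needs such a token in place of the account password. Roughly (hostname and token variable are placeholders):

echo "$GITEA_APP_TOKEN" | docker login gitea.example.com -u <username> --password-stdin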

Author
Owner

@EternalDeiwos commented on GitHub (Nov 26, 2023):

What I find odd here is I am getting the expected 307 that redirects to my S3 storage containing the image layers… and the error message is complaining about a missing signature key. I don’t think this is authentication related but rather a server-side problem.

That said, I’ll play with the auth and see if I can at least rule it out as related.

Author
Owner

@RogerSik commented on GitHub (Nov 26, 2023):

@evanreichard I also have 2FA enabled, but I was using the application token. For testing I disabled 2FA and logged in again with the normal user password. Sadly, same error message:

Error response from daemon: missing signature key

The strange thing (from the beginning of this issue) is that uploads seem to work (no errors) but the pull fails.

Author
Owner

@RogerSik commented on GitHub (Nov 26, 2023):

I don’t think this is authentication related but rather a server-side problem.

Good clue. Because of that, I have now tried Gitea Packages with local storage instead of the current setup with Minio.

    [storage.packages]
    STORAGE_TYPE = local

Pull is now working. :D :/

[screenshot: 26_12-31-03]

Author
Owner

@EternalDeiwos commented on GitHub (Nov 27, 2023):

That said, I’ll play with the auth and see if I can at least rule it out as related.

I've tested with fresh admin-level credentials and I am pretty sure it is not auth related. Given that local storage works for Roger, I'd say that probably points to something to do with #25543.

Edit: also no change for 1.21.1.

Author
Owner

@EternalDeiwos commented on GitHub (Nov 27, 2023):

Further testing: after disabling SERVE_DIRECT for packages, I am now able to use the registry as normal again.

Author
Owner

@RogerSik commented on GitHub (Nov 27, 2023):

Further testing; after disabling SERVE_DIRECT for packages I am now able to use the registry again as normal.

Can confirm here the same. 🙌
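For reference, the workaround both reporters confirmed amounts to roughly this app.ini change (a sketch; the Minio keys follow the Gitea storage docs and all values are placeholders):

[storage.packages]
STORAGE_TYPE = minio
; keep Minio as the backend, but have Gitea proxy blob downloads
; instead of issuing 307 redirects to the S3 storage
SERVE_DIRECT = false
MINIO_ENDPOINT = minio.example.com:9000
MINIO_BUCKET = gitea-packages
MINIO_ACCESS_KEY_ID = <access-key>
MINIO_SECRET_ACCESS_KEY = <secret-key>
MINIO_USE_SSL = true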

Author
Owner

@KN4CK3R commented on GitHub (Nov 28, 2023):

Thanks for testing. Maybe a newer docker client version is stricter about the HTTP headers of the content. With SERVE_DIRECT the content response looks like this:

Docker-Header: xyx
Docker-Header2: xyx
Location: url-where-the-blob-is

But the response from url-where-the-blob-is does not contain the docker headers.
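A quick hedged way to see that difference (token, digest, and URLs are placeholders): request a blob without following the redirect, note the headers next to Location, then request the Location target and compare.

# headers Gitea sends alongside the 307
curl -sD - -o /dev/null -H "Authorization: Bearer $TOKEN" \
  "https://gitea.example.com/v2/org/image/blobs/sha256:<digest>"

# headers the storage backend sends; any Docker-* headers from above will be absent here
curl -sD - -o /dev/null "<url-where-the-blob-is>"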

Author
Owner

@KN4CK3R commented on GitHub (Nov 28, 2023):

May still be wrong because you can't add arbitrary headers to the response:
https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html

Overriding Response Header Values

There are times when you want to override certain response header values in a GET response. For example, you might override the Content-Disposition response header value in your GET request.

You can override values for a set of response headers using the following query parameters. ...

  • response-content-type
  • response-content-language
  • response-expires
  • response-cache-control
  • response-content-disposition
  • response-content-encoding
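So at most the whitelisted headers above could be carried into a presigned URL as query parameters, along these lines (illustrative only; the signing parameters are elided):

curl -sD - -o /dev/null \
  "https://<bucket>.s3.amazonaws.com/<blob-path>?X-Amz-Signature=<elided>&response-content-type=application%2Fvnd.oci.image.manifest.v1%2Bjson"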
Author
Owner

@KN4CK3R commented on GitHub (Nov 28, 2023):

Looks like the error message is from here:
https://github.com/docker/libtrust/blob/aabc10ec26b754e797f9028f4589c5b7bd90dc20/jsonsign.go#L23-L25

But that is unrelated to the serve direct setting...?

The code is only used for Docker schema1 manifests, which were removed from distribution in 08/23:
https://github.com/distribution/distribution/commit/0742b56677a04d00a97fd298663ab09f80b7583c#diff-f65d932ec1e10721d44ebd79b23a64cc70599bb6f24db45b48a9f275c9332147L146

What Docker version do you use? Does some setting enforce the usage of schema1 (which is not supported by Gitea)?
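One hedged way for reporters to check which schema is in play (host, org, and tag are placeholders): inspect the Content-Type of the manifest response. Schema1, the only format libtrust's jsonsign.go handles, would report application/vnd.docker.distribution.manifest.v1+prettyjws; schema2 reports ...manifest.v2+json.

curl -sI -H "Authorization: Bearer $TOKEN" \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "https://gitea.example.com/v2/org/image/manifests/<tag>" | grep -i '^content-type'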

Author
Owner

@EternalDeiwos commented on GitHub (Nov 28, 2023):

What Docker version do you use?

I'm running Docker CE 24.0.6 locally.

Does some setting enforces the usage of schema1 (which is not supported in Gitea)?

Not as far as I have explicitly configured or am aware of...

Author
Owner

@jessielw commented on GitHub (Dec 11, 2023):

I've changed nothing other than the update to 1.21. I can push to packages but cannot pull. I tried to log in to docker again and I'm getting Error response from daemon: ............ connection refused even though it's the same setup as before.

Tried to downgrade, but the databases are not compatible. Any workarounds until a fix is out?

Note: out of curiosity I pulled down the nightly docker build to see if the fix was in it, but alas it was not. Same issue, and now with the database upgrade I'm stuck on the nightly release, haha.

Author
Owner

@lunny commented on GitHub (Dec 12, 2023):

Maybe you need to check whether your ROOT_URL is correct.
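That is, ROOT_URL in the [server] section of app.ini should match the public URL the docker client logs in to (a sketch with a placeholder hostname):

[server]
ROOT_URL = https://gitea.example.com/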

Author
Owner

@jessielw commented on GitHub (Dec 12, 2023):

Maybe you need to check whether your ROOT_URL is correct.

I'll double-check it in the morning, but it has been working for the last several months with no configuration changes. Only the last update broke pulling from package registries for me.

Author
Owner

@lunny commented on GitHub (Dec 12, 2023):

Maybe you have a different problem than this one? If so, you can create a new issue with a more detailed description.

Author
Owner

@jessielw commented on GitHub (Dec 12, 2023):

Maybe you have a different problem as this one? If that, you can create a new issue with more description.

It's possible, actually, although it seems similar.

I'll reproduce the issue again tomorrow and post a more detailed bug report in a separate issue.

Author
Owner

@EternalDeiwos commented on GitHub (Dec 12, 2023):

@KN4CK3R I have just seen the needs feedback tag; are you looking for anything specific?

Author
Owner

@wytzevanderploeg commented on GitHub (Feb 23, 2024):

I had this problem; it turned out I had a rather old version of docker-ce running:

Sending build context to Docker daemon  5.632kB
Step 1/1 : FROM gitea/gitea:1.20
missing signature key

Docker version: Docker version 17.09.1-ce, build 19e2cf6

After upgrading to the latest version (Docker version 25.0.3, build 4debf41) I no longer had any problems pulling or building the image.

Author
Owner

@itzaname commented on GitHub (Jul 4, 2024):

I'm seeing this using Amazon S3 with SERVE_DIRECT enabled. Once SERVE_DIRECT is disabled things work as normal. Gitea 1.22.0.

Podman:

# podman version
Client:       Podman Engine
Version:      5.1.1
API Version:  5.1.1
Go Version:   go1.22.3
Git Commit:   bda6eb03dcbcf12a5b7ae004c1240e38dd056d24-dirty
Built:        Tue Jun  4 18:12:10 2024
OS/Arch:      linux/amd64
# podman pull gitea/org/test:latest                       
Trying to pull gitea/org/test:latest...
Error: initializing image from source docker://gitea/org/test:latest: unsupported schema version 2

Docker:

# docker version
Client:
 Version:           27.0.3
 API version:       1.46
 Go version:        go1.22.4
 Git commit:        7d4bcd863a
 Built:             Mon Jul  1 21:15:54 2024
 OS/Arch:           linux/amd64
 Context:           default

Server:
 Engine:
  Version:          27.0.3
  API version:      1.46 (minimum version 1.24)
  Go version:       go1.22.4
  Git commit:       662f78c0b1
  Built:            Mon Jul  1 21:15:54 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.7.18
  GitCommit:        ae71819c4f5e67bb4d5ae76a6b735f29cc25774e.m
 runc:
  Version:          1.1.13
  GitCommit:        
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
# docker pull gitea/org/test:latest
Error response from daemon: missing signature key
Author
Owner

@meruiden commented on GitHub (Jul 19, 2024):

Same here: Minio over Backblaze B2. Setting SERVE_DIRECT = false fixes it.
