[GH-ISSUE #6510] Performing GET request to registry.ollama.ai/v2/ returns 404 page not found #50609

Open
opened 2026-04-28 16:31:44 -05:00 by GiteaMirror · 1 comment

Originally created by @yeahdongcn on GitHub (Aug 26, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6510

What is the issue?

Background:

Kubernetes 1.31 introduced a new feature: [Read-Only Volumes Based on OCI Artifacts](https://kubernetes.io/blog/2024/08/16/kubernetes-1-31-image-volume-source/). I believe this feature could be very useful for deploying a dedicated model alongside Ollama in Kubernetes.

The currently supported container runtime is [CRI-O](https://github.com/cri-o/cri-o), which relies on [containers/image](https://github.com/containers/image) for all image-related operations. It issues a `GET` request to the registry's `/v2/` endpoint (e.g. https://registry.ollama.ai/v2/) to [determine](https://github.com/containers/image/blob/main/docker/docker_client.go#L903) the appropriate scheme.
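For reference, here is a minimal Go sketch of that ping (an illustration, not the containers/image code itself): a registry conforming to the Docker Registry HTTP API V2 answers `GET /v2/` with 200, or 401 when authentication is required, while registry.ollama.ai currently answers 404.

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Ping the Docker Registry HTTP API V2 root. A conforming registry
	// returns 200 (or 401 with an auth challenge); registry.ollama.ai
	// currently returns "404 page not found", which breaks the check.
	resp, err := http.Get("https://registry.ollama.ai/v2/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
	fmt.Println("Docker-Distribution-Api-Version:",
		resp.Header.Get("Docker-Distribution-Api-Version"))
}
```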

I hardcoded the scheme to `HTTPS` and used the Ollama image `registry.ollama.ai/library/tinyllama:latest` as the OCI image volume. After making some modifications to the modules consumed by CRI-O, I was able to get the pod and container running without any issues.
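To illustrate what that modification works around, here is a hypothetical, simplified sketch of the scheme detection (the real containers/image logic also honors per-registry insecure settings, TLS configuration, and auth challenges): only a 200 or 401 ping response marks an endpoint as usable, so a 404 makes both schemes fail.

```go
package main

import (
	"fmt"
	"net/http"
)

// decideScheme is a hypothetical, simplified version of the scheme
// detection performed by clients such as containers/image.
func decideScheme(host string) (string, error) {
	for _, scheme := range []string{"https", "http"} {
		resp, err := http.Get(scheme + "://" + host + "/v2/")
		if err != nil {
			continue // endpoint unreachable over this scheme
		}
		resp.Body.Close()
		// A V2-conforming registry answers the ping with 200, or 401
		// when credentials are required; a 404 falls through, so the
		// endpoint is treated as unusable under either scheme.
		if resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusUnauthorized {
			return scheme, nil
		}
	}
	return "", fmt.Errorf("no usable /v2/ endpoint found for %s", host)
}

func main() {
	scheme, err := decideScheme("registry.ollama.ai")
	fmt.Println(scheme, err)
}
```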

Please see the following logs:

```bash
❯ sudo crictl --timeout=200s --runtime-endpoint unix:///run/crio/crio.sock run ./container.json ./sandbox_config.json
INFO[0005] Pulling container image: registry.docker.com/ollama/ollama:latest
INFO[0005] Pulling image registry.ollama.ai/library/tinyllama:latest to be mounted to container path: /volume
7e437894449f6429799cc5ef236c4a4570a69e3769bf324bbf700045e383cae8
❯ sudo crictl --timeout=200s --runtime-endpoint unix:///run/crio/crio.sock ps
CONTAINER           IMAGE                                        CREATED             STATE               NAME                ATTEMPT             POD ID              POD
7e437894449f6       registry.docker.com/ollama/ollama:latest     8 seconds ago       Running             podsandbox-sleep    0                   4d1766fdf286b       unknown
❯ sudo crictl --timeout=200s --runtime-endpoint unix:///run/crio/crio.sock exec -it 7e437894449f6 bash
root@crictl_host:/# cd volume/
root@crictl_host:/volume# ls -l
total 622772
-rw-r--r-- 1 root root 637699456 Aug 26 08:32 model
-rw-r--r-- 1 root root        98 Aug 26 08:32 params
-rw-r--r-- 1 root root        31 Aug 26 08:32 system
-rw-r--r-- 1 root root        70 Aug 26 08:32 template
root@crictl_host:/volume#
```

I'm wondering if the Ollama model registry could be slightly updated to handle `GET` requests to `registry.ollama.ai/v2/` instead of returning 404. This would allow container runtimes such as CRI-O to seamlessly consume Ollama's OCI models without any issues.
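For what it's worth, the registry-side change could be as small as answering the ping. A minimal sketch (hypothetical, not Ollama's actual server code) using Go's standard library:

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// Answer the V2 "ping" so clients recognize the registry. Only the
	// exact /v2/ path gets this response here; a real registry also
	// serves manifests and blobs under /v2/.
	http.HandleFunc("/v2/", func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path != "/v2/" {
			http.NotFound(w, r)
			return
		}
		w.Header().Set("Docker-Distribution-Api-Version", "registry/2.0")
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```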

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.3.6

GiteaMirror added the bug label 2026-04-28 16:31:44 -05:00

@yeahdongcn commented on GitHub (Aug 26, 2024):

BTW, I also tried pushing the image to Harbor and Nexus, and they both handle the `%s://%s/v2/` URL correctly.
