[GH-ISSUE #1155] Support export/import models as Docker Images to integrate with existing platforms #26346

Open
opened 2026-04-22 02:34:33 -05:00 by GiteaMirror · 1 comment

Originally created by @marcellodesales on GitHub (Nov 16, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1155

Problem

  • Until the 403 access problem (#676) is solved, there's no way to pull models from the Ollama server
    • At the time I'm writing this, I don't think the Ollama registry (a Docker Registry with a different type) is available
  • I was able to work around the 403 problem, demonstrated at #1145, and built/pushed the same digests as docker image layers to a Docker Registry :)
    • And consequently, push to a privately controlled Docker Registry, pull it, and make the models available in an isolated Kubernetes environment already managed/scaled to run models

As we already have an established DevSecOps platform with automation around Docker Images and Docker Registries, I'd like to take advantage of that and reuse Ollama's models with the existing infrastructure already in place. Ollama is already a great tool for its CLI and server capabilities!

Design

  • Given that the internals of the blobs are just like docker images, why not provide an export/import tool for Docker Images?
    • I can see the models stored very similarly to Docker Images
    • mediaType is the same as docker's vnd.docker.container.image.v1+json

ollama export llama2 marcellodesales/genai/models/llama2

  • Exports the model as a docker image tagged with the given name

ollama import llama2 marcellodesales/genai/models/llama2

  • Imports the model from the given Docker Image into ollama's local models

Implementation

  • Just a sequence of commands, in case anyone wants to take on this implementation

    • I can send a PR this weekend if this feature is accepted...
  • List models as usual

$ docker run --network host  -ti -v $(pwd):$(pwd) -w $(pwd) -v $HOME/.ollama:/root/.ollama ollama/ollama list               
NAME            ID              SIZE    MODIFIED     
codellama:code  fc84f39375bc    3.8 GB  16 hours ago    
  • Inspecting the model's manifest
$ cat models/manifests/registry.ollama.ai/library/codellama/code
{"schemaVersion":2,"mediaType":"application/vnd.docker.distribution.manifest.v2+json","config":{"mediaType":"application/vnd.docker.container.image.v1+json","digest":"sha256:23fbdb4ea003a1e1c38187539cc4cc8e85c6fb80160a659e25894ca60e781a33","size":455},"layers":[{"mediaType":"application/vnd.ollama.image.model","digest":"sha256:8b2eceb7b7a11c307bc9deed38b263e05015945dc0fa2f50c0744c5d49dd293e","size":3825898144},{"mediaType":"application/vnd.ollama.image.license","digest":"sha256:8c17c2ebb0ea011be9981cc3922db8ca8fa61e828c5d3f44cb6ae342bf80460b","size":7020},{"mediaType":"application/vnd.ollama.image.license","digest":"sha256:590d74a5569b8a20eb2a8b0aa869d1d1d3faf6a7fdda1955ae827073c7f502fc","size":4790},{"mediaType":"application/vnd.ollama.image.params","digest":"sha256:d2b44be9e12117ee2652e9a6c51df28ef408bf487e770b11ee0f7bce8790f3ca","size":31}]}%                                                                               

ollama models export llama2 marcellodesales/genai/models/llama2 (Implementation)

  • Select the manifest for the chosen model and copy it to a temporary dir
mkdir -p selected-model/manifests/registry.ollama.ai/library/llama2 selected-model/blobs
cp models/manifests/registry.ollama.ai/library/llama2/latest selected-model/manifests/registry.ollama.ai/library/llama2/
# copy every blob (config + layers) referenced by the manifest
for digest in $(jq -r '.config.digest, .layers[].digest' selected-model/manifests/registry.ollama.ai/library/llama2/latest); do
  cp "models/blobs/$digest" selected-model/blobs/
done
  • Generate a docker image for the data
FROM busybox AS data

WORKDIR /marcellodesales/platforms/vionix/genai/model

COPY selected-model .
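  • Build and tag the data image (assuming the Dockerfile and the selected-model dir sit in the current directory):

$ docker build -t marcellodesales/genai/models/llama2 .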
  • Then confirm the image exists
docker images | grep marcellodesales
marcellodesales/genai/models/llama2                                                   latest                                                                           da34440295ec   26 hours ago    3.83GB
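
  • Putting the export steps together, here's a minimal sketch of what ollama models export could do under the hood (the script name, the jq dependency, and the OLLAMA_MODELS fallback are illustrative assumptions, not Ollama's actual implementation):

#!/bin/sh
# export-model.sh (hypothetical): stage one model's manifest + blobs,
# wrap them in a busybox data image, and push it to a Docker Registry.
set -e
MODEL=${1:?model name, e.g. llama2}
IMAGE=${2:?target image, e.g. marcellodesales/genai/models/llama2}
MODELS_DIR=${OLLAMA_MODELS:-$HOME/.ollama/models}
MANIFEST="$MODELS_DIR/manifests/registry.ollama.ai/library/$MODEL/latest"

STAGE=$(mktemp -d)
mkdir -p "$STAGE/selected-model/manifests/registry.ollama.ai/library/$MODEL" \
         "$STAGE/selected-model/blobs"
cp "$MANIFEST" "$STAGE/selected-model/manifests/registry.ollama.ai/library/$MODEL/"
# copy the config blob and every layer blob referenced by the manifest
for digest in $(jq -r '.config.digest, .layers[].digest' "$MANIFEST"); do
  cp "$MODELS_DIR/blobs/$digest" "$STAGE/selected-model/blobs/"
done

cat > "$STAGE/Dockerfile" <<'EOF'
FROM busybox AS data
WORKDIR /marcellodesales/platforms/vionix/genai/model
COPY selected-model .
EOF
docker build -t "$IMAGE" "$STAGE"
docker push "$IMAGE"

  • Using busybox keeps the image runnable, so at import time the blobs can be copied back out with plain cp.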

ollama models import marcellodesales/genai/models/llama2 (Implementation)

Note: the import command already has the tag information under the manifests (e.g. manifests/registry.ollama.ai/library/llama2/latest maps to llama2:latest), so there's no need for an extra parameter

  • Just proceed to generate a docker image and push
    • Parse a given manifest and select its digests, i.e. the blobs in the file system
    • For the selected ones, create a Docker Image to be run as a Data Container and push it to your registry
    • At the destination running the Ollama Server, pull the docker image and move the data back into .ollama/models
    • Verify that the new model is installed, bypassing the limitations described in the Problem section
$ docker run -ti -v /Users/mdesales/.ollama/models:/data marcellodesales/genai/models/llama2 cp -Rv /marcellodesales/platforms/vionix/genai/model/blobs /data 
'/marcellodesales/platforms/vionix/genai/model/blobs/sha256:2759286baa875dc22de5394b4a925701b1896a7e3f8e53275c36f75a877a82c9' -> '/data/blobs/sha256:2759286baa875dc22de5394b4a925701b1896a7e3f8e53275c36f75a877a82c9'
'/marcellodesales/platforms/vionix/genai/model/blobs/sha256:5407e3188df9a34504e2071e0743682d859b68b6128f5c90994d0eafae29f722' -> '/data/blobs/sha256:5407e3188df9a34504e2071e0743682d859b68b6128f5c90994d0eafae29f722'
'/marcellodesales/platforms/vionix/genai/model/blobs/sha256:7c23fb36d80141c4ab8cdbb61ee4790102ebd2bf7aeff414453177d4f2110e5d' -> '/data/blobs/sha256:7c23fb36d80141c4ab8cdbb61ee4790102ebd2bf7aeff414453177d4f2110e5d'
'/marcellodesales/platforms/vionix/genai/model/blobs/sha256:22f7f8ef5f4c791c1b03d7eb414399294764d7cc82c7e94aa81a1feb80a983a2' -> '/data/blobs/sha256:22f7f8ef5f4c791c1b03d7eb414399294764d7cc82c7e94aa81a1feb80a983a2'
'/marcellodesales/platforms/vionix/genai/model/blobs/sha256:2e0493f67d0c8c9c68a8aeacdf6a38a2151cb3c4c1d42accf296e19810527988' -> '/data/blobs/sha256:2e0493f67d0c8c9c68a8aeacdf6a38a2151cb3c4c1d42accf296e19810527988'
'/marcellodesales/platforms/vionix/genai/model/blobs/sha256:8c17c2ebb0ea011be9981cc3922db8ca8fa61e828c5d3f44cb6ae342bf80460b' -> '/data/blobs/sha256:8c17c2ebb0ea011be9981cc3922db8ca8fa61e828c5d3f44cb6ae342bf80460b'
'/marcellodesales/platforms/vionix/genai/model/blobs' -> '/data/blobs'

$ docker run -ti -v /Users/mdesales/.ollama/models:/data marcellodesales/genai/models/llama2 cp -Rv /marcellodesales/platforms/vionix/genai/model/manifests /data 
'/marcellodesales/platforms/vionix/genai/model/manifests/registry.ollama.ai/library/llama2/latest' -> '/data/manifests/registry.ollama.ai/library/llama2/latest'
'/marcellodesales/platforms/vionix/genai/model/manifests/registry.ollama.ai/library/llama2' -> '/data/manifests/registry.ollama.ai/library/llama2'
'/marcellodesales/platforms/vionix/genai/model/manifests/registry.ollama.ai/library' -> '/data/manifests/registry.ollama.ai/library'
'/marcellodesales/platforms/vionix/genai/model/manifests/registry.ollama.ai' -> '/data/manifests/registry.ollama.ai'
'/marcellodesales/platforms/vionix/genai/model/manifests' -> '/data/manifests'

Then, just verifying the final result:

$ docker run --network host  -ti -v $(pwd):$(pwd) -w $(pwd) -v $HOME/.ollama:/root/.ollama ollama/ollama list
NAME            ID              SIZE    MODIFIED      
codellama:code  fc84f39375bc    3.8 GB  16 hours ago    
llama2:latest   fe938a131f40    3.8 GB  6 seconds ago   
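
  • The whole import can be condensed into a pull plus a single copy (the same commands as the demo above, with --rm added so the helper container cleans up after itself):

$ docker pull marcellodesales/genai/models/llama2
$ docker run --rm -ti -v $HOME/.ollama/models:/data marcellodesales/genai/models/llama2 \
    sh -c 'cd /marcellodesales/platforms/vionix/genai/model && cp -Rv blobs manifests /data'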

Questions

  • Any interest in this feature? Is anyone working on something similar?
  • Are there any plans to open-source the registry server? Is it possible to deploy our own?
  • Any plans to fix the 403 problem discussed in #676?
GiteaMirror added the feature request label 2026-04-22 02:34:33 -05:00

@sodre commented on GitHub (Dec 11, 2023):

One option would be to use OCI Registry as a Service (https://oras.land)

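For context, oras can push arbitrary files as OCI artifacts, so the staged manifest and blobs could reach any OCI registry without the busybox wrapper. A hypothetical invocation reusing the media types from the manifest above (the registry host and file names are placeholders):

$ oras push registry.example.com/genai/models/llama2:latest \
    model.bin:application/vnd.ollama.image.model \
    params.json:application/vnd.ollama.image.params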

Reference: github-starred/ollama#26346