[GH-ISSUE #2161] Provide Docker images with pre-downloaded models #1235

Closed
opened 2026-04-12 11:00:22 -05:00 by GiteaMirror · 3 comments

Originally created by @eddumelendez on GitHub (Jan 23, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2161

Libraries and frameworks such as [LangChain4J](https://github.com/langchain4j/langchain4j) have been built around Ollama, and pulling models is part of the process of using it.

Currently, in order to test the library integration, there is a setup using [Testcontainers](https://testcontainers.com/) that starts Ollama's Docker image and pulls the model to provide the infrastructure needed for the integration test. See [this code snippet](https://github.com/langchain4j/langchain4j/blob/50f32ba1985826ce7dc81300400c6c5fd2c22576/langchain4j-ollama/src/test/java/dev/langchain4j/model/ollama/AbstractOllamaInfrastructure.java#L65-L75).

I think https://github.com/jmorganca/ollama/issues/2160 could help in this scenario as well, but for those who want to run specific models supported by Ollama, images with the model already included would provide a nice getting-started experience IMO. There is an effort on the project to maintain such images at https://hub.docker.com/u/langchain4j, but it would be nice for Ollama to provide images with the model baked in.
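For context, a minimal sketch of the kind of test setup described above, using the generic Testcontainers API (the image tag, model name, and class name here are illustrative assumptions, not the actual LangChain4J code):

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

public class OllamaInfrastructureSketch {
    public static void main(String[] args) throws Exception {
        try (GenericContainer<?> ollama =
                new GenericContainer<>(DockerImageName.parse("ollama/ollama:0.3.14"))
                        .withExposedPorts(11434)) {
            ollama.start();
            // Every fresh container pulls the model over the network at test
            // startup -- exactly the cost a pre-baked image would avoid.
            ollama.execInContainer("ollama", "pull", "orca-mini");
            String baseUrl = "http://" + ollama.getHost() + ":" + ollama.getMappedPort(11434);
            System.out.println("Ollama ready at " + baseUrl);
        }
    }
}
```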


@eddumelendez commented on GitHub (Feb 21, 2024):

`bakllava`, `mistral`, `orca-mini` and `phi` would be a good start for the [langchain4j](https://github.com/langchain4j/langchain4j/blob/main/langchain4j-ollama/src/test/java/dev/langchain4j/model/ollama/AbstractOllamaInfrastructure.java#L21) and spring-ai projects.


@mxyng commented on GitHub (Mar 11, 2024):

There are currently no plans to provide official Docker images with pre-downloaded models, although you can always augment the image to include this functionality.


@CWKSC commented on GitHub (Oct 31, 2024):

According to [docker - How to build in Mistral model into Ollama permanently? - Stack Overflow](https://stackoverflow.com/questions/78688712/how-to-build-in-mistral-model-into-ollama-permanently):

```containerfile
FROM docker.io/ollama/ollama:0.3.14

# https://stackoverflow.com/questions/78688712/how-to-build-in-mistral-model-into-ollama-permanently
# Install curl so the build can talk to the Ollama HTTP API.
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive \
    apt-get install --no-install-recommends --assume-yes \
        curl

# Start the server in the background, wait until it accepts connections,
# then pull the model so its weights are baked into this image layer.
RUN ollama serve & \
    curl --retry 10 --retry-connrefused --retry-delay 1 http://localhost:11434/ && \
    curl -X POST -d '{"name": "llama3.2:3b"}' http://localhost:11434/api/pull
```
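
Assuming the above is saved as `Dockerfile` in an empty directory (the tag `ollama-with-llama3` is just a placeholder), building and running would look like `docker build -t ollama-with-llama3 .` followed by `docker run -p 11434:11434 ollama-with-llama3`; the model is then available immediately, with no pull at container startup.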
