[GH-ISSUE #1555] GGUF in Docker? #26610

Closed
opened 2026-04-22 02:57:56 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @jimmyjam-50066 on GitHub (Dec 15, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1555

To support GGUF files in Docker, could we have a script inside the Docker image that takes arguments and creates the Modelfile for Ollama to use?

Example, with `solar-10.7b` as the target local model name:

```
docker exec ollama_cat pull_gguf_from_url.sh solar-10.7b https://huggingface.co/TheBloke/SOLAR-10.7B-Instruct-v1.0-GGUF
```

*magic* (really: create a new Modelfile from the first argument and download the second argument into a `gguf` directory or something), then:

```
docker exec ollama_cat ollama create solar-10.7b -f solar-10.7bModel
```

Just an idea; I'm sure there's a better way to accomplish this.
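The "magic" step could be sketched as a small POSIX shell helper. Everything here is hypothetical: the script name `pull_gguf_from_url.sh`, the `gguf` directory layout, and the generated Modelfile are assumptions for illustration, not part of the actual Ollama image (note also that the URL would need to point at a concrete `.gguf` file, not a repo page as in the example above).

```shell
#!/bin/sh
# pull_gguf_from_url.sh -- hypothetical sketch of the proposed helper.
# Usage: pull_gguf_from_url.sh <model-name> <gguf-url>
set -eu

# Write a minimal Modelfile that points at a local GGUF file.
write_modelfile() {
  gguf_path="$1"
  modelfile_path="$2"
  printf 'FROM %s\n' "$gguf_path" > "$modelfile_path"
}

main() {
  model_name="$1"
  gguf_url="$2"
  gguf_dir="${GGUF_DIR:-/root/.ollama/gguf}"   # assumed location inside the container

  mkdir -p "$gguf_dir"
  gguf_path="$gguf_dir/$model_name.gguf"

  # Download the GGUF weights (-L follows Hugging Face redirects,
  # -f fails on HTTP errors instead of saving an error page).
  curl -fL -o "$gguf_path" "$gguf_url"

  # Generate the Modelfile and register the model with ollama.
  write_modelfile "$gguf_path" "$gguf_dir/$model_name.Modelfile"
  ollama create "$model_name" -f "$gguf_dir/$model_name.Modelfile"
}

# Only run when invoked with arguments, so the functions can be sourced.
if [ "$#" -ge 2 ]; then
  main "$@"
fi
```

With something like this baked into the image, the two `docker exec` commands above collapse into one call to the helper.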

GiteaMirror added the feature request label 2026-04-22 02:57:56 -05:00

@mxyng commented on GitHub (Jan 22, 2024):

This exists in part as part of the [quantize](https://hub.docker.com/r/ollama/quantize) Docker image. Creating a Modelfile is out of scope, since the base case, i.e. just the binary model file, is trivial, while anything more is very model- and user-specific.

Reference: github-starred/ollama#26610