[GH-ISSUE #7254] Support directly running GGUF files without importing #4610

Open
opened 2026-04-12 15:31:44 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @ahizap on GitHub (Oct 18, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7254

In llama.cpp we can run models directly with `llama-cli -m your_model.gguf` without having to import them first. It would be great if we could do the same with Ollama.
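For reference, the closest current workaround is Ollama's Modelfile import: a `FROM` line can point at a local GGUF file. Note that `ollama create` still copies the file into Ollama's blob store, which is exactly the step this issue asks to skip. The model name and path below are illustrative:

```shell
# Minimal Modelfile pointing at a local GGUF file
echo 'FROM ./your_model.gguf' > Modelfile

# Register the model (this copies the GGUF into Ollama's store), then run it
ollama create mymodel -f Modelfile
ollama run mymodel
```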

GiteaMirror added the feature request label 2026-04-12 15:31:44 -05:00
Author
Owner

@JJones780 commented on GitHub (Dec 17, 2024):

If there were an option to link instead of copy during import, that would be very helpful.

An option to "install" without copying the model files would work as well.

For example, Oobabooga's text-generation-webui has a `download-model.py` script with a `--text-only` option that downloads everything except the model files. Then sneakernet for the win.
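The "link instead of copy" idea above could look something like the sketch below: hard-link the GGUF into the store when source and destination are on the same filesystem, and fall back to a copy otherwise. This is a hypothetical helper, not part of Ollama's actual import code, and the function and directory names are made up for illustration:

```python
import os
import shutil


def install_model(src: str, store_dir: str) -> str:
    """Place src into store_dir, hard-linking when possible to avoid a copy.

    Hypothetical sketch of the requested behavior; Ollama's real import
    path currently always copies the file into its blob store.
    """
    os.makedirs(store_dir, exist_ok=True)
    dst = os.path.join(store_dir, os.path.basename(src))
    try:
        # Same filesystem: instant, and uses no extra disk space.
        os.link(src, dst)
    except OSError:
        # Cross-device link (or unsupported FS): fall back to a real copy.
        shutil.copy2(src, dst)
    return dst
```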


Reference: github-starred/ollama#4610