[GH-ISSUE #5489] Running Hugging Face Models #3434

Closed
opened 2026-04-12 14:05:33 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @theainerd on GitHub (Jul 4, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5489

Hi team,

Can we load Hugging Face models directly using ollama?

GiteaMirror added the feature request label 2026-04-12 14:05:33 -05:00

@pdevine commented on GitHub (Jul 8, 2024):

@theainerd do you mean something like being able to run `ollama run https://huggingface.co/path/to/model`, or do you mean being able to import models from HF Safetensors?

For the first use case, it's really hard to do because we don't support many of the models available on HF, which results in a broken user experience. This is the case for llama.cpp right now. For the second use case, we actually do support converting many models from Safetensors, including Llama2/Llama3, Gemma, and Mistral/Mixtral based models. If you include the path in the `FROM` line of a Modelfile, you can use `ollama create` to import the model directly into Ollama.
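For reference, the second workflow sketched as a sequence of commands (a minimal sketch; the repository URL, local path, and model name `mistral-7b` are illustrative, not taken from this thread):

```shell
# Fetch the Safetensors weights from Hugging Face (illustrative repo)
git clone https://huggingface.co/mistralai/Mistral-7B-v0.1

# Modelfile: point FROM at the local Safetensors directory
cat > Modelfile <<'EOF'
FROM ./Mistral-7B-v0.1
EOF

# Convert and import the model into Ollama, then run it
ollama create mistral-7b -f Modelfile
ollama run mistral-7b
```

The key point is that `FROM` accepts a local directory of Safetensors weights, not just an existing Ollama model name; `ollama create` performs the conversion during import.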

I'm going to go ahead and close the issue, but feel free to keep commenting. If I misunderstood something we can reopen.


Reference: github-starred/ollama#3434