[GH-ISSUE #1754] How to add custom LLM models from Huggingface #1005

Closed
opened 2026-04-12 10:42:56 -05:00 by GiteaMirror · 5 comments

Originally created by @yiouyou on GitHub (Jan 1, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/1754

I have some fine-tuned models saved on Hugging Face. How do I add or convert a custom LLM into an Ollama-compatible format?

GiteaMirror added the question label 2026-04-12 10:42:56 -05:00

@mongolu commented on GitHub (Jan 2, 2024):

You need to convert it.
Assuming you have all the requirements installed and the HF model on your local drive, you can take inspiration from this: [converting hf to gguf](https://github.com/ggerganov/llama.cpp/blob/edd1ab7bc34c10a780ee7f9a4499f7689cdad36d/scripts/convert-gg.sh#L7)
After successfully converting to GGUF, you need to [import it into Ollama with a Modelfile](https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md).

I can confirm I've done it and it works.
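The two-step flow described above (convert to GGUF with llama.cpp, then import via a Modelfile) can be sketched roughly as follows. All paths and model names here are placeholders, and the converter script's name has changed across llama.cpp versions, so check your checkout before running:

```shell
# Step 1 (placeholder paths -- run from a llama.cpp checkout with its
# Python requirements installed; script name varies by version):
#
#   python convert-hf-to-gguf.py /path/to/hf-model --outfile my-model.gguf
#
# Step 2: write a minimal Modelfile pointing at the resulting GGUF file.
cat > Modelfile <<'EOF'
FROM ./my-model.gguf
EOF

# Step 3: import and run it in Ollama:
#
#   ollama create my-model -f Modelfile
#   ollama run my-model
```

The `FROM` line in the Modelfile is what tells `ollama create` to package a local GGUF file instead of pulling a base model from the registry.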


@BruceMacD commented on GitHub (Jan 2, 2024):

Thanks for opening the issue. Marking this as resolved for now as the question seems to have been answered (thanks to mongolu for that). Please feel free to ask any follow up questions though, I'll keep an eye out.


@easp commented on GitHub (Jan 2, 2024):

@mongolu There is a better reference for importing models from HF into Ollama: https://github.com/jmorganca/ollama/blob/main/docs/import.md

It includes instructions for an Ollama-provided Docker image that makes converting and quantizing a single command.
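For reference, a sketch of the one-command convert-and-quantize step that docs/import.md described at the time. This is from memory and should be verified against the linked doc; the directory and quantization level are placeholders:

```shell
# Placeholders -- adjust for your setup, and verify the image name and
# flags against docs/import.md before running.
MODEL_DIR=$(pwd)   # directory containing the downloaded HF model files
QUANT=q4_0         # target quantization level
CMD="docker run --rm -v $MODEL_DIR:/model ollama/quantize -q $QUANT /model"
echo "$CMD"        # prints the command to run once Docker is available
```

The container writes the quantized GGUF back into the mounted directory, after which it can be imported with a Modelfile as in the earlier comment.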


@mongolu commented on GitHub (Jan 2, 2024):

Sorry, I didn't find it (named "import") in ./docs.
It's a bit lengthy, but detailed, so I presume it's better.
I used a Docker image that already had (or where I installed) all the necessary dependencies for this.


@ozbillwang commented on GitHub (Jan 28, 2025):

> You need to convert it. Assuming you have all the requirements installed and the HF model on your local drive, you can take inspiration from this: [converting hf to gguf](https://github.com/ggerganov/llama.cpp/blob/edd1ab7bc34c10a780ee7f9a4499f7689cdad36d/scripts/convert-gg.sh#L7) After successfully converting to GGUF, you need to [import it into Ollama with a Modelfile](https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md)
>
> I can confirm I've done it and it works.

That document doesn't cover new model architectures. With the current design, several files in llama.cpp need to be adjusted to support a new architecture; I raised a request for this in https://github.com/ggerganov/llama.cpp/issues/11460

Reference: github-starred/ollama#1005