[GH-ISSUE #11028] Is it possible to use custom models fine-tuned on my local dataset with Ollama? #69333

Closed
opened 2026-05-04 17:49:09 -05:00 by GiteaMirror · 2 comments

Originally created by @barancaki on GitHub (Jun 9, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11028

Hi!

First of all, thanks for the awesome project. I’ve been experimenting with Ollama and really enjoy how easy it is to run models locally.

I was wondering:
• Is it currently possible to use a custom LLM that I’ve fine-tuned on my own dataset (e.g., using LoRA or QLoRA)?
• If yes, what would be the recommended process to package and load it into Ollama?
• If no, are there any plans to support this in future releases?

Thanks in advance for your help!

GiteaMirror added the feature request label 2026-05-04 17:49:09 -05:00

@rick-github commented on GitHub (Jun 9, 2025):

> Is it currently possible to use a custom LLM that I’ve fine-tuned on my own dataset (e.g., using LoRA or QLoRA)?

Yes, if the base model is supported by Ollama.

> what would be the recommended process to package and load it into Ollama?

Depends on what the fine-tuning process produces. If it's a LoRA adapter, create a Modelfile as described [here](https://github.com/ollama/ollama/blob/main/docs/import.md#Importing-a-fine-tuned-adapter-from-Safetensors-weights). Some fine-tuners can export Safetensors weights, which can be imported as described [here](https://github.com/ollama/ollama/blob/main/docs/import.md#Importing-a-model-from-Safetensors-weights). Unsloth goes as far as creating a [GGUF and Modelfile](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama), which you can just pull from HF with `ollama pull`.
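For the LoRA-adapter case, a minimal sketch of what the linked docs describe (the base-model name and the adapter path here are placeholders — substitute your own):

```
# Modelfile — hypothetical example; adjust the base model and adapter path
FROM llama3.2
ADAPTER ./my-lora-adapter
```

Then build and run the packaged model locally:

```
ollama create my-tuned-model -f Modelfile
ollama run my-tuned-model
```

Note the base model in `FROM` must be the same model the adapter was fine-tuned from, or the results will be erratic.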


@barancaki commented on GitHub (Jun 11, 2025):

Thank you so much @rick-github

Reference: github-starred/ollama#69333