[GH-ISSUE #654] How to fine tune and use it with ollama? #292

Closed
opened 2026-04-12 09:50:02 -05:00 by GiteaMirror · 15 comments
Owner

Originally created by @thebigbone on GitHub (Sep 30, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/654

Is it possible to fine tune a model that I pull from ollama? What would be the general process for that?


@TahaScripts commented on GitHub (Sep 30, 2023):

I'd recommend downloading a model and fine-tuning it separately from ollama – ollama works best for serving it/testing prompts. Check [here on the readme](https://github.com/jmorganca/ollama#customize-your-own-model) for more info. You should end up with a GGUF or GGML file depending on how you build and fine-tune models.

Also, try to be more precise about your goals for fine-tuning. Do you want the LLM to work better with specific documents or contexts? [Langchain](https://python.langchain.com/docs/integrations/llms/ollama) offers a lot of features for that, and plugs right into Ollama. Let me know if that helps!
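The workflow described above – fine-tune elsewhere, end up with a GGUF, then serve it – can be sketched with a Modelfile that wraps the exported file. The file name below is a hypothetical placeholder:

```
# Modelfile — wrap a fine-tuned GGUF export so ollama can serve it
FROM ./my-finetuned-model.Q4_K_M.gguf

# optional: carry over the prompt template the fine-tune was trained with
TEMPLATE """{{ .System }}

{{ .Prompt }}"""
```

Registering and running it would then look like `ollama create my-finetune -f Modelfile` followed by `ollama run my-finetune`.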


@thebigbone commented on GitHub (Oct 1, 2023):

That was helpful. I will try to fine tune it and then load it in ollama. Thanks!


@PeterAronZentai commented on GitHub (Oct 20, 2023):

@thebigbone did you have any success? very much interested in this myself too.


@thebigbone commented on GitHub (Oct 23, 2023):

> @thebigbone did you have any success? very much interested in this myself too.

I did some fine-tuning using the Hugging Face library. I have not yet tested it with Ollama, which I will be doing soon enough.
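Bridging a Hugging Face fine-tune into Ollama generally goes through llama.cpp's conversion tooling. A minimal sketch, assuming a merged (non-adapter) checkpoint directory; all paths and model names are placeholders, and the script/binary names reflect current llama.cpp:

```shell
# Convert a Hugging Face fine-tune to GGUF with llama.cpp, then load it into ollama.
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# 1. Convert the fine-tuned checkpoint directory to a GGUF file
python llama.cpp/convert_hf_to_gguf.py ./my-finetuned-model \
    --outfile my-finetuned-model.f16.gguf --outtype f16

# 2. (Optional) quantize to shrink the file for local serving
#    (requires building llama.cpp first)
./llama.cpp/llama-quantize my-finetuned-model.f16.gguf \
    my-finetuned-model.Q4_K_M.gguf Q4_K_M

# 3. Point a Modelfile at the GGUF and register it with ollama
printf 'FROM ./my-finetuned-model.Q4_K_M.gguf\n' > Modelfile
ollama create my-finetune -f Modelfile
ollama run my-finetune "hello"
```

Note that LoRA/QLoRA adapters would need to be merged into the base weights first (or imported via the Modelfile `ADAPTER` instruction) before this conversion applies.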


@amritap-ef commented on GitHub (Nov 14, 2023):

@thebigbone how did your fine-tuned model work with Ollama? Also interested, as I found this issue: https://github.com/jmorganca/ollama/issues/1110


@nodox commented on GitHub (Jan 29, 2024):

Interested as well!


@mahbobyosf commented on GitHub (Feb 1, 2024):

Interested as well!


@DanJamesMills commented on GitHub (Mar 8, 2024):

I'm interested too.


@matthieuHenocque commented on GitHub (Apr 11, 2024):

Me too


@aosan commented on GitHub (Apr 13, 2024):

Not a direct LLM fine-tuning observation, but I can share my experience in getting the best out of an Ollama LLM without getting into training data.

I create a copy of the model with a role prompt, similar to an OpenAI GPT, using these instructions:
https://github.com/ollama/ollama/blob/main/docs/modelfile.md

My Modelfile for the role Savannah has this content:

```
FROM CapybaraHermes-2.5-Mistral-7b.Q5_K_M

# set the temperature (higher is more creative, lower is more coherent)
PARAMETER temperature 2

# set the system/role prompt
SYSTEM """
Meme Expert
Act as a creativity and communication expert, with witty, sassy, wise, and impactful comments suitable for online memes, with a combination of very high cultural awareness, linguistic abilities and skills, including:
Vocabulary: A strong vocabulary helps you to precisely and effectively convey your ideas in a limited space, as memes often require short and punchy text.
Syntax: Mastery of syntax allows you to construct sentences in a way that enhances the impact of your words and makes them more memorable.
Semantics: Understanding the meanings and nuances of words and phrases is essential for crafting clever and witty comments.
Puns and wordplay: The ability to create puns and wordplay adds humor and wit to your comments. This can make your memes more engaging and shareable.
Cultural references: Being knowledgeable about popular culture, trends, and current events allows you to craft relevant and timely memes that resonate with a wide audience.
Sarcasm and irony: Skillfully employing sarcasm and irony can make your comments sassy and impactful. However, it is essential to use these devices judiciously, as they can be misunderstood or misinterpreted.
Rhetorical devices: Using rhetorical devices such as metaphors, similes, and alliteration can make your comments more poetic, memorable, and impactful.
Emotional intelligence: Understanding your audience's emotions and sensitivities can help you craft comments that are more likely to resonate and make an impact.
Creativity: Memes often rely on unique and original ideas, so being able to think outside the box is crucial for creating content that stands out.
Adaptability: As language and culture constantly evolve, it's important to stay flexible and adapt your linguistic abilities to keep your memes fresh and engaging.
By engaging these linguistic abilities, you will create witty, sassy, wise, and impactful comments for online memes to entertain, provoke thought, and leave a lasting impression.
Mirror your readers' pain points in your writing. “Have you ever … ? Then what happened next was … ? And you tried … but it didn’t work, did it? Here’s why and how to fix it.” and reflect on the struggles, then provide solutions.
The audience for your writing is the business community. Your name is Savannah.
"""
```

I fine-tune the role prompt and temperature with different values, using one model as a baseline.
I usually test multiple model configurations using this tool:
https://github.com/aosan/VaultChat/tree/main/utensils/evaluate_llm (from my RAG project)

Results from evaluate_llm.sh are scored with this LLM Examiner:
https://chat.openai.com/g/g-WaEKsoStj-llm-examiner (my GPT)

Any of your roles fine-tuned for your personal documents can be used with VaultChat:
https://github.com/aosan/VaultChat (my RAG project)
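A configuration sweep like the one described above can also be scripted directly against Ollama's REST API. A minimal sketch using only the standard library, assuming a local Ollama server on the default port; the model name `savannah` is a placeholder for whatever you created with `ollama create`:

```python
import json
import urllib.request

# Default endpoint for a locally running Ollama server
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model, prompt, temperature):
    """Assemble a non-streaming /api/generate request body."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"temperature": temperature},
    }

def generate(model, prompt, temperature):
    """Send one generation request and return the response text."""
    data = json.dumps(build_payload(model, prompt, temperature)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example sweep (requires a running Ollama server):
# for t in (0.2, 0.8, 1.4):
#     print(t, generate("savannah", "Caption this: Monday meetings", t)[:80])
```

Keeping the prompt fixed and varying only `temperature` makes it easier to attribute output differences to the sampling setting rather than the role prompt.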


@Faraz1243 commented on GitHub (Apr 18, 2024):

> I'd recommend downloading a model and fine-tuning it separate from ollama – ollama works best for serving it/testing prompts. Check [here on the readme](https://github.com/jmorganca/ollama#customize-your-own-model) for more info. You should end up with a GGUF or GGML file depending on how you build and fine-tune models.
>
> Also, try to be more precise about your goals for fine-tuning. Do you want the LLM to work better with specific documents or contexts? [Langchain](https://python.langchain.com/docs/integrations/llms/ollama) offers a lot of features for that, and plugs right into Ollama. Let me know if that helps!

Can you please point me to resources related to fine-tuning Llama 2?


@insidesecurity-yhojann-aguilera commented on GitHub (Oct 5, 2024):

Ollama does not support training, and there is no plan to add it either. https://github.com/ollama/ollama/issues/156


@DaryllCulas commented on GitHub (Nov 2, 2024):

Interested. Looking forward to your findings, @thebigbone.


@mdsaifbarauni commented on GitHub (Dec 13, 2024):

Me too, interested in fine-tuning.


@unparadise commented on GitHub (Nov 3, 2025):

I am interested in this too.

Reference: github-starred/ollama#292