[GH-ISSUE #7127] Difference in Function Call Support between Ollama and Unsloth for Llama 3.2 #4527

Closed
opened 2026-04-12 15:27:49 -05:00 by GiteaMirror · 2 comments

Originally created by @Saber120 on GitHub (Oct 8, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7127

What is the issue?

Hi,

I’ve fine-tuned a Llama 3.2 model using Unsloth, and when I try to enable function calling with Ollama, I receive a message indicating that the model does not support function calls. However, when I download the same Llama 3.2 model directly from Ollama, function calls work without any issues.

Could you clarify why the same model behaves differently in terms of function call support when trained with Unsloth versus when downloaded from Ollama? Is there something specific in the way Ollama’s version of the model is configured that enables function call support, or are there certain settings I need to adjust in the Unsloth training process to enable this feature?

Thanks in advance for your help!

OS

Linux

GPU

Other

CPU

Other

Ollama version

0.3.12

GiteaMirror added the bug label 2026-04-12 15:27:49 -05:00

@rick-github commented on GitHub (Oct 8, 2024):

Your finetuned model doesn't have a template that indicates it supports tools. Compare `ollama show --template llama3.2` with `ollama show --template llama3.2-finetuned`. It's probably possible to substitute the template in your finetuned model with the one from the original model.
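A minimal sketch of how that substitution could look, assuming the finetuned model is named `llama3.2-finetuned` (model names here are illustrative, not from the thread):

```shell
# Export the stock template, which includes the tool-calling sections
# ({{ if .Tools }} ... {{ end }}) that Ollama checks for tool support.
ollama show --template llama3.2 > stock-template.txt

# Build a Modelfile that layers the stock template onto the finetuned model.
{
  echo 'FROM llama3.2-finetuned'
  echo 'TEMPLATE """'
  cat stock-template.txt
  echo '"""'
} > Modelfile

# Create a new model that keeps the finetuned weights but uses the
# tool-aware template.
ollama create llama3.2-finetuned-tools -f Modelfile
```

After `ollama create`, requests with a `tools` parameter against `llama3.2-finetuned-tools` should no longer be rejected, since Ollama detects tool support from the template, not from the weights.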


@Saber120 commented on GitHub (Oct 9, 2024):

> Your finetuned model doesn't have a template that indicates it supports tools. Compare `ollama show --template llama3.2` with `ollama show --template llama3.2-finetuned`. It's probably possible to substitute the template in your finetuned model with the one from the original model.

Thank you very much for your help, it really worked


Reference: github-starred/ollama#4527