[GH-ISSUE #9941] Support for Tools in OpenAI calls for Gemma3 #53019

Open
opened 2026-04-29 01:41:21 -05:00 by GiteaMirror · 10 comments

Originally created by @mmb78 on GitHub (Mar 22, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9941

It would be great if Ollama supported calls from the OpenAI Python library that use tools with recent models like Gemma 3. For example:

```python
from pydantic import BaseModel

class ImageDescription(BaseModel):
    title: str
    description: str
    keywords: list[str]

schema = ImageDescription.model_json_schema()
response = client.chat.completions.create(
    ...
    tools=[{"type": "function", "function": {"name": "image_info", "parameters": schema}}],
    tool_choice={"type": "function", "function": {"name": "image_info"}},
    ...
)
```
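
For reference, a reply to such a call would be consumed roughly like this (a sketch, untested against Ollama; it assumes the elided request above was completed and that the server actually honors `tool_choice`):

```python
import json

# If the server emitted a structured tool call, the arguments arrive as a
# JSON string on message.tool_calls; otherwise .content holds plain text.
message = response.choices[0].message
if message.tool_calls:
    info = ImageDescription.model_validate_json(
        message.tool_calls[0].function.arguments
    )
else:
    # The model ignored the tool and answered in plain (possibly JSON) text.
    info = ImageDescription(**json.loads(message.content))
```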
GiteaMirror added the feature request label 2026-04-29 01:41:21 -05:00

@rick-github commented on GitHub (Mar 22, 2025):

You need to use a template for gemma3 that includes tool support. The position of the [Google engineers](https://huggingface.co/google/gemma-3-12b-it/discussions/11#67d467df3589638cc558a649) is that no dedicated tool support is required; you just add it to the model prompts. That means users need to roll their own, see #9680. Unfortunately, some of the attempts at a tool-supporting template in that issue don't work all that well.


@mmb78 commented on GitHub (Mar 22, 2025):

I had success by adding this TEMPLATE:
https://ollama.com/PetrosStav/gemma3-tools:12b/blobs/dbb9d04f85fb

to my models using a Modelfile:

```
FROM "gemma3:27b-it-q8_0"
TEMPLATE """ <template from PetrosStav> """
```

How to do that (building the model with `ollama create <name> -f Modelfile`) is explained here:
https://github.com/ollama/ollama/blob/main/docs/template.md

In addition, and this seems to be important, I give the model stricter instructions in my system message that it has to use the tools:

```python
system = "You have to respond to the prompts only using the provided tools."
```

I still sometimes get a "normal" response from the model that is not a tool call but a standard answer in JSON format. So I test for that and parse it, which works well. If even that fails, I re-run the prompt with a different seed.
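
For illustration, that fallback chain might look roughly like this (a sketch under assumptions: `ask_model(seed=...)` is a hypothetical helper that re-issues the chat request and returns the raw reply text):

```python
import json

def parse_reply(text: str) -> dict | None:
    """Accept a bare-JSON answer when no structured tool call came back."""
    try:
        obj = json.loads(text.strip())
        return obj if isinstance(obj, dict) else None
    except json.JSONDecodeError:
        return None

# Re-roll with a different seed when even lenient parsing fails.
result = None
for seed in (1, 2, 3):
    reply = ask_model(seed=seed)  # hypothetical wrapper, see lead-in
    result = parse_reply(reply)
    if result is not None:
        break
```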

Seems to work quite well for Gemma 3 12b q8.

It would be great to have a more extensive tutorial on template use in Ollama: when to use one, and how it is actually passed to the model. This seems potentially very useful!


@mmb78 commented on GitHub (Mar 23, 2025):

Maybe a small update if anyone is trying to get this to work and still has issues.

As mentioned, the template from
https://ollama.com/PetrosStav/gemma3-tools:12b/blobs/dbb9d04f85fb
worked almost perfectly for me, but from time to time (maybe once in 200-500 prompts) I would not get a tool call back, though I could still parse the response. Trying to understand why, I noticed one thing.

That template has this instruction:

```
{{- if .Tools }}
You can use these tools to help answer the user's question:
{{- range .Tools }}
{{ . }}
{{- end }}
When you need to use a tool, format your response as JSON as follows:
<tool>
{"name": "tool_name", "parameters": {"param1": "value1", "param2": "value2"}}
</tool>
```

However, some other templates instruct the LLM (Gemma 3) to wrap tool calls differently (from this comment: https://github.com/ollama/ollama/issues/9680#issuecomment-2722586870):

```
For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>
```

**The key difference is `<tool>` vs `<tool_call>`.**

So I switched to using this template for Gemma 3, and so far so good!

```
{{- if .System }}<start_of_turn>user
{{ .System }}
{{- if .Tools }}
You can use these tools to help answer the user's question:
{{- range .Tools }}
{{ . }}
{{- end }}
When you need to use a tool, format your response as JSON as follows:
<tool_call>
{"name": "tool_name", "parameters": {"param1": "value1", "param2": "value2"}}
</tool_call>
{{- end }}<end_of_turn>
<start_of_turn>model
I understand.
<end_of_turn>
{{- end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}<start_of_turn>user
{{ .Content }}<end_of_turn>
{{ if $last }}<start_of_turn>model
{{ end }}
{{- else if eq .Role "assistant" }}<start_of_turn>model
{{ if .ToolCalls }}
{{- range .ToolCalls }}
<tool>
{"name": "{{ .Function.Name }}", "parameters": {{ .Function.Arguments }}}
</tool>
{{- end }}
{{- else }}
{{ .Content }}
{{- end }}{{ if not $last }}<end_of_turn>
{{ end }}
{{- else if eq .Role "tool" }}<start_of_turn>user
<tool_response>
{{ .Content }}
</tool_response><end_of_turn>
{{ if $last }}<start_of_turn>model
{{ end }}
{{- end }}
{{- end }}
```

@davidshen84 commented on GitHub (Mar 25, 2025):

Based on my understanding, Ollama's template expects the LLM to emit some kind of `tool_call` tag so it can confidently extract tool call messages.

According to the links in https://github.com/ollama/ollama/issues/9941#issuecomment-2745248350, Google's engineers are very confident that Gemma3 will just generate the correct tool-call message.

So, can we give them the benefit of the doubt and make Ollama do a tool call without a model-specific tag?
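
Something like that tag-free detection could presumably be sketched as follows (a sketch; the accepted key names `parameters`/`arguments` are assumptions based on the formats quoted in this thread):

```python
import json

def detect_bare_tool_call(text: str) -> dict | None:
    """Treat a reply as a tool call iff it is valid JSON of the
    expected shape, with no wrapper tag required."""
    try:
        obj = json.loads(text.strip())
    except json.JSONDecodeError:
        return None
    if isinstance(obj, dict) and "name" in obj and (
        "parameters" in obj or "arguments" in obj
    ):
        return obj
    return None
```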


@mmb78 commented on GitHub (Mar 25, 2025):

One more update from more testing.

I'm prompting for a description of the content of an image, one image at a time (thousands of images in folders). I get close to perfect tool use with the 27b-q4 model on the first try. But with the exact same script (same prompt, same model template, etc.) and the 12b-q8 model, I get only about 50% successful tool calls. The rest is mostly JSON output, but not perfect, so it has to be parsed. About 1% of prompts fail outright (unknown reason; probably answers even more diverse than my JSON parsing can handle). Sometimes a new prompt with a different seed solves the problem (for about 1% of the overall photo analyses), and sometimes sending the "wrong" answer back and asking for tools helps (also ~1%).

Interestingly, the 27b-q4 model is only about 10-15% slower than the 12b-q8, and it is clearly better at following instructions, so it makes no sense to use the smaller model (if you have 32 GB of VRAM). Smaller models probably need more tweaking of the instructions (I did not try anything beyond tools, like providing examples), or they are just generally less precise. I did not experiment much with temperature; all my tests were at 0.1, and going to 0.0 does not seem to help much.


@davidshen84 commented on GitHub (Mar 26, 2025):

LangGraph CodeAct (source: YouTube, https://search.app/iM7iw)

Maybe this could help? We just use a normal chat model without the tool-calling feature, then let the model do all the coding and executing.



@JMLX42 commented on GitHub (Mar 26, 2025):

Google just dropped this article:

https://ai.google.dev/gemma/docs/capabilities/function-calling

And the Ollama gemma3 model just got an update.

Is function calling on the table now?


@nileshtrivedi commented on GitHub (Apr 21, 2025):

This is needed because many agent frameworks (e.g., Langflow) check for tool-calling support only as declared by Ollama.
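
For context, such frameworks usually make that decision from Ollama's own model metadata, roughly like this (a sketch; it assumes a recent Ollama build, where `/api/show` reports a `capabilities` list derived from the model's template):

```python
import requests

def supports_tools(model: str, host: str = "http://localhost:11434") -> bool:
    # Newer Ollama builds include a "capabilities" list in /api/show,
    # e.g. ["completion", "vision", "tools"]; a stock gemma3 template
    # does not reference .Tools, so "tools" would be absent.
    resp = requests.post(f"{host}/api/show", json={"model": model})
    resp.raise_for_status()
    return "tools" in resp.json().get("capabilities", [])
```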


@nileshtrivedi commented on GitHub (Jun 25, 2025):

Came across this repo which claims to solve it, but I have not tested it yet: https://github.com/IllFil/gemma3-ollama-tools


@zolakt commented on GitHub (Sep 9, 2025):

I see some of you here say you had success with PetrosStav/gemma3-tools:12b for controlling HomeAssistant via Ollama.

I'm having issues with this. Either it doesn't detect that it needs to use a tool, or it outputs the tool call as plain text.

[screenshot: the tool call emitted as plain text]

I'm trying to use it in Croatian, maybe that is the problem...

Does anyone have an idea whether there is some fix to make it work (e.g. some system prompt modifications), or a similar model based on gemma3 that works without issues?

I would really like to stick with gemma3, since it's pretty good in Croatian. It also has vision, so I don't need to run moondream (or some other small model) on the side for Frigate's GenAI.

The current alternative is qwen3:14b, which works fine with tools but isn't that great in Croatian (although it's better than most other models I tried), plus moondream on the side for Frigate, which is pretty poor at describing images compared to gemma.

UPDATE:
I've just tried [lukaspetrik/gemma3-tools](https://ollama.com/lukaspetrik/gemma3-tools), and it seems to work fine. It looks like just a bugfix of the PetrosStav version for the latest Ollama. Unless someone has a better suggestion, I'm sticking with this.
