[GH-ISSUE #9680] gemma3 lack function calling tag #52829

Closed
opened 2026-04-29 01:04:29 -05:00 by GiteaMirror · 49 comments

Originally created by @DoiiarX on GitHub (Mar 12, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9680

Originally assigned to: @ParthSareen on GitHub.

What is the issue?

gemma3 lacks the function calling tag

![Image](https://github.com/user-attachments/assets/c4153d16-72d5-4edd-8f02-03fb4790f8bd)

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-29 01:04:29 -05:00

@Marcisbee commented on GitHub (Mar 12, 2025):

4b+ supports `vision`. But I don't see that it actually supports tools; am I missing something?

I get this: `{"error":"registry.ollama.ai/library/gemma3:4b does not support tools"}`
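For readers reproducing this: the error above is returned by Ollama's `/api/chat` endpoint when the request carries a `tools` array but the model is not tagged with the tools capability. A minimal sketch of such a request body (editor's addition, not from the thread; the `get_weather` tool is a made-up example):

```python
import json

# Shape of an Ollama /api/chat request that triggers the
# "does not support tools" error for a model without the tools tag.
# The tool definition below is illustrative, not a real API from this thread.
payload = {
    "model": "gemma3:4b",
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool name
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "required": ["city"],
                    "properties": {"city": {"type": "string"}},
                },
            },
        }
    ],
}

body = json.dumps(payload)  # POST this to http://localhost:11434/api/chat
```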


@CesarPetrescu commented on GitHub (Mar 12, 2025):

I have the same issue, while https://blog.google/technology/developers/gemma-3/ says:
"Create AI-driven workflows using function calling: Gemma 3 supports function calling and structured output to help you automate tasks and build agentic experiences."
Is there any way to fix it in ollama?


@rick-github commented on GitHub (Mar 12, 2025):

Function calling is not mentioned on the HuggingFace repos for gemma3 and the chat_template makes no mention of tools. The template can be modified to support a generic tool use capability, but if the model was actually tuned for tool use, it would be best to use a suitable template. Until Google releases the details I think it's a matter of rolling your own and hoping the results are good enough.


@rick-github commented on GitHub (Mar 12, 2025):

Parsing the tool calls has nothing to do with why gemma3 doesn't support tools.


@rick-github commented on GitHub (Mar 12, 2025):

> The last tool calling model published for a half year ago.

[phi4-mini](https://ollama.com/library/phi4-mini) was published 11 days ago.

> This is nasty, ollama such a toy, It is useless without tools

ollama can be quite useful without tools, but these days it's certainly true that tool use expands the scope of deployment. Fortunately tool support is [quite good](https://github.com/ollama/ollama/issues/8287#issuecomment-2581625140), but there's always room for improvement, so we'll see what happens now that the 0.6 series has started.


@tripolskypetr commented on GitHub (Mar 12, 2025):

> > The last tool calling model published for a half year ago.
>
> [phi4-mini](https://ollama.com/library/phi4-mini) was published 11 days ago.
>
> > This is nasty, ollama such a toy, It is useless without tools
>
> ollama can be quite useful without tools, but these days it's certainly true that tool use expands the scope of deployment. Fortunately tool support is [quite good](https://github.com/ollama/ollama/issues/8287#issuecomment-2581625140), but there's always room for improvement, so we'll see what happens now that the 0.6 series has started.

Tools are the basis of AI development because they are the foundation for Agent Swarm implementations. Without them, all a model can do is work in demo mode with obsolete data and no third-party integrations.


@rick-github commented on GitHub (Mar 12, 2025):

> Tools are the AI development because they are the base for Agent Swarm implementation. Without them, all the model can do is to work in demo mode with obsolete data without third party integrations

Then it's fortunate that ollama supports tool-using models and has many [third party integrations](https://github.com/ollama/ollama?tab=readme-ov-file#community-integrations) for use in deploying ollama-based systems.


@joaquindas commented on GitHub (Mar 12, 2025):

The [model card on HF](https://huggingface.co/google/gemma-3-27b-it/raw/main/tokenizer_config.json) also doesn't have a role for tools. Does the model inherently support function calls?


@rick-github commented on GitHub (Mar 12, 2025):

The [blog post](https://blog.google/technology/developers/gemma-3/#:~:text=Create%20AI%2Ddriven%20workflows%20using%20function%20calling) says it does, but neither the HF page nor the `chat_template` makes any indication of support.


@tripolskypetr commented on GitHub (Mar 12, 2025):

> The [model card on HF](https://huggingface.co/google/gemma-3-27b-it/raw/main/tokenizer_config.json) also doesn't have a role for tools. Does the model inherently support function calls?

It does; try it on LM Studio.

The problem is the Ollama team does not even test whether tool calls work when publishing a model with the `tools` tag. For example, `nemotron-mini` got the tools tag but it does not call the tools.

So they simply started to publish every model without the tools label, even if the model supports tool calls.


@joaquindas commented on GitHub (Mar 12, 2025):

> The [blog post](https://blog.google/technology/developers/gemma-3/#:~:text=Create%20AI%2Ddriven%20workflows%20using%20function%20calling) says it does, but neither the HF page nor the `chat_template` make any indication of support.

It's a bit confusing because the Google team were the ones that uploaded the model to HF with the relevant configs. Either they messed something up or we're missing something?


@joaquindas commented on GitHub (Mar 12, 2025):

> It is, try on LMStudio

I tried looking for it, but couldn't find it [here](https://lmstudio.ai/models). Double-checking that it's not Gemma2 you're talking about?


@rick-github commented on GitHub (Mar 12, 2025):

> It is, try on LMStudio

Not available from the LM Studio library. Did you import from elsewhere?

> The problem is Ollama Team does not even test are tool calls working when publishing the model with `tools` tag. For example, `nemotron-mini` got the tools tag but it does not call the tools

nemotron-mini does support tools, see [here](https://github.com/ollama/ollama/issues/8287#issuecomment-2572172759).

> So they simple started to publish every model without tools label. Even if the model support tool calls

[phi4-mini](https://ollama.com/library/phi4-mini) was published 11 days ago and has a `tools` label.


@rick-github commented on GitHub (Mar 12, 2025):

> It's a bit confusing bc the google team was the ones that uploaded the model to HF with relevant configs. Either they messed something up or we're missing something?

I've read through the tech report and browsed their Kaggle, HF and cloud.google sites, and there are no concrete examples of tool use. I'm wondering if it's a feature of their AI Studio platform. I've probed the model for tool support and it seems to respond in the right way. I'll see if I can spin up a tool-using template, even if not in the format it might have been trained for.


@kucukkanat commented on GitHub (Mar 12, 2025):

@tripolskypetr this is an open source project. the model is an open source one. if you are unhappy stop shitmouthing, contribute or fork, or you can go "entertain" yourself


@tripolskypetr commented on GitHub (Mar 12, 2025):

> @tripolskypetr this is an open source project. the model is an open source one. if you are unhappy stop shitmouthing, contribute or fork, or you can go "entertain" yourself

This is exactly what I am talking about. The models published to the ollama registry are fake: to use ollama you have to download them, fix them and re-upload them.

And it does not guarantee model quality: you will definitely spend time writing your own system prompt for a model, and the model itself can still be unusable.

As an open source contributor I am making these facts publicly available. People must know that the problem of low-quality ollama models still exists and the maintainers do nothing about it.


@rick-github commented on GitHub (Mar 12, 2025):

> This is exactly what I am talking about. The models published to ollama registry are fake: to use ollama you have to download them, fix them and upload them.

And yet many people are using the default models without a problem.

> And it does not guaratee the model quality: you defenitely will spend a time for writing your own system prompt for a model, but the model itself can be unusable

Prompt engineering is a skill that many developers need. However, for the default case the existing template seems to work fine. For example, the nemotron-mini model works with the default system prompt as shown [here](https://github.com/ollama/ollama/issues/8287#issuecomment-2572172759).

> As an open source contributor a am making these facts public available. People must known the problem about low quality of ollama models still exist and maintainers do nothing about it

If there's a genuine problem with a model, the maintainers will fix it. Recently phi4 was not ready [out of the gate](https://github.com/ollama/ollama/issues/9412) and it was fixed in a few hours.


@rick-github commented on GitHub (Mar 12, 2025):

So the model is actually pretty good at generating tool calls, but not so great at processing the result of a tool call. The model doesn't have a `tool` role or tokens like `ipython` or `<tool>` to indicate to the model that it's getting generated data. Despite what the blog says, I'm leaning towards this model not having been trained in tool use.


@joaquindas commented on GitHub (Mar 13, 2025):

> So the model is actually pretty good at generating tool calls, but not so great at processing the result of a tool call. The model doesn't have a `tool` role or have tokens like `ipython` or `<tool>` to indicate to the model that it's getting generated data. Despite what the blog says, I'm leaning towards this model not having been trained in tool use.

How are you testing this if the model doesn't have special tokens for tool inputs or outputs?


@rick-github commented on GitHub (Mar 13, 2025):

Getting the model to generate tool requests is straightforward; [most models](https://github.com/ollama/ollama/issues/6061#issuecomment-2257075560) are capable of that with a change to the template. Processing tool call results is where the model struggles, due to the lack of special tokens. I'm trying out a few variations and so far the results aren't great, but I might hit on the magic sauce, we'll see.


@ParthSareen commented on GitHub (Mar 13, 2025):

Hey everyone, the Deepmind team worked with us pre-launch and decided to hold off on allowing function calling at the moment. It's being looked into from their end and we'll update the model and modelfiles if that happens.


@maglat commented on GitHub (Mar 13, 2025):

> Hey everyone, the Deepmind team worked with us pre-launch and decided to hold off on allowing function calling at the moment. It's being looked into from their end and we'll update the model and modelfiles if that happens.

Thank you for the clarification. Is there any estimate of when they plan to integrate function calling?


@tripolskypetr commented on GitHub (Mar 13, 2025):

As mentioned before, some of us are really waiting for a model with stable tool calling. I hoped it would be deepseek, but no.


@ParthSareen commented on GitHub (Mar 13, 2025):

> Thank you for clarification. Was there any estimation when they plan to integrate function calling?

Not aware of the timeline as of yet, but Ollama will support it as soon as there is official support. Will keep you all posted if there are any updates!


@eugene-kamenev commented on GitHub (Mar 13, 2025):

```go
{{- $isFirst := true }}
{{- $prevRole := "model"}}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{- $role := .Role -}}
{{- if eq .Role "assistant" }}
{{- $role = "model" -}}
{{- else }}
{{- $role = "user" -}}
{{- end -}}
{{- if ne $prevRole $role -}}{{- if not $isFirst }}<end_of_turn>
{{ end }}<start_of_turn>{{ $role }}
{{- if and (not $isFirst) .ToolCalls }}
{{- range .ToolCalls }}
{{- printf "\n<tool_call>\n{\"name\": \"%s\", \"arguments\": %s}\n</tool_call>" .Function.Name (json .Function.Arguments) }}
{{- end }}
{{- else if and $isFirst $.Tools }}
# Tools

You may call one or more functions to assist with the user query.

You are provided with function signatures within <tools></tools> XML tags:
<tools>
{{- range $.Tools }}
{"type": "function", "function": {{ .Function }}}
{{- end }}
</tools>

For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>

{{- end }}
{{- end }}
{{- if eq .Role "tool" }}
<tool_response>{{.Content}}</tool_response>
{{- else }}
{{.Content}}
{{- end }}
{{- $prevRole = $role }}
{{- $isFirst = false }}
{{- if $last -}}<end_of_turn>
<start_of_turn>model
{{ end }}
{{- end }}
```

Rendered output:

```html
<start_of_turn>user
# Tools

You may call one or more functions to assist with the user query.

You are provided with function signatures within <tools></tools> XML tags:
<tools>
{"type": "function", "function": {"name":"getSquareRoot","description":"Returns square root of a number","parameters":{"type":"object","required":["x"],"properties":{"x":{"type":"number","description":""}}}}}
{"type": "function", "function": {"name":"getSum","description":"Returns sum of two numbers","parameters":{"type":"object","required":["x","y"],"properties":{"x":{"type":"number","description":""},"y":{"type":"number","description":""}}}}}
</tools>

For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>
You are a very helpful AI assistant with tools.
What is the square root of 475695037565? And what is the sum of 44.101523499 and 500.213455?<end_of_turn>
<start_of_turn>model
<tool_call>
{"name": "getSquareRoot", "arguments": {"x":475695037565}}
</tool_call>
<tool_call>
{"name": "getSum", "arguments": {"x":44.101523499,"y":500.213455}}
</tool_call>
<end_of_turn>
<start_of_turn>user
<tool_response>689706.4865324959</tool_response>
<!-- here i have a patch in ollama to not merge subsequent tools, you may need to adjust -->
<tool_response>544.314978499</tool_response><end_of_turn>
<start_of_turn>model
```

This template seems to work for tool calling and tool response handling. Tested with gemma3:27b, will try others tomorrow.

To test template rendering differences between jinja2 and Ollama I created a simple online tool: [ollama-template-test](https://eugene-kamenev.github.io/ollama-template-test/).
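On the client side, output produced under a template like this still has to be parsed back into structured calls. A minimal sketch of that step (editor's addition, not from the thread; the regex and function name are illustrative):

```python
import json
import re

# Extract <tool_call>...</tool_call> JSON blocks of the kind the template
# above instructs the model to emit. This is a sketch, not Ollama's parser.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def parse_tool_calls(text: str) -> list[dict]:
    """Return parsed tool calls as [{'name': ..., 'arguments': {...}}, ...]."""
    return [json.loads(m) for m in TOOL_CALL_RE.findall(text)]

output = """<tool_call>
{"name": "getSquareRoot", "arguments": {"x": 475695037565}}
</tool_call>
<tool_call>
{"name": "getSum", "arguments": {"x": 44.101523499, "y": 500.213455}}
</tool_call>"""

calls = parse_tool_calls(output)
# calls[0] is the getSquareRoot call, calls[1] the getSum call
```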


@ParthSareen commented on GitHub (Mar 13, 2025):

Really cool work @eugene-kamenev! This is super neat!


@ParthSareen commented on GitHub (Mar 13, 2025):

> As I said, the default system prompts in ollama registry are completely unusable. Each of us forced to write our own tool calling prompts, but not everyone got required tech skills to do that

Hey @tripolskypetr,

We've had this discussion before. We'll continue to focus on the support that the model makers have outlined, which means that if we're instructed that a model has no tool support, we will follow that. If you have an issue with this you are welcome to create your own templates or tool calling prompts. This will not be discussed further.


@brenzel commented on GitHub (Mar 13, 2025):

I have successfully tested this ollama model with tools:

https://ollama.com/PetrosStav/gemma3-tools


@CesarPetrescu commented on GitHub (Mar 13, 2025):

Hello, for me gemma3 works with LM Studio, so it might be an ollama-related issue.

https://ollama.com/PetrosStav/gemma3-tools didn't work on Ollama with Flowise for me.


@jmadden91 commented on GitHub (Mar 14, 2025):

> I have successfully tested this ollama model with tools:
>
> https://ollama.com/PetrosStav/gemma3-tools

This seems to work perfectly with Home Assistant Assist tool calling.


@DoiiarX commented on GitHub (Mar 14, 2025):

> I have successfully tested this ollama model with tools:
>
> https://ollama.com/PetrosStav/gemma3-tools

Works for me, thanks.


@CesarPetrescu commented on GitHub (Mar 14, 2025):

Update: https://ollama.com/PetrosStav/gemma3-tools works for me too; maybe at first I made a mistake myself. Now everything is fine!


@atoulmin commented on GitHub (Mar 14, 2025):

Hmmm still doesn’t work for me with https://ollama.com/PetrosStav/gemma3-tools

@oybekdevuz commented on GitHub (Mar 21, 2025):

Based on the PetrosStav/gemma3-tools solution, I have just fixed command-r, which was also broken.

If your Gemma calls tools that don't exist, add those lines like in my template:

[Screenshot: template additions]

https://ollama.com/oybekdevuz/command-r

@mmb78 commented on GitHub (Mar 22, 2025):

I tried this with PetrosStav/gemma3-tools:12b:

```python
from pydantic import BaseModel

class ImageDescription(BaseModel):
    title: str
    description: str
    keywords: list[str]

schema = ImageDescription.model_json_schema()

response = client.chat.completions.create(
    model=llm_model,
    messages=messages,
    tools=[{"type": "function", "function": {"name": "image_info", "parameters": schema}}],
    tool_choice={"type": "function", "function": {"name": "image_info"}},
)

tool_calls = response.choices[0].message.tool_calls
print(response)
```

```
ChatCompletion(id='chatcmpl-385', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Alpine Landscape\n\nAlpine, mountains, landscape, nature, trees, grass, sky, clouds, hills, valley, forest, meadow, wood, rural, outdoors, scenic, panorama, vegetation, foliage, peak, summit, elevation, tranquility, serenity, pastoral, idyllic, green, blue, wood cabin, fence, horizon, daytime.\n', refusal=None, role='assistant', annotations=None, audio=None, function_call=None, tool_calls=None))], created=1742604503, model='PetrosStav/gemma3-tools:12b', object='chat.completion', service_tier=None, system_fingerprint='fp_ollama', usage=CompletionUsage(completion_tokens=70, prompt_tokens=506, total_tokens=576, completion_tokens_details=None, prompt_tokens_details=None))
```

Of course, `tool_calls` was empty.

The same code works with gpt-4o-mini, where the response looks like this (shortened):

```
ChatCompletion(id='', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content=None, refusal=None, role='assistant', annotations=[], audio=None, function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='', function=Function(arguments='{"title":"Man in Window of Wooden House","description":"A man is sitting in a window of a wooden house, smiling and holding an object. The house features a rustic wooden exterior with multiple windows, some of which have wooden shutters. The lower part of the house is painted white, contrasting with the dark wood above. There is a bench below the window and a grassy area in front.","keywords":["man","window","wooden house","shutters","smiling","holding object","rustic","exterior","white","bench","grassy area","multiple windows","dark wood","house","architecture","outdoor","nature","sitting","interior","scenery","view","facade","home","country","rural","landscape","summer","casual","clothing","happy"]}', name='image_info'), type='function')]))], created=1742604423, model='gpt-4o-mini-2024-07-18', object='chat.completion', service_tier='default', system_fingerprint='', usage=CompletionUsage(completion_tokens=163, prompt_tokens=25655, total_tokens=25818, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0)))
```

Any ideas how to use tools with Ollama properly, in a way similar to the OpenAI models?
Thank you!
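[Mirror note] For what it's worth, one workaround that avoids tool calling entirely is Ollama's structured-output support: the `/api/chat` endpoint accepts a JSON schema in the `format` field and constrains generation to it. A minimal sketch, assuming that feature is available in your Ollama version; the model name and message content are placeholders, and the schema is a hand-written equivalent of the Pydantic model above:

```python
import json

# Hand-written JSON schema, equivalent to the ImageDescription Pydantic model.
IMAGE_SCHEMA = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "description": {"type": "string"},
        "keywords": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["title", "description", "keywords"],
}

def build_chat_payload(model: str, messages: list) -> dict:
    """Body for POST /api/chat: `format` constrains the reply to the schema."""
    return {
        "model": model,
        "messages": messages,
        "format": IMAGE_SCHEMA,  # structured output instead of tool calling
        "stream": False,
    }

payload = build_chat_payload(
    "gemma3:12b",  # placeholder model tag
    [{"role": "user", "content": "Describe this image."}],
)
print(json.dumps(payload["format"]["required"]))  # ["title", "description", "keywords"]
```

The reply then arrives as schema-conforming JSON in `message.content`, which you can parse yourself instead of waiting for a `tool_calls` field that gemma3 never fills.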

@tripolskypetr commented on GitHub (Mar 22, 2025):

@mmb78 This repo (https://github.com/tripolskypetr/agent-swarm-kit) contains several tool-calling projects which can be used with Ollama:

```typescript
// Imports are assumed from agent-swarm-kit's public API and the ollama client.
import { Ollama } from "ollama";
import {
  addTool,
  addCompletion,
  addAgent,
  changeAgent,
  execute,
  Adapter,
} from "agent-swarm-kit";

const NAVIGATE_TOOL = addTool({
  toolName: "navigate-tool",
  call: async (clientId, agentName, { to }) => {
    await changeAgent(to, clientId);
    await execute("Navigation complete. Notify the user", clientId);
  },
  type: "function",
  function: {
    name: "navigate-tool",
    description: "The tool for navigation",
    parameters: {
      type: "object",
      properties: {
        to: {
          type: "string",
          description: "The target agent for navigation",
        },
      },
      required: ["to"],
    },
  },
});

const ollama = new Ollama({ host: process.env.OLLAMA_HOST });

const OLLAMA_COMPLETION = addCompletion({
  completionName: "navigate-completion",
  getCompletion: Adapter.fromOllama(ollama, "nemotron-mini:4b"), // "tripolskypetr/gemma3-tools:4b"
});

const TRIAGE_AGENT = addAgent({
  agentName: "triage-agent",
  completion: OLLAMA_COMPLETION,
  prompt: "You are to triage a users request, and call a tool to transfer to the right agent. There are two agents available: `sales-agent` and `refund-agent`",
  tools: [NAVIGATE_TOOL],
});
```

https://github.com/tripolskypetr/agent-swarm-kit/blob/master/demo/cohere-token-rotate/src/logic/completion/ollama.completion.ts

@mmb78 commented on GitHub (Mar 22, 2025):

Sorry for a potentially stupid question, but the extra template that makes Gemma3 understand tools: is this something the model receives when loaded into memory, or is it added to each prompt? My point is that my prompts have a "system" part; would that override this "template", or is it a separate set of instructions? Can one just add such a "template" to any model? How to do that?

@tripolskypetr commented on GitHub (Mar 22, 2025):

@mmb78

There are only two options for tool calls. The first is to patch the Modelfile with these lines:

[Screenshot: Modelfile template additions]

Or inject this message on top of each conversation, as done in agent-swarm-kit (src/classes/Adapter.ts). This is the easiest way to fix the tools:

[Screenshot: injected tool-protocol message]
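[Mirror note] The injection approach above can be sketched in Python. The protocol text is illustrative (modeled on the gemma3-tools-style templates discussed in this thread), and the helper name `with_tool_prompt` is hypothetical:

```python
import json

# Illustrative tool-call protocol, mirroring what gemma3-tools-style
# templates instruct the model to emit.
TOOL_PROTOCOL = (
    "When you need to use a tool, reply ONLY with:\n"
    "<tool_call>\n"
    '{"name": <function-name>, "arguments": <args-json-object>}\n'
    "</tool_call>"
)

def with_tool_prompt(messages: list, tools: list) -> list:
    """Prepend a system message describing the tools and the call format."""
    tool_list = "\n".join(json.dumps(t) for t in tools)
    system = {
        "role": "system",
        "content": f"You can use these tools:\n{tool_list}\n\n{TOOL_PROTOCOL}",
    }
    return [system] + list(messages)

msgs = with_tool_prompt(
    [{"role": "user", "content": "What's the weather in Riga?"}],
    [{"name": "get_weather",
      "parameters": {"type": "object",
                     "properties": {"city": {"type": "string"}}}}],
)
# msgs[0] is the injected system message; the user turn follows unchanged.
```

Because the instructions travel with every conversation rather than living in the Modelfile, this works against any model without re-creating it, at the cost of a few hundred extra prompt tokens per request.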

@ParthSareen commented on GitHub (Mar 22, 2025):

> I tried this with: PetrosStav/gemma3-tools:12b
> Any ideas how to use tools with Ollama properly in a similar way like OpenAI models?
> Thank you!

Hi @mmb78,

I'd recommend trying out another model for tools https://ollama.com/search?c=tools

Gemma3 does not have official tool support as it was not trained for it. Hope this helps!

@mmb78 commented on GitHub (Mar 22, 2025):

Actually ... had quite a good success as explained here:
https://github.com/ollama/ollama/issues/9941#issuecomment-2745370597

@mmb78 commented on GitHub (Mar 23, 2025):

> @mmb78
>
> There are only two options for tool calls. The first is to patch the Modelfile with these lines:
>
> [Screenshot: Modelfile template additions]
>
> Or inject this message on top of each conversation, as done in agent-swarm-kit. This is the easiest way to fix the tools:
>
> [Screenshot: injected tool-protocol message]

Thank you for your help!
I noticed one thing: this template (which works well, but not perfectly, for me):
https://ollama.com/PetrosStav/gemma3-tools:12b/blobs/dbb9d04f85fb

has this instruction:

```
{{- if .Tools }}
You can use these tools to help answer the user's question:
{{- range .Tools }}
{{ . }}
{{- end }}
When you need to use a tool, format your response as JSON as follows:
<tool>
{"name": "tool_name", "parameters": {"param1": "value1", "param2": "value2"}}
</tool>
```

However, some other templates instruct the LLM that tool calls should be wrapped differently (as mentioned above): https://github.com/ollama/ollama/issues/9680#issuecomment-2722586870

```
For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>
```

The key difference is `<tool>` vs `<tool_call>`.

I'm not sure how Ollama parses the LLM output to decide whether it should return a successful "tool call", but this small difference may explain why it sometimes fails with this model:
https://ollama.com/PetrosStav/gemma3-tools
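[Mirror note] To see why the tag matters: a parser that looks for one tag will silently miss calls wrapped in the other. A minimal sketch of such an extractor (not Ollama's actual parser, which is internal):

```python
import json
import re

def extract_tool_calls(text: str, tag: str = "tool_call") -> list:
    """Pull the JSON payloads out of <tag>...</tag> blocks in model output."""
    pattern = re.compile(rf"<{tag}>\s*(\{{.*?\}})\s*</{tag}>", re.DOTALL)
    return [json.loads(m) for m in pattern.findall(text)]

# Model trained/prompted to emit <tool>, parser expecting <tool_call>:
out = '<tool>\n{"name": "image_info", "parameters": {}}\n</tool>'
print(extract_tool_calls(out))              # [] -- the call is silently missed
print(extract_tool_calls(out, tag="tool"))  # [{'name': 'image_info', 'parameters': {}}]
```

So if the template tells the model to emit `<tool>` while the server-side parser scans for `<tool_call>` (or vice versa), the response ends up in `content` as plain text and `tool_calls` stays empty, which matches the behavior reported above.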

@tripolskypetr commented on GitHub (Mar 23, 2025):

@mmb78

If you're not sure, change the third parameter of the Adapter:

```typescript
Adapter.fromOllama(ollama, "model", "tool calling prompt")
//                                   ^^^^^^^^^^^^^^^^^^^^
```

The difference between `<tool>` and `<tool_call>` was discussed in this issue: https://github.com/ollama/ollama/issues/8287

[Screenshot: issue discussion]

If the tools are not being called from time to time, try the 27b version.

@ParthSareen commented on GitHub (Mar 23, 2025):

Closing this issue out for now as it's not within scope. When there are updates to the model I'll follow up here!

@JMLX42 commented on GitHub (Mar 26, 2025):

Google just dropped this article:

https://ai.google.dev/gemma/docs/capabilities/function-calling

And the Ollama gemma3 model just got an update.

Is function calling on the table now?

@ParthSareen commented on GitHub (Mar 26, 2025):

Hey @JMLX42 - this is basically what I experimented with when trying to template it out, as many people have done now. The article mentions that this is part of the prompt and that the model can return output (which would be under the `content` field) as a tool call in Python or JSON. The model is still not trained on the tool-focused keywords, which means you can't reliably do things like passing tool results back in to have the model explain them or use them in another way.

So at this time, while we are not officially supporting it we are working with the Gemma team to make sure the experience is the best it can be :) Hope this brings some clarity.

However, planning to test this out a bunch more and see if reliability is "good enough" at a certain size.

@softmarshmallow commented on GitHub (Apr 20, 2025):

Haven't tried it myself, but someone made a tool-compatible distro.

https://ollama.com/PetrosStav/gemma3-tools:12b

@tonydamage commented on GitHub (May 19, 2025):

I think it is not just about emitting the correct tool-calling command. Playing with gemma3-tools and comparing against other models, Gemma3 tends to analyze the returned data and propose writing a code parser for the returned JSON or tables, whereas the others (qwen2.5, qwen3, command-r7b) actually use the returned data to answer the user's questions.

@markemus commented on GitHub (Jun 4, 2025):

Is this still not planned? I was really hoping to set up an agent with Gemma 3, and the tool-compatible distro above is not working with LangChain.

@ParthSareen commented on GitHub (Jun 4, 2025):

I really want to have reliable tools in Gemma, but I'm finding it difficult to get the 4b model to call tools. Definitely trying to get something working though :D

Reference: github-starred/ollama#52829