[GH-ISSUE #9802] Messages[].ToolCalls not passed correctly to the template #52923

Open
opened 2026-04-29 01:24:24 -05:00 by GiteaMirror · 10 comments

Originally created by @alejomongua on GitHub (Mar 17, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9802

Originally assigned to: @ParthSareen on GitHub.

What is the issue?

Ollama's template system is not correctly processing the tool_calls field in assistant messages. When using a custom template with logic to render tool calls (e.g., if .ToolCalls), the template skips rendering the tool call, even though the request includes a valid tool_calls array in an assistant message. This results in the assistant's turn being empty (<start_of_turn>model<end_of_turn>) instead of showing the expected <tool> section with the tool invocation details.

Expected Behavior:

The template should recognize the tool_calls field in the assistant message and render the tool call using the provided logic (e.g., <tool>{"name": "tool_name", "parameters": {...}}</tool>).

Observed Behavior:

In the parsed output, the assistant's turn where the tool should be invoked is empty, indicating that the .ToolCalls variable is not being populated or recognized in the template context. Testing with a modified template shows that the if .ToolCalls condition takes the "else" path, as if no tool calls were present, even though they are included in the request.

Steps to Reproduce:

  1. Use this modelfile to create a new model named Gemma3-tools:4b
FROM gemma3:4b
TEMPLATE """{{- if .System }}<start_of_turn>user
{{ .System }}
{{- if .Tools }}
You can use these tools to answer the user's questions:
{{- range .Tools }}
{{ . }}
{{- end }}
When you need to use a tool, format the request in this way:
<tool>
{"name": "tool_name", "parameters": {"param1": "value1", "param2": "value2"}}
</tool>
{{- end }}<end_of_turn>
<start_of_turn>model
Understood.
<end_of_turn>
{{- end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}<start_of_turn>user
{{ .Content }}<end_of_turn>
{{ if $last }}<start_of_turn>model
{{ end }}
{{- else if eq .Role "assistant" }}<start_of_turn>model
{{- if .ToolCalls }}
<tool>
{{- range .ToolCalls }}
{"name": "{{ .Function.Name }}", "parameters": {{ .Function.Arguments }}}
{{- end }}
</tool>
{{- else }}[NO TOOLS CALLED]
{{ .Content }}
{{- end }}{{ if not $last }}<end_of_turn>
{{ end }}
{{- else if eq .Role "tool" }}<start_of_turn>user
<tool_response>
{{ .Content }}
</tool_response><end_of_turn>
{{ if $last }}<start_of_turn>model
{{ end }}
{{- end }}
{{- end }}"""
  2. Enable debug to view the parsed template
  3. Send a request to the model like this one:
curl -X POST \
  -H "Content-Type: application/json" \
  -d @- \
  http://localhost:11434/v1/chat/completions <<EOF
{
  "messages": [
    {
      "role": "system",
      "content": "\nYou are a customer service agent who helps users to solve their problems.\nTo respond to a client, use the \"final_answer\" tool that is specified later.\nThe text outside the tools will be ignored."
    },
    {
      "role": "system",
      "content": "You are getting a request from this client: \n## NAME\nJohn Doe\n\n## EMAIL\njohn.doe@example.com\n\n"
    },
    {
      "role": "user",
      "content": "Hello, I have a problem with my order. I need help to solve it. The order number is 1234567890"
    },
    {
      "role": "assistant",
      "content": "",
      "tool_calls": [
        {
          "id": "call_4yswu3m4",
          "type": "function",
          "function": {
            "name": "get_order",
            "arguments": "{\"number\":\"1234567890\"}"
          }
        }
      ]
    },
    {
      "role": "tool",
      "tool_call_id": "call_4yswu3m4",
      "content": "Order not found."
    }
  ],
  "model": "Gemma3-tools:4b",
  "n": 1,
  "stream": false,
  "tool_choice": "required",
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_order",
        "description": "Gets the order from the client based on the number.",
        "parameters": {
          "properties": {
            "number": {
              "title": "number",
              "type": "string"
            }
          },
          "required": [
            "number"
          ],
          "type": "object",
          "additionalProperties": false
        }
      }
    },
    {
      "type": "function",
      "function": {
        "name": "final_answer",
        "description": "This tool is used to provide the final answer to the user.",
        "parameters": {
          "properties": {
            "response": {
              "title": "Response",
              "type": "string"
            },
            "needs_escalation": {
              "title": "Needs Escalation",
              "type": "boolean"
            },
            "follow_up_required": {
              "title": "Follow Up Required",
              "type": "boolean"
            },
            "sentiment": {
              "description": "Sentiment of the client",
              "title": "Sentiment",
              "type": "string"
            }
          },
          "required": [
            "response",
            "needs_escalation",
            "follow_up_required",
            "sentiment"
          ],
          "title": "ResponseModel",
          "type": "object"
        }
      }
    }
  ]
}
EOF 

Observe that the input to the model (the parsed template) is this:

<start_of_turn>user

You are a customer service agent who helps users to solve their problems.
To respond to a client, use the "final_answer" tool that is specified later.
The text outside the tools will be ignored.

You are getting a request from this client: 
## NAME
John Doe

## EMAIL
john.doe@example.com


You can use these tools to answer the user's questions:
{"type":"function","function":{"name":"get_order","description":"Gets the order from the client based on the number.","parameters":{"type":"object","required":["number"],"properties":{"number":{"type":"string","description":""}}}}}
{"type":"function","function":{"name":"final_answer","description":"This tool is used to provide the final answer to the user.","parameters":{"type":"object","required":["response","needs_escalation","follow_up_required","sentiment"],"properties":{"follow_up_required":{"type":"boolean","description":""},"needs_escalation":{"type":"boolean","description":""},"response":{"type":"string","description":""},"sentiment":{"type":"string","description":"Sentiment of the client"}}}}}
When you need to use a tool, format the request in this way:
<tool>
{"name": "tool_name", "parameters": {"param1": "value1", "param2": "value2"}}
</tool><end_of_turn>
<start_of_turn>model
Understood.
<end_of_turn><start_of_turn>user
Hello, I have a problem with my order. I need help to solve it. The order number is 1234567890<end_of_turn>
<start_of_turn>model[NO TOOLS CALLED]
<end_of_turn>
<start_of_turn>user
<tool_response>
Order not found.
</tool_response><end_of_turn>
<start_of_turn>model

Notice that the assistant turn where the tool call should appear shows [NO TOOLS CALLED], indicating that the {{- if .ToolCalls }} condition is taking the "else" path even though that message contains tool calls.

Additional Context:

  • I've tried several combinations of parameters for the tool_call object in the request; the result is always the same.
  • I've tested this with a minimal request and template to isolate the issue.
  • Debugging checks in the template (e.g., printing [ToolCalls not present] if .ToolCalls is empty) confirm that .ToolCalls is not being set as expected (a stand-alone check of the template logic is sketched after this list).
  • This issue occurs consistently, suggesting a bug in how Ollama processes assistant messages with tool calls in the template system.
  • I am using version 0.6.1 with the Gemma3 models.
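
To double-check the template logic itself, here is a minimal stand-alone sketch in plain Go using the standard text/template package (which Ollama's templates are built on). The types below are illustrative only and are not Ollama's internal template context; the point is that {{ if .ToolCalls }} takes the tool branch whenever the data handed to the template actually carries tool calls, so the missing piece appears to be the request-to-template conversion rather than the template:

package main

import (
	"os"
	"text/template"
)

// Illustrative types only; not Ollama's internal template context.
type toolCall struct {
	Name      string
	Arguments map[string]any
}

type turn struct {
	Role      string
	Content   string
	ToolCalls []toolCall
}

const tmpl = `{{ if .ToolCalls }}<tool>
{{- range .ToolCalls }}
{"name": "{{ .Name }}", "parameters": {{ .Arguments }}}
{{- end }}
</tool>
{{ else }}[ToolCalls not present]
{{ end }}`

func main() {
	t := template.Must(template.New("turn").Parse(tmpl))

	// With tool calls present, the <tool> branch renders.
	t.Execute(os.Stdout, turn{
		Role:      "assistant",
		ToolCalls: []toolCall{{Name: "get_order", Arguments: map[string]any{"number": "1234567890"}}},
	})

	// Without tool calls, the else branch renders, which is what the
	// parsed prompt above shows even though tool_calls were sent.
	t.Execute(os.Stdout, turn{Role: "assistant", Content: ""})
}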

Relevant log output


OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.6.1

GiteaMirror added the bug label 2026-04-29 01:24:24 -05:00

@rick-github commented on GitHub (Mar 17, 2025):

Seems to be an issue with /v1/chat/completions; it works as expected with /api/chat:

<start_of_turn>model
Understood.
<end_of_turn><start_of_turn>user
Hello, I have a problem with my order. I need help to solve it. The order number is 1234567890<end_of_turn>
<start_of_turn>model
<tool>
{"name": "get_order", "parameters": {"number":"1234567890"}}
</tool><end_of_turn>
<start_of_turn>user
<tool_response>
Order not found.
</tool_response><end_of_turn>
<start_of_turn>model

@ParthSareen commented on GitHub (Mar 17, 2025):

Hey @alejomongua, sorry you ran into this. Couple quick comments while I dig into the issue:

  1. Gemma was not trained to use tools, so aside from the possibility of a bug there is also a good chance that the model just does not output a tool call.
  2. Maybe try out using the regular /api/chat endpoint and see if that resolves the issue.

Will dig in and report back otherwise!


@alejomongua commented on GitHub (Mar 17, 2025):

Thank you @ParthSareen. As @rick-github said, with /api/chat the behavior is as expected, but I had to change the arguments from a string to a map like this:

"arguments": {
    "number": "1234567890"
}

Instead of this

"arguments": "{\"number\":\"1234567890\"}"

But this is incompatible with the OpenAI API.
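
To make the difference concrete, here is a small Go sketch of the two assistant-message shapes being compared; the field layout follows the request bodies quoted in this issue and is not taken from Ollama's internal types:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// /v1/chat/completions (OpenAI-compatible): arguments is a JSON-encoded string.
	openaiStyle := map[string]any{
		"role":    "assistant",
		"content": "",
		"tool_calls": []any{map[string]any{
			"id":   "call_4yswu3m4",
			"type": "function",
			"function": map[string]any{
				"name":      "get_order",
				"arguments": `{"number":"1234567890"}`,
			},
		}},
	}

	// /api/chat (native): arguments is a nested JSON object.
	nativeStyle := map[string]any{
		"role": "assistant",
		"tool_calls": []any{map[string]any{
			"function": map[string]any{
				"name":      "get_order",
				"arguments": map[string]any{"number": "1234567890"},
			},
		}},
	}

	for _, m := range []map[string]any{openaiStyle, nativeStyle} {
		b, _ := json.MarshalIndent(m, "", "  ")
		fmt.Println(string(b))
	}
}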

As for Gemma3's ability to use tools, that is not a problem: even with this bug I've been able to make Gemma3 handle some simple agentic tasks, but the fact that the template is not rendered correctly just bugs me.


@ParthSareen commented on GitHub (Mar 17, 2025):

Okay this is really helpful - thanks @alejomongua I'll dig into it :D


@alejomongua commented on GitHub (Mar 17, 2025):

I found the problem and made a pull request

https://github.com/ollama/ollama/pull/9834

I built it and now it behaves as expected.


@alejomongua commented on GitHub (Mar 17, 2025):

After working on the problem, I reached the following conclusions:

In the OpenAI documentation, the examples send either content or tool_calls:

https://platform.openai.com/docs/api-reference/fine-tuning/chat-input

Ollama is programmed in a way that, if there is content, it does not check for tool_calls.

But I've been using the PydanticAI framework, which sends both content (an empty string) and tool calls, and that is where the problem occurs.

In my pull request I added a condition so it also works when a content string is present; that way it works as expected for agents created using PydanticAI.
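
As a rough, hypothetical illustration of the condition being described (made-up types, not the actual code in the pull request): tool calls should be carried over whenever they are present, even if the message also carries a content string:

package main

import "fmt"

// Illustrative types only; not Ollama's real structs.
type toolCall struct {
	Name      string
	Arguments map[string]any
}

type chatMessage struct {
	Role      string
	Content   string
	ToolCalls []toolCall
}

// convert sketches the corrected behaviour: tool calls are copied over
// whenever they are present, even if the message also has a content
// string. The bug class described above amounts to only handling tool
// calls when content is absent, so an empty-but-present content string
// (as sent by PydanticAI) caused them to be dropped.
func convert(in chatMessage) chatMessage {
	out := chatMessage{Role: in.Role, Content: in.Content}
	if len(in.ToolCalls) > 0 {
		out.ToolCalls = in.ToolCalls
	}
	return out
}

func main() {
	msg := chatMessage{
		Role:    "assistant",
		Content: "", // empty string alongside tool_calls
		ToolCalls: []toolCall{{
			Name:      "get_order",
			Arguments: map[string]any{"number": "1234567890"},
		}},
	}
	fmt.Printf("%+v\n", convert(msg))
}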


@RedAch2000 commented on GitHub (Sep 17, 2025):

Hi @alejomongua, I have a dumb question: if you are using Ubuntu, can you tell us how you enabled debug mode for Ollama?


@rick-github commented on GitHub (Sep 17, 2025):

Set OLLAMA_DEBUG=2 in the server environment (see https://github.com/ollama/ollama/blob/main/docs/faq.md#setting-environment-variables-on-linux).


@RedAch2000 commented on GitHub (Sep 18, 2025):

If you have any idea, @rick-github, could you please tell me what the template of the Qwen3 model should look like in the Modelfile? Is it similar to Gemma3, or is it something different?


@rick-github commented on GitHub (Sep 18, 2025):

@redaachouhad It seems like your question is not related to this issue. Please open a new issue and add details about what you are trying to achieve.

Reference: github-starred/ollama#52923