[GH-ISSUE #12226] Ollama granite3.1-moe:1b model cannot detect function call as a separate field #33895

Closed
opened 2026-04-22 17:04:55 -05:00 by GiteaMirror · 1 comment

Originally created by @Wenwen-Olliegi-Li on GitHub (Sep 9, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12226

What is the issue?

The Ollama granite3.1-moe:1b model cannot detect the function call as a separate field. I got a response like this:

<tool_call>[{"arguments": {"question": "Perform S-shape", "direction": "left"}, "name": "headTurn"}, {"arguments": {"type": "object", "required": null, "properties": {}}}, {"arguments": {"type": "function", "name": "moveForwardBackward", "parameters": {"direction": "forward", "speed": 1}}, {"arguments": {"type": "function", "name": "steerWheels", "parameters": {"direction": "straight", "angle": 0}}}]

headTurn, moveForwardBackward, and steerWheels are all function names that I implemented.
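Until the model emits tool calls in the dedicated field, they can be recovered client-side. A minimal sketch, assuming a non-streaming `/api/chat` response message; `extract_tool_calls` and the fallback regex are my own and not part of Ollama:

```python
import json
import re


def extract_tool_calls(message):
    """Return a list of {"name", "arguments"} dicts from an Ollama chat message.

    Prefers the dedicated "tool_calls" field; otherwise falls back to
    parsing a "<tool_call>" prefix out of the plain content (a
    hypothetical workaround for models that inline the calls).
    """
    if message.get("tool_calls"):
        return [tc["function"] for tc in message["tool_calls"]]

    content = message.get("content", "")
    match = re.search(r"<tool_call>\s*(\[.*\])", content, re.S)
    if match:
        try:
            return json.loads(match.group(1))
        except json.JSONDecodeError:
            # Small models often emit malformed JSON (as in the output
            # above, which has unbalanced braces); give up cleanly.
            return []
    return []
```

Note that the output quoted above is itself malformed JSON, so even this fallback returns nothing for it; the parser only helps once the template fix below gets the model emitting a well-formed list.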

Relevant log output


OS

Windows 11

GPU

             Name: 'NVIDIA TITAN Xp'
            Index: 1 (of 1)
ComputeCapability: '6.1'
      DriverModel: 'WDDM'
      TotalMemory: 12884770816 (12.88 GB)
  AvailableMemory: 11755585536 (11.76 GB)
  DeviceAvailable: true
   DeviceSelected: true

CPU

11th Gen Intel(R)

Ollama version

0.11.10

GiteaMirror added the bug label 2026-04-22 17:04:55 -05:00

@rick-github commented on GitHub (Sep 9, 2025):

granite3.1-moe:1b is an old and small model, and small models aren't great tool users. You can improve the quality of its tool use by modifying the template:

--- Modelfile.orig	2025-09-09 15:26:32.887239441 +0200
+++ Modelfile	2025-09-09 18:04:50.674012717 +0200
@@ -8,7 +8,7 @@
 {{- (index .Messages 0).Content}}<|end_of_text|>
 {{- else }}
 {{ .System }}
-{{- if .Tools }} You are a helpful AI assistant with access to the following tools. When a tool is required to answer the user's query, respond with <|tool_call|> followed by a JSON list of tools used. If a tool does not exist in the provided list of tools, notify the user that you do not have the ability to fulfill the request.
+{{- if .Tools }} You are a helpful AI assistant with access to the following tools. When a tool is required to answer the user's query, respond with <tool_call> followed by a JSON list of tools used.  If a tool does not exist in the provided list of tools, notify the user that you do not have the ability to fulfill the request.
 {{- end }}
 {{- end }}
 {{- if .Tools }}
@@ -29,8 +29,9 @@
 {{- else }}{{ .Role }}
 {{- end }}<|end_of_role|>
 {{- if .Content }}{{ .Content }}
-{{- else if .ToolCalls }}<|tool_call|>
+{{- else if .ToolCalls }}<tool_call>[
 {{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
+]
 {{- end }}
 {{- end }}
 {{- if eq (len (slice $.Messages $index)) 1 }}

But the model is still poor at tool calling. A qwen3 model might be better.
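To apply a template patch like the one above, the usual Ollama workflow is to dump the Modelfile, edit it, and build a local model from it (the name `granite3.1-moe-fixed` below is just an example):

```shell
# Dump the current Modelfile (including the TEMPLATE block) to disk.
ollama show granite3.1-moe:1b --modelfile > Modelfile

# Edit the TEMPLATE section as in the diff above, then build a new
# local model from the modified Modelfile.
ollama create granite3.1-moe-fixed -f Modelfile

# Run the patched model.
ollama run granite3.1-moe-fixed
```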

Reference: github-starred/ollama#33895