[GH-ISSUE #6390] model xe/hermes3 doesn't correctly parse tool call tokens #29775

Open
opened 2026-04-22 08:59:16 -05:00 by GiteaMirror · 5 comments

Originally created by @Xe on GitHub (Aug 16, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6390

What is the issue?

I uploaded Hermes3 to Ollama here: https://ollama.com/xe/hermes3/blobs/afa6d473672a. The problem is that it isn't parsing the tool call syntax.

Hermes tool call syntax roughly looks like this:

```
<tool_call>
{"name": "code_interpreter", "arguments": {"code": "def reverse_list(lst):\n    return lst[::-1]\n\noriginal = [1, 2, 3, 4, 5]\nreversed_list = reverse_list(original)\nprint('Original:', original)\nprint('Reversed:', reversed_list)"}}
</tool_call>
```

The raw response I get from ollama when doing a tool call for a code_interpreter tool that runs Python code looks like this:

```json
{"model":"xe/hermes3","created_at":"2024-08-16T12:38:56.878759Z","message":{"role":"assistant","content":"\n\u003ctool_call\u003e\n{\"name\": \"code_interpreter\", \"arguments\": {\"code\": \"def reverse_list(lst):\\n    return lst[::-1]\\n\\noriginal = [1, 2, 3, 4, 5]\\nreversed_list = reverse_list(original)\\nprint('Original:', original)\\nprint('Reversed:', reversed_list)\"}}\n\u003c/tool_call\u003e"},"done_reason":"stop","done":true,"total_duration":2015048833,"load_duration":28571000,"prompt_eval_count":447,"prompt_eval_duration":490176000,"eval_count":77,"eval_duration":1492265000}
```
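As a sanity check (and stopgap), the unparsed call can be recovered client-side from the `content` field. A minimal sketch; `extract_tool_calls` is my own helper, assuming the content shape shown above:

```python
import json
import re

# Assistant content as returned by the API, with the tool call
# tokens left unparsed (shortened from the response above).
content = """
<tool_call>
{"name": "code_interpreter", "arguments": {"code": "print('hi')"}}
</tool_call>"""

def extract_tool_calls(text):
    """Pull the JSON bodies out of <tool_call>...</tool_call> blocks."""
    bodies = re.findall(r"<tool_call>\s*(.*?)\s*</tool_call>", text, re.DOTALL)
    return [json.loads(body) for body in bodies]

calls = extract_tool_calls(content)
print(calls[0]["name"])  # code_interpreter
```

This only demonstrates that the tags and JSON are intact in the content; the expectation is that Ollama itself parses them into `message.tool_calls`.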

It looks like it's just literally returning the tool call tokens without parsing them. Here's the operative bit of the template (https://ollama.com/xe/hermes3/blobs/afa6d473672a):

```
<|im_start|>{{ .Role }}
{{- if and (eq .Role "assistant") .ToolCalls }}
{{- range .ToolCalls }}
<tool_call>
{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
</tool_call>
{{- end }}
{{- else }}
{{ .Content }}
{{- end }}<|im_end|>
```

What am I doing wrong here?

OS

Linux, macOS

GPU

Nvidia, Apple

CPU

AMD, Apple

Ollama version

0.3.6

GiteaMirror added the bug label 2026-04-22 08:59:16 -05:00

@MaxJa4 commented on GitHub (Aug 16, 2024):

Does the original Llama-3.1 template work (https://ollama.com/finalend/hermes-3-llama-3.1/blobs/11ce4ee3e170)?


@Xe commented on GitHub (Aug 16, 2024):

No, hermes is trained on a different template.


@MaxJa4 commented on GitHub (Aug 16, 2024):

Oh right, my bad. I had a look at both templates and their respective docs/examples.
Since I haven't used tool calling here yet, I can't really help with the template, I'm afraid.


@mxyng commented on GitHub (Aug 16, 2024):

Thanks for creating the issue and the model. The issue with tool parsing is likely the addition of tool_call tags inside the range, since the parser expects the range contents to be JSON. Do you mind trying with the tags outside of the range?

```diff
+<tool_call>
 {{- range .ToolCalls }}
-<tool_call>
 {"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
-</tool_call>
 {{- end }}
+</tool_call>
```

The system prompt does mention tags around each tool call, i.e. inside the range, but this way seems to work as well.
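The constraint described here can be sketched with toy strings standing in for the rendered range output (this is an illustration of the "range contents must be bare JSON" point, not Ollama's actual parser):

```python
import json

tool_call = {"name": "code_interpreter", "arguments": {"code": "print('hi')"}}

# Tags rendered inside the range: each chunk wraps the JSON in
# <tool_call> tags, so a plain JSON parse of the chunk fails.
inside = "<tool_call>\n%s\n</tool_call>" % json.dumps(tool_call)

# Tags rendered outside the range: the chunk between the tags
# is bare JSON and parses cleanly.
outside = json.dumps(tool_call)

def parses_as_json(chunk):
    try:
        json.loads(chunk)
        return True
    except json.JSONDecodeError:
        return False

print(parses_as_json(inside))   # False
print(parses_as_json(outside))  # True
```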

Aside: your template is missing the final message's start tag, which inhibits a coherent response. Try adding <|im_start|>assistant after the last $hasToolResponses check:

```diff
 ...
 {{- if $hasToolResponses }}<|im_end|>
 {{- end }}
-
-{{- else }}
+<|im_start|>assistant
+{{ else }}
 {{- if .System }}
 <|im_start|>system
  ...
```

@MaxJa4 commented on GitHub (Aug 16, 2024):

Tested xe's template plus mxyng's changes with the tools example from the ollama-py library, and also tested it with the vanilla llama3.1 template. Worked in both cases :)

Reference: github-starred/ollama#29775