[GH-ISSUE #6505] glm4 model function call support #50606

Open
opened 2026-04-28 16:29:32 -05:00 by GiteaMirror · 13 comments
Owner

Originally created by @EntropyYue on GitHub (Aug 25, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6505

The glm4 model supports function calls, but the templates in the model repository do not.

GiteaMirror added the model, feature request labels 2026-04-28 16:29:32 -05:00

@EntropyYue commented on GitHub (Aug 26, 2024):

FROM glm4:9b

SYSTEM """
你是一个名为 GLM-4 的人工智能助手。你是基于智谱AI训练的语言模型 GLM-4 模型开发的,你的任务是针对用户的问题和要求提供适当的答复和支持。
"""

TEMPLATE """
[gMASK]<sop>
{{ if .Messages }}
{{- if or .System .Tools }}<|system|>
{{- if .System }}

{{ .System }}
{{- end }}
{{- if .Tools }}

你是一个名为 GLM-4 的人工智能助手。你是基于智谱AI训练的语言模型 GLM-4 模型开发的,你的任务是针对用户的问题和要求提供适当的答复和支持。当您收到函数调用请求时,使用输出格式化原始使用问题的答案。

{{- end }}
{{- end }}<|eot_id|>
{{- $lastUserIdx := -1 }}

{{- range $i, $_ := .Messages }}
{{- with eq .Role "user" }}
{{- $lastUserIdx = $i }}{{ end }}
{{- end }}

{{ $lastUserIdx }}

{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}<|user|>
{{- if and $.Tools (eq $i $lastUserIdx) }}

给定以下函数,请为函数调用返回一个JSON,其中包含最能回答给定提示的正确参数。

以以下格式响应 {"name": function name, "parameters": dictionary of argument name and its value}。 不要使用变量。

{{ $.Tools }}
{{- end }}

{{ .Content }}<|eot_id|>{{ if $last }}<|assistant|>

{{ end }}
{{- else if eq .Role "assistant" }}<|assistant|>
{{- if .ToolCalls }}

{{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "parameters": {{ .Function.Arguments }}}{{ end }}
{{- else }}

{{ .Content }}{{ if not $last }}<|eot_id|>{{ end }}
{{- end }}
{{- else if eq .Role "tool" }}Tools:

{{ .Content }}<|eot_id|>{{ if $last }}<|assistant|>

{{ end }}
{{- end }}
{{- end }}
{{ if .System }}<|system|>
{{ .System }}{{ end }}{{ if .Prompt }}<|user|>
{{ .Prompt }}{{ end }}<|assistant|>
{{ .Response }}
{{ end }}
"""

PARAMETER stop "<|system|>"
PARAMETER stop "<|user|>"
PARAMETER stop "<|assistant|>"
PARAMETER stop "<|eot_id|>"

@EntropyYue commented on GitHub (Aug 26, 2024):

Sorry, this type of code is too complex for me; can someone provide support?


@rick-github commented on GitHub (Aug 27, 2024):

This is a Modelfile that adds tool support to glm4. Save it in a file called Modelfile, then create a new model:

ollama create glm4-tool:9b -f Modelfile

Then call it with tools:

$ curl -s localhost:11434/v1/chat/completions -d '{"model": "glm4-tool:9b","tools":[{"type":"function","function": {}}], "messages": [{"role":"user","content":"weather in zurich"}], "stream": false}' | jq
{
  "id": "chatcmpl-595",
  "object": "chat.completion",
  "created": 1724750444,
  "model": "glm4-tool:9b",
  "system_fingerprint": "fp_ollama",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "",
        "tool_calls": [
          {
            "id": "call_2h4gimgk",
            "type": "function",
            "function": {
              "name": "get_weather",
              "arguments": "{\"location\":\"Zurich\"}"
            }
          }
        ]
      },
      "finish_reason": "tool_calls"
    }
  ],
  "usage": {
    "prompt_tokens": 247,
    "completion_tokens": 18,
    "total_tokens": 265
  }
}
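The `tool_calls` entries in a response like the one above can be unpacked in a few lines of Python. This is a hedged sketch against the OpenAI-compatible response shape shown in the curl output; the response dict below is copied (and trimmed) from that output rather than re-fetched, and `extract_tool_calls` is my own helper name.

```python
import json

def extract_tool_calls(response: dict) -> list[tuple[str, dict]]:
    """Pull (function name, parsed arguments) pairs out of an
    OpenAI-style chat.completion response."""
    calls = []
    for choice in response.get("choices", []):
        for call in choice["message"].get("tool_calls") or []:
            fn = call["function"]
            # "arguments" is a JSON-encoded string in this API shape
            calls.append((fn["name"], json.loads(fn["arguments"])))
    return calls

# Response trimmed to the fields this sketch uses, copied from the
# curl output above.
response = {
    "choices": [{
        "index": 0,
        "message": {
            "role": "assistant",
            "content": "",
            "tool_calls": [{
                "id": "call_2h4gimgk",
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "arguments": "{\"location\":\"Zurich\"}",
                },
            }],
        },
        "finish_reason": "tool_calls",
    }]
}

print(extract_tool_calls(response))  # [('get_weather', {'location': 'Zurich'})]
```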


@EntropyYue commented on GitHub (Aug 27, 2024):

This is a Modelfile that adds tool support to glm4. Save it in a file called Modelfile, then create a new model:

ollama create glm4-tool:9b -f Modelfile

Then call it with tools:

$ curl -s localhost:11434/v1/chat/completions -d '{"model": "glm4-tool:9b","tools":[{"type":"function","function": {}}], "messages": [{"role":"user","content":"weather in zurich"}], "stream": false}' | jq
{
  "id": "chatcmpl-595",
  "object": "chat.completion",
  "created": 1724750444,
  "model": "glm4-tool:9b",
  "system_fingerprint": "fp_ollama",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "",
        "tool_calls": [
          {
            "id": "call_2h4gimgk",
            "type": "function",
            "function": {
              "name": "get_weather",
              "arguments": "{\"location\":\"Zurich\"}"
            }
          }
        ]
      },
      "finish_reason": "tool_calls"
    }
  ],
  "usage": {
    "prompt_tokens": 247,
    "completion_tokens": 18,
    "total_tokens": 265
  }
}

I am aware of the functionality of this modelfile, but I think it is not perfect. I need someone to make some improvements to it.


@rick-github commented on GitHub (Aug 27, 2024):

What's wrong with it?


@EntropyYue commented on GitHub (Aug 28, 2024):

What's wrong with it?

When using this prompt template, the model's perplexity increases significantly


@EntropyYue commented on GitHub (Sep 8, 2024):

FROM glm4:9b

SYSTEM """
你是一个名为 GLM-4 的人工智能助手。你是基于智谱AI训练的语言模型 GLM-4 模型开发的,你的任务是针对用户的问题和要求提供适当的答复和支持。
"""

TEMPLATE """
[gMASK]<sop>
{{- if or .System .Tools }}<|system|>
{{- if .System }}

{{ .System }}
{{- end }}
{{- if .Tools }}

你是能够调用函数的大语言模型,你的任务是针对用户的问题和要求提供适当的答复和支持。不要对可用的函数做出假设,当您收到函数调用请求时,使用输出格式化原始使用问题的答案。

{{- end }}
{{- end }}
{{- $lastUserIdx := -1 }}

{{- range $i, $_ := .Messages }}
{{- with eq .Role "user" }}
{{- $lastUserIdx = $i }}{{ end }}
{{- end }}

{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}<|user|>
{{- if and $.Tools (eq $i $lastUserIdx) }}

你可以调用以下函数,请为函数调用返回一个JSON,其中包含最能回答给定提示的正确参数。
作为一个能够调用函数的大语言模型,你可以完成文本生成以及调用这些函数的任务

如果任务需要调用函数,请正确地按照以下格式在 <tool_call></tool_call> xml标签中做出响应,不要使用变量,如果不需要使用函数,返回一个空的字典:
<tool_call>
{"name": 函数名称, "parameters": 参数键值对字典}
</tool_call>

{{ range $.Tools }}
{{- . }}
{{ end }}
{{- end }}

{{ .Content }}{{ if $last }}<|assistant|>

{{ end }}
{{- else if eq .Role "assistant" }}<|assistant|>
{{- if .ToolCalls }}
<tool_call>
{{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "parameters": {{ .Function.Arguments }}}{{ end }}
</tool_call>
{{- else }}

{{ .Content }}{{ if not $last }}{{ end }}
{{- end }}
{{- else if eq .Role "tool" }}ipython

{{ .Content }}{{ if $last }}<|assistant|>

{{ end }}
{{- end }}
{{- end }}
"""

PARAMETER stop "<|system|>"
PARAMETER stop "<|user|>"
PARAMETER stop "<|assistant|>"
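The revised template above wraps each call in `<tool_call>…</tool_call>` tags, so a client has to strip those tags before parsing the JSON payload. A minimal sketch of that extraction step; the regex and function name are my own, not part of the thread:

```python
import json
import re

# Matches the payload the template emits between <tool_call> tags.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def parse_tool_calls(text: str) -> list[dict]:
    """Extract JSON payloads from <tool_call> blocks; returns []
    when the reply is plain text with no tool call."""
    return [json.loads(m) for m in TOOL_CALL_RE.findall(text)]

reply = ('<tool_call>\n'
         '{"name": "get_weather", "parameters": {"location": "Zurich"}}\n'
         '</tool_call>')
print(parse_tool_calls(reply))
# [{'name': 'get_weather', 'parameters': {'location': 'Zurich'}}]
```

The non-greedy `.*?` still captures nested braces here because the regex engine backtracks until `</tool_call>` matches.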

@EntropyYue commented on GitHub (Sep 8, 2024):

Use the above template to enable function calling for the glm4 model.


@rick-github commented on GitHub (Sep 8, 2024):

@xiaopa233 Could you edit the post and add code markdown tags (```) around it so that special characters are not mis-interpreted?


@EntropyYue commented on GitHub (Sep 8, 2024):

The editing is complete; sorry, I forgot about it.


@LuckLittleBoy commented on GitHub (Oct 12, 2024):

Use the above template to enable function calling for the glm4 model.

The templates above lack function call response capability.


@rick-github commented on GitHub (Oct 14, 2024):

It seems to work fine:

$ ./tool-test.py --model glm4-tool:9b --prompt 'what is the time?'
calling get_datetime({})
The current time is 12:29.
$ ./tool-test.py --model glm4-tool:9b --prompt 'what is 1.34 ^ 10?'
calling power({'x': 1.34, 'y': 10})
The result of \(1.34^{10}\) is approximately 18.666.
$ ./tool-test.py --model glm4-tool:9b --prompt 'what operating system are you running on?'
calling get_operating_system({})
The operating system I am running on is Linux-5.15.0-94-generic-x86_64-with-glibc2.35.
$ ./tool-test.py --model glm4-tool:9b --prompt "summarize the contents of the file ./1.txt"
calling read_file({'filename': './1.txt'})
The contents of the file './1.txt' include the following lines: 'hi', 'ok', 'hello', and 'hi there'.
$ ./tool-test.py --model glm4-tool:9b --prompt "when is sunrise in Auckland tomorrow?"
calling get_datetime({})
calling search_web({'query': 'Auckland sunrise time October 15, 2024'})
Sunrise in Auckland tomorrow is expected to be around 06:38.

It does fail occasionally:

$ ./tool-test.py --model glm4-tool:9b --prompt "when is sunrise in Auckland tomorrow?"
I'm sorry, but I cannot directly retrieve real-time weather data or calculate sunrise times. However, I can help you find this information if you provide the name of a tool that can be used to get the weather details for a specific location
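The `tool-test.py` script itself is not shown in the thread, but the dispatch loop its output implies can be sketched as follows. Everything here is my own guesswork at its shape: the registry, the `get_datetime` stand-in, and the message handling are illustrative, not the actual script.

```python
# Hedged sketch of a tool-dispatch step; tool names and message
# shapes are illustrative, not taken from tool-test.py.
import json

def get_datetime() -> str:
    """Illustrative stand-in for a real tool implementation."""
    return "2024-10-14 12:29"

REGISTRY = {"get_datetime": get_datetime}

def dispatch(messages: list[dict], reply: dict) -> list[dict]:
    """Run every tool call in an assistant reply and append the
    results as 'tool' messages, ready for the follow-up request."""
    messages.append(reply)
    for call in reply.get("tool_calls") or []:
        fn = call["function"]
        args = fn["arguments"]
        if isinstance(args, str):          # API may return a JSON string
            args = json.loads(args)
        print(f"calling {fn['name']}({args})")
        result = REGISTRY[fn["name"]](**args)
        messages.append({"role": "tool", "content": str(result)})
    return messages

reply = {
    "role": "assistant",
    "content": "",
    "tool_calls": [{"function": {"name": "get_datetime", "arguments": "{}"}}],
}
messages = dispatch([{"role": "user", "content": "what is the time?"}], reply)
print(messages[-1])  # {'role': 'tool', 'content': '2024-10-14 12:29'}
```

In a real loop the extended `messages` list would be sent back to the model so it can phrase the final answer, which matches the two-step behaviour visible in the transcript above.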

@EntropyYue commented on GitHub (Oct 14, 2024):

Occasional failures are normal; for more reliable formatted output you need a more stable model, such as qwen2.5.

Reference: github-starred/ollama#50606