[GH-ISSUE #3165] Support "tool" role in messages #1949

Closed
opened 2026-04-12 12:05:44 -05:00 by GiteaMirror · 6 comments

Originally created by @aaronrussell on GitHub (Mar 15, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3165

What are you trying to do?

The new Hermes 2 Pro model recommends that results from function calling come back in messages with the role "tool", e.g.:

```
<|im_start|>tool
<tool_response>...result here...</tool_response>
<|im_end|>
```

The chat API doesn't support messages with the role "tool"; it treats them as a bad request.
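
For illustration, here is a minimal sketch (in Python, using the `requests` library) of the kind of request that was rejected at the time; the model name is a placeholder:

```
import requests

# Hypothetical follow-up turn after a function call: the tool's result is sent
# back to the model with role "tool". When this issue was opened, /api/chat
# rejected this role as a bad request.
response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hermes-2-pro",  # placeholder model name
        "messages": [
            {"role": "user", "content": "What's the weather in Paris?"},
            {"role": "tool", "content": "22 degrees celsius"},
        ],
        "stream": False,
    },
)
print(response.status_code, response.json())
```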

How should we solve this?

Accepting the "tool" role in the messages API would make life easier for those using Hermes 2 Pro, and for any future models likely to be based on the same open datasets.

This has implications for how templating works in Ollama - it may even require a total rethink of templates.
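
To make the templating point concrete, here is a rough sketch of the rendering a template would have to produce for the Hermes 2 Pro prompt format quoted above; the role-to-block mapping is an assumption based on that format, not Ollama's actual template code:

```
# Sketch: render chat messages into Hermes-style ChatML, wrapping tool results
# in <tool_response> tags inside a "tool" block (assumed from the format above).
def render_chatml(messages):
    parts = []
    for msg in messages:
        role, content = msg["role"], msg["content"]
        if role == "tool":
            content = f"<tool_response>{content}</tool_response>"
        parts.append(f"<|im_start|>{role}\n{content}\n<|im_end|>")
    return "\n".join(parts)


print(render_chatml([
    {"role": "user", "content": "What's the weather in Paris?"},
    {"role": "tool", "content": "22"},
]))
```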

What is the impact of not solving this?

Not solving it means that users have to create their own templates and use Ollama's `raw` option, which negates some of the joy of using Ollama in the first place.

I truly believe function calling and building local agents are areas where Ollama could really excel, if the experience of doing so is made totally painless.

Anything else?

Repo of Hermes Pro function calling with prompting/templating instructions:
https://github.com/NousResearch/Hermes-Function-Calling

@napa3um commented on GitHub (Mar 28, 2024):

JSON-mode support for `<|im_start|>assistant<tool_call>...` is also needed.
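
(For illustration, a hedged sketch of what client code had to do without such support: pull the tool-call JSON out of the assistant's `<tool_call>` block by hand. The tag format follows the Hermes repo linked in the issue; the helper below is hypothetical.)

```
import json
import re

# Hypothetical helper: extract tool-call JSON objects from a Hermes-style
# assistant reply of the form <tool_call>{...}</tool_call>.
def extract_tool_calls(assistant_text):
    pattern = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)
    return [json.loads(match) for match in pattern.findall(assistant_text)]


reply = '<tool_call>{"name": "get_current_weather", "arguments": {"location": "Paris"}}</tool_call>'
print(extract_tool_calls(reply))
```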

@lalanikarim commented on GitHub (Jun 12, 2024):

This is also needed for a cleaner implementation within LangChain.
Currently [ChatOllama](https://python.langchain.com/v0.2/docs/integrations/chat/ollama/) [doesn't support handling `ToolMessage`](https://github.com/langchain-ai/langchain/blob/3d6e8547f973bb85fa316937c37ae317779fc309/libs/community/langchain_community/chat_models/ollama.py#L122-L129), which is of type `tool`.
In order to work around this, within [OllamaFunctions](https://python.langchain.com/v0.2/docs/integrations/chat/ollama_functions/) (a tool-calling-compatible subclass of ChatOllama) we pass [`ToolMessage` as type `assistant`](https://github.com/langchain-ai/langchain/pull/22339#discussion_r1636772192).
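
(A minimal sketch of the workaround described above, assuming `langchain_core`'s message types; the conversion function is hypothetical, not the actual OllamaFunctions code.)

```
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

# Hypothetical conversion: because the Ollama chat API did not accept a "tool"
# role, tool results had to be downgraded to an assistant-role message.
def to_ollama_dict(message):
    if isinstance(message, ToolMessage):
        return {"role": "assistant", "content": message.content}  # the workaround
    if isinstance(message, HumanMessage):
        return {"role": "user", "content": message.content}
    if isinstance(message, AIMessage):
        return {"role": "assistant", "content": message.content}
    raise ValueError(f"Unsupported message type: {type(message)}")


print(to_ollama_dict(ToolMessage(content="22", tool_call_id="call_1")))
```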

@ThomasVitale commented on GitHub (Jul 23, 2024):

From what I can see, this issue has been fixed in https://github.com/ollama/ollama/pull/5284.

Using the latest Ollama version, sending an HTTP request to `http://localhost:11434/api/chat` with the following body completes successfully (using the `mistral` model, which supports function calling).

```
{
  "model": "mistral",
  "messages": [
      {
        "role": "system",
        "content": "You are a knowledgable assistant. You can answer questions and perform tasks."
      },
      {
        "role": "user",
        "content": "What's the weather like today in Paris?"
      },
      {
        "role": "assistant",
        "tool_calls": [
          {
            "type": "function",
            "function": {
              "name": "get_current_weather",
              "arguments": {
                "location": "Paris, France",
                "format": "celsius"
              }
            }
          }
        ]
      },
      {
        "role": "tool",
        "content": "22"
      },
      {
        "role": "assistant",
        "content": "The current temperature in Paris, France is 22 degrees Celsius."
      },
      {
        "role": "user",
        "content": "What's the weather like today in San Francisco and Toronto?"
      }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "The city and state, e.g. San Francisco, CA"
            },
            "format": {
              "type": "string",
              "enum": [
                "celsius",
                "fahrenheit"
              ],
              "description": "The temperature unit to use. Infer this from the users location."
            }
          },
          "required": [
            "location",
            "format"
          ]
        }
      }
    }
  ],
  "stream": false,
  "options": {
    "temperature": 0
  }
}
```
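
As a follow-up, a sketch of driving that request from Python and inspecting the reply; it assumes the body above is saved to a local `chat_request.json` file (hypothetical filename) and that a non-streamed response carries the assistant's reply, and any new tool calls, under `message`:

```
import json
import requests

# Load the request body shown above from a local file (hypothetical filename).
with open("chat_request.json") as f:
    payload = json.load(f)

resp = requests.post("http://localhost:11434/api/chat", json=payload)
resp.raise_for_status()
message = resp.json()["message"]

# If the model decides to call get_current_weather again (for San Francisco and
# Toronto), the calls appear under "tool_calls"; otherwise the answer is plain
# text in "content".
for call in message.get("tool_calls", []):
    fn = call["function"]
    print("model wants to call:", fn["name"], "with", fn["arguments"])
if message.get("content"):
    print("assistant:", message["content"])
```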

@tzolov commented on GitHub (Jul 24, 2024):

Yep, as of 0.8.2 Ollama supports the `TOOL` message role as well.
Unfortunately, it doesn't support streaming function calling yet.

@jmorganca commented on GitHub (Jul 26, 2024):

Hi there! Tools are now supported in Ollama. See https://ollama.com/blog/tool-support

@HARISH-CS-01 commented on GitHub (Jun 22, 2025):

Does the chat function in Ollama support passing a message with the tool role?

Reference: github-starred/ollama#1949