[GH-ISSUE #6213] Different behavior for "tool" and "function" roles #29642

Closed
opened 2026-04-22 08:41:47 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @matheusfvesco on GitHub (Aug 7, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6213

What is the issue?

Ollama models only reply to/summarize the result of function calls if the message is created with the role "tool". If the role "function" is used, the model simply returns an empty string or says it can't do what I asked, and keeps doing so no matter the chat history.

Example code:

from ollama import Client
import openai

client = Client(host="http://localhost:11434/")
openai.base_url = "http://localhost:11434/v1/"
openai.api_key = 'ollama'

tool_messages = [
    {"role": "user", "content": "What is the temperature today?"},
    {"role": "tool", "content": "today is 33 Celsius at your location"}
]

function_messages = [
    {"role": "user", "content": "What is the temperature today?"},
    {"role": "function", "content": "today is 33 Celsius at your location"}
]

tools = [{
    'type': 'function',
    'function': {
        'name': 'get_current_weather',
        'description': 'Get the current weather for a city',
        'parameters': {
            'type': 'object',
            'properties': {
                'city': {
                    'type': 'string',
                    'description': 'The name of the city',
                },
            },
            'required': ['city'],
        },
    },
}]

print("Ollama:")

print(" - function:")
response = client.chat(model='llama3.1', messages=function_messages, tools=tools)
print(response['message']['content'])

print(" - tool:")
response = client.chat(model='llama3.1', messages=tool_messages, tools=tools)
print(response['message']['content'])

print("OpenAI:")

print(" - function:")
response = openai.chat.completions.create(
	model="llama3.1",
	messages=function_messages,
	tools=tools,
)
print(response.choices[0].message.content)

print(" - tool:")
response = openai.chat.completions.create(
	model="llama3.1",
	messages=tool_messages,
	tools=tools,
)
print(response.choices[0].message.content)

Expected output:

Ollama:
 - function:

 - tool:
The current temperature is 33°C.
OpenAI:
 - function:

 - tool:
Based on the tool's response, the temperature today is:

The temperature today is 33°C.

This breaks some integrations. For example, Haystack only implements the role "function" in its abstraction.
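Until affected libraries catch up, one possible client-side workaround (a sketch, not part of the original report) is to normalize the deprecated "function" role to "tool" before sending the messages to Ollama:

```python
# Sketch of a workaround: rewrite the deprecated "function" role to "tool"
# before sending the messages. Message shapes follow the example above.

def normalize_roles(messages):
    """Return a copy of the messages with role "function" mapped to "tool"."""
    return [
        {**msg, "role": "tool"} if msg.get("role") == "function" else msg
        for msg in messages
    ]

function_messages = [
    {"role": "user", "content": "What is the temperature today?"},
    {"role": "function", "content": "today is 33 Celsius at your location"},
]

normalized = normalize_roles(function_messages)
print(normalized[1]["role"])  # tool
```

The original list is left untouched, so the same messages can still be sent to endpoints that accept the "function" role.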

Changing the model to "mistral", I got this result:

Ollama:
 - function:
 I don't have the ability to check or provide real-time weather information. You can look up the current temperature in your area using a weather app or website.
 - tool:
 Today's temperature is 33 degrees Celsius.
OpenAI:
 - function:
 I cannot tell you the current temperature as I am a text-based AI and do not have real-time data capabilities. Please check an online weather service to find out today's temperature for your location.
 - tool:
33 degrees Celsius in Fahrenheit: Approximately 91.4 degrees Fahrenheit. Enjoy your day!

Using "mistral-nemo":

Ollama:
 - function:
 Which city would you like to check?
 - tool:
 Would you like to know the weather forecast for this week?
OpenAI:
 - function:
 In which city? Could youplease specify the city so I could give you a precise answer ?
 - tool:
 Can you help with the weather ?

OS

Linux, Docker

GPU

Nvidia

CPU

Intel

Ollama version

0.3.3

GiteaMirror added the bug label 2026-04-22 08:41:47 -05:00

@royjhan commented on GitHub (Aug 7, 2024):

[Screenshot: OpenAI API docs, 2024-08-07 at 10:16 AM]

https://platform.openai.com/docs/api-reference/chat/create

Per the OpenAI spec, the function role is deprecated, and its behavior is undefined on our part.
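For context, under the current OpenAI chat format a tool result is sent as a role "tool" message linked to a prior assistant tool call via tool_call_id. A minimal sketch of that shape (the call id and argument values here are made up for illustration):

```python
# Illustrative message sequence under the current OpenAI chat format:
# an assistant message carrying tool_calls, followed by a "tool" message
# that references the call via tool_call_id. IDs/values are hypothetical.

messages = [
    {"role": "user", "content": "What is the temperature today?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_0",            # hypothetical call id
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "arguments": '{"city": "London"}',
            },
        }],
    },
    {
        "role": "tool",
        "tool_call_id": "call_0",      # links the result to the call above
        "content": "today is 33 Celsius at your location",
    },
]
```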


@matheusfvesco commented on GitHub (Aug 8, 2024):

Thank you!


Reference: github-starred/ollama#29642