[GH-ISSUE #6127] llama3.1 always uses tool #65864

Closed
opened 2026-05-03 22:56:33 -05:00 by GiteaMirror · 22 comments

Originally created by @tomaszbk on GitHub (Aug 1, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6127

What is the issue?

No matter what I prompt, llama3.1 always replies with a tool call.

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.10.58

GiteaMirror added the bug label 2026-05-03 22:56:33 -05:00

@rick-github commented on GitHub (Aug 1, 2024):

An example of a prompt that replies with a tool call and server logs would help in debugging.

```
$ curl -s localhost:11434/api/generate -d '{"model":"llama3.1","prompt":"multiply 3 times 10", "stream":false}' | jq 'del(.context)'
{
  "model": "llama3.1",
  "created_at": "2024-08-01T22:08:24.784291922Z",
  "response": "The result of multiplying 3 by 10 is:\n\n30",
  "done": true,
  "done_reason": "stop",
  "total_duration": 226709571,
  "load_duration": 15261997,
  "prompt_eval_count": 16,
  "prompt_eval_duration": 20152000,
  "eval_count": 13,
  "eval_duration": 147840000
}
```

@tomaszbk commented on GitHub (Aug 1, 2024):

```python
from langchain_core.tools import tool
from langchain_ollama import ChatOllama

@tool
def add(x: int, y: int) -> int:
    """Adds 2 numbers and returns the result"""
    return x + y
tools = [add]
model = ChatOllama(
    model="llama3.1",
    temperature=0,
    prompt="Your name is botty"
).bind_tools(tools)
response = model.invoke("what is your name?")
print(f"response: {response.content}")
print(response.tool_calls)
```

output:
response:
[{'name': 'add', 'args': {'x': '0', 'y': '0'}, 'id': '62cf011c-7a66-481c-894a-0fa5a3c9e6a8', 'type': 'tool_call'}]


@rick-github commented on GitHub (Aug 1, 2024):

Tool calls have higher priority for this model. Don't bind tools if you don't want a tool call.


@tomaszbk commented on GitHub (Aug 2, 2024):

I mean, I could set up one LLM that decides whether the user prompt requires a tool to be called, and then pass the prompt to a model with the tools bound, or else pass it to a model without the tools bound. Maybe this is the way I'm supposed to do it? But it doesn't sound optimal.
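For what it's worth, a minimal sketch of that two-model routing idea, assuming LangChain's `ChatOllama` as in the example above (the router prompt wording and the YES/NO check are purely illustrative):

```python
from langchain_core.tools import tool
from langchain_ollama import ChatOllama

@tool
def add(x: int, y: int) -> int:
    """Adds 2 numbers and returns the result"""
    return x + y

# One plain model acts as the router/chat model, a second instance has the tools bound.
chat_model = ChatOllama(model="llama3.1", temperature=0)
tool_model = ChatOllama(model="llama3.1", temperature=0).bind_tools([add])

def answer(user_prompt: str):
    # Ask the plain model whether a tool is needed (illustrative wording).
    decision = chat_model.invoke(
        "Answer only YES or NO: does this request need an arithmetic tool?\n"
        f"Request: {user_prompt}"
    )
    if "YES" in decision.content.upper():
        return tool_model.invoke(user_prompt)  # may come back as tool_calls
    return chat_model.invoke(user_prompt)      # plain text reply

print(answer("what is your name?").content)
print(answer("what is 2 + 2?").tool_calls)
```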


@rick-github commented on GitHub (Aug 2, 2024):

The alternative is to modify the template and try to make the model decide whether it needs to use a tool or not. You can do this by getting the Modelfile (`ollama show --modelfile llama3.1 > Modelfile`), editing the TEMPLATE field, and creating a new model (`ollama create llama3.1-tool -f Modelfile`). For example, I changed the template:

```diff
@@ -19,9 +19,7 @@
 {{- if eq .Role "user" }}<|start_header_id|>user<|end_header_id|>
 {{- if and $.Tools $last }}
 
-Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.
-
-Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}. Do not use variables.
+Analyse the given prompt and decide whether or not it can be answered by a tool.  If it can, use the following functions to respond with a JSON for a function call with its proper arguments that best answers the given prompt.  Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}. Do not use variables.
 
 {{ $.Tools }}
 {{- end }}
```

and now your script responds:

```
response: The prompt "what is your name?" cannot be answered by a tool, as it requires personal information. However, I can respond with my name.

Since the given function list does not include a function that directly answers this question, I will use a simple string response instead of a JSON object.

My name is Botty.
[]
```

and if I change the prompt to `what is 2 + 2?`, it responds:

```
response:
[{'name': 'add', 'args': {'x': 2, 'y': 2}, 'id': 'bb51899f-3826-401a-bd79-11548c7bd0aa', 'type': 'tool_call'}]
```

If you take this approach you will need to craft the modified prompt to remove the "thinking" that it does and just return the answer.
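For reference, the end-to-end steps are (the edit itself is the diff above; the final `ollama run` line is just one way to try out the new model):

```console
$ ollama show --modelfile llama3.1 > Modelfile
$ # edit the TEMPLATE section of Modelfile as in the diff above
$ ollama create llama3.1-tool -f Modelfile
$ ollama run llama3.1-tool
```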


@tomaszbk commented on GitHub (Aug 2, 2024):

Sounds great, thanks for your help @rick-github! :)


@rick-github commented on GitHub (Aug 2, 2024):

BTW, your invocation of ChatOllama (on my system, at least) is not quite correct; it doesn't take a `prompt` parameter:

```python
from langchain_core.tools import tool
from langchain_ollama import ChatOllama

@tool
def add(x: int, y: int) -> int:
    """Adds 2 numbers and returns the result"""
    return x + y
tools = [add]
model = ChatOllama(
    model="llama3.1-tool",
    temperature=0,
).bind_tools(tools)
messages = [
  ( "system", "Your name is botty"),
  ( "user", "what is 2 + 2?"),
]
response = model.invoke(messages)
print(f"response: {response.content}")
print(response.tool_calls)
```

@SamuelBG13 commented on GitHub (Aug 13, 2024):

Hello! Thank you for your insights, the fix worked for me as well.

@rick-github I was wondering if it would be sensible to use this modified modelfile as the default? Because one of the big advantages of function calling is having a model decide when (and *if*) to use a tool, for a given query.

Of course, this will probably be handled more holistically once streaming tool calling is working (i.e. parsing tool calls from an arbitrary assistant message) and calling them live. My guess is that this feature will take a while?


@rick-github commented on GitHub (Aug 13, 2024):

The problem with allowing the model to decide whether to use the tool is that it opens up another path for hallucinations. Personally I prefer to have well defined roles for my LLM clients: if it's a tool user, bind tools to it, otherwise it's text generation only. No room for the text generator to take the phrase "stock up on apples" and return a tool call to buy 100 shares of AAPL.


@SamuelBG13 commented on GitHub (Aug 18, 2024):

> Personally I prefer to have well defined roles for my LLM clients: if it's a tool user, bind tools to it, otherwise it's text generation only. No room for the text generator to take the phrase "stock up on apples" and return a tool call to buy 100 shares of AAPL.

I understand, and I fully agree with having well defined roles (and this is actually backed [by a recent paper](https://arxiv.org/html/2406.00507v1)).

I am considering more scenarios with chain-of-thought or in-context-learning strategies, i.e. I want the model to consider the tools it has, process a few ideas or heuristics, and then finally select the tool. As an alternative, sure, one can chain two prompts, one for reasoning and one for structuring (see the sketch below). But then one needs to find some intermediate representation of the tools for the "reasoner" LLM (you can't pass the tools as tools), plus add the overhead of two prompts being processed.
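Roughly what I have in mind, as a sketch using the `ollama` Python client (the tool schema, the prompt wording, and using a JSON dump of the tools as the "intermediate representation" are all just placeholders):

```python
import json
import ollama

# Placeholder tool schema in the same format ollama.chat expects.
tools = [{
    "type": "function",
    "function": {
        "name": "add",
        "description": "Adds 2 numbers and returns the result",
        "parameters": {
            "type": "object",
            "properties": {
                "x": {"type": "integer"},
                "y": {"type": "integer"},
            },
            "required": ["x", "y"],
        },
    },
}]

user_prompt = "what is 2 + 2?"

# Pass 1: the "reasoner" only sees a textual rendering of the tools and thinks out loud.
tool_text = "\n".join(json.dumps(t["function"]) for t in tools)
reasoning = ollama.chat(model="llama3.1", messages=[
    {"role": "system", "content": f"You can use these functions:\n{tool_text}\n"
                                  "Briefly explain whether one of them applies to the user's request."},
    {"role": "user", "content": user_prompt},
])["message"]["content"]

# Pass 2: the "structurer" gets the real tools bound plus the reasoning as context.
final = ollama.chat(model="llama3.1", messages=[
    {"role": "user", "content": user_prompt},
    {"role": "assistant", "content": reasoning},
    {"role": "user", "content": "If one of your functions applies, call it now; otherwise answer directly."},
], tools=tools)

print(final["message"])
```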


@zhiftyDK commented on GitHub (Sep 18, 2024):

Another solution to this problem could be the following code. Instead of relying on the Ollama built-in tools function, you can customize the user prompt to get an appropriate tools response with arguments; this should work with any non-tool-trained model. Here is the example:

tools.json file:

```json
[
    {
        "name": "set_light_state",
        "description": "Set the current state and brightness of lights",
        "parameters": {
            "type": "object",
            "properties": {
                "brightness": {
                    "type": "integer",
                    "description": "The brightness to set the lights to (0-100)."
                },
                "state": {
                    "type": "string",
                    "description": "The state to set the lights to, either 'on' or 'off'. If state is on and brightness is not specified set brightness to 100.",
                    "enum": ["on", "off"]
                }
            },
            "required": ["state", "brightness"]
        }
    },
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and country code, e.g. San Francisco, CA"
                }
            },
            "required": ["location"]
        }
    }
]
```

tools.py file:

```py
import ollama
import json

with open("./models/tools.json") as f:
    tools = json.loads(f.read())

available_tools = "\n".join(f"Function {tool['name']} to {tool['description']}:\n{tool}" for tool in tools)

toolPrompt = f"""
Analyse the given prompt and decide whether or not it can be answered by any of the following functions that you have access to:
{available_tools}

If you choose to call a function ONLY respond in the JSON format:

{{"name": function name, "parameters": dictionary of argument name and its value}}

Do not use variables.

Reminder:
- If looking for real time information use relevant functions before falling back to brave_search
- Function calls MUST follow the specified format
- Required parameters MUST always be specified in the response
- Only call one function at a time
- Put the entire function call reply on one line
"""

print(toolPrompt)

# This is just a demo system message/prompt
system_message = """
You should always give reasonably short answers.
"""

response = ollama.chat(model="llama3.1:8b-instruct-q4_0", messages=[
    {"role": "system", "content": system_message},
    {"role": "user", "content": "What is your name?"},
    {"role": "user", "content": toolPrompt},
])

try:
    # Try to load the returned JSON if the model decided to use a tool/function
    tool = json.loads(response["message"]["content"])
    print(tool)
except json.JSONDecodeError:
    # Print the response if the model decided not to use a tool/function
    print(response["message"]["content"])

# Here you can do anything you want with the tool variable, in the format
# {"name": "function name", "parameters": "function parameters/arguments"}
```

@pannous commented on GitHub (Sep 27, 2024):

It finally worked after adding

```
NEVER make up your own parameter values as tool function arguments, like 'city=London'!
NEVER use tool functions if not asked, instead revert to normal chat!
```

to the modelfile

Back to prompt engineering like it's 2023


@acastry commented on GitHub (Sep 27, 2024):

> NEVER make up your own parameter values as tool function arguments, like 'city=London'!

Can you show a complete model file, please?


@pannous commented on GitHub (Sep 28, 2024):

@acastry same as [rick-github](https://github.com/rick-github)'s, just with these two lines added below.


@Master-Pr0grammer commented on GitHub (Oct 26, 2024):

I think this issue needs to be fixed, but in the meantime you can also add a 'respond_to_chat' function if you don't want the added delay from the double-prompting method of deciding whether or not a function call is needed.

However, since it's using a function call to respond, the response might be hindered or limited because the model isn't meant to respond that way; still, it's good for quick responses.
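A minimal sketch of that workaround, assuming LangChain as in the earlier examples (the `respond_to_chat` name, signature, and docstring are just illustrations of the idea):

```python
from langchain_core.tools import tool
from langchain_ollama import ChatOllama

@tool
def add(x: int, y: int) -> int:
    """Adds 2 numbers and returns the result"""
    return x + y

@tool
def respond_to_chat(message: str) -> str:
    """Use this when no other tool applies; 'message' is the reply to show the user."""
    return message

model = ChatOllama(model="llama3.1", temperature=0).bind_tools([add, respond_to_chat])

response = model.invoke("what is your name?")
for call in response.tool_calls:
    if call["name"] == "respond_to_chat":
        print(call["args"]["message"])  # the "chat" reply, routed through a tool call
    else:
        print(call)                     # a real tool call to dispatch
```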


@shjala commented on GitHub (Dec 31, 2024):

I managed to reduce the unnecessary tool calls by checking the user input and only making the tools available if needed. Surely not perfect:

```go
func toolNeeded(text string) bool {
	text = strings.ToLower(text)
	// add more like "search", "remind(er)", etc.
	return strings.Contains(text, "weather")
}
[...]
	availableTools := []openai.ChatCompletionToolParam{}
	// Check if the user input requires a tool
	if toolNeeded(userInput) {
		availableTools = append(availableTools, weatherTool)
	}

	params := openai.ChatCompletionNewParams{
		Messages: openai.F([]openai.ChatCompletionMessageParamUnion{
			openai.UserMessage(userInput),
		}),
		ParallelToolCalls: openai.F(true),
		Model:             openai.F("llama3.2:3b-tool"),
		Tools:             openai.F(availableTools),
	}
[...]
```

@lemassykoi commented on GitHub (Mar 22, 2025):

```python
import requests
import json
import colorama
YELLOW = colorama.Fore.YELLOW
RESET = colorama.Style.RESET_ALL

model = "llama3.2:latest"
user_query = 'Hello there!'

url = "http://localhost:11434/api/chat"

messages = [
    {
        "role": "user",
        "content": user_query
    }
]

## WITHOUT TOOLS
data = {
    "model": model,
    "messages": messages,
    "stream": False,
    "keep_alive": "1m",
    "options": {'temperature': 0.0, 'seed': 1234567890}
}
headers = {"Content-Type": "application/json"}
payload = json.dumps(data).encode("utf-8")
response = requests.post(url, headers=headers, data=payload, stream=False)

if response.status_code == 200:
    print(response.json())
    print(YELLOW + response.json()['message']['content'] + RESET)
else:
    print(f"Error: {response.status_code}")
    print(response.text)

print('\n')

## WITH TOOLS
tools = [
    {
        'type': 'function',
        'function': {
            'name': 'get_current_weather',
            'description': 'Get the current weather for a city',
            'parameters': {
                'type': 'object',
                'properties': {
                    'city': {
                        'type': 'string',
                        'description': 'The name of the city',
                    },
                },
                'required': ['city'],
            },
        },
    },
]

data = {
    "model": model,
    "messages": messages,
    "stream": False,
    "keep_alive": "1m",
    "tools": tools,
    "options": {'temperature': 0.0, 'seed': 1234567890}
}
headers = {"Content-Type": "application/json"}
payload = json.dumps(data).encode("utf-8")
response = requests.post(url, headers=headers, data=payload, stream=False)

if response.status_code == 200:
    print(response.json())
    print(YELLOW + response.json()['message']['content'] + RESET)
else:
    print(f"Error: {response.status_code}")
    print(response.text)
```

case 1 (without tools):

```
{'model': 'llama3.2:latest', 'created_at': '2025-03-22T21:06:28.040493426Z', 'message': {'role': 'assistant', 'content': "It's nice to meet you. Is there something I can help you with or would you like to chat?"}, 'done_reason': 'stop', 'done': True, 'total_duration': 1438499087, 'load_duration': 1270149241, 'prompt_eval_count': 28, 'prompt_eval_duration': 22325287, 'eval_count': 23, 'eval_duration': 144218483}
It's nice to meet you. Is there something I can help you with or would you like to chat?
```

case 2 (with tools):

```
{'model': 'llama3.2:latest', 'created_at': '2025-03-22T21:06:28.190441052Z', 'message': {'role': 'assistant', 'content': '', 'tool_calls': [{'function': {'name': 'get_current_weather', 'arguments': {'city': 'New York'}}}]}, 'done_reason': 'stop', 'done': True, 'total_duration': 147643531, 'load_duration': 13826815, 'prompt_eval_count': 162, 'prompt_eval_duration': 3391734, 'eval_count': 19, 'eval_duration': 129858423}
None
```

This is really problematic


@rick-github commented on GitHub (Mar 22, 2025):

llama3.2 doesn't have great discrimination. Try something from the qwen2.5 family.

```python
#!/usr/bin/env python3

import requests
import json
import colorama
import argparse
YELLOW = colorama.Fore.YELLOW
RESET = colorama.Style.RESET_ALL

parser = argparse.ArgumentParser()
parser.add_argument("model", nargs='?', default="llama3.2:latest")
args = parser.parse_args()

user_query = 'Hello there!'

url = "http://localhost:11434/api/chat"

def get_response(user_query, tools):
  data = {
      "model": args.model,
      "messages": [{
        "role": "user",
        "content": user_query
      }],
      "stream": False,
      "keep_alive": "1m",
      "tools": tools,
      "options": {'temperature': 0.0, 'seed': 1234567890}
  }
  headers = {"Content-Type": "application/json"}
  payload = json.dumps(data).encode("utf-8")
  response = requests.post(url, headers=headers, data=payload, stream=False)

  if response.status_code == 200:
      print(response.json())
      print(YELLOW + response.json()['message']['content'] + RESET)
  else:
      print(f"Error: {response.status_code}")
      print(response.text)

  print('\n')

## WITH TOOLS
tools = [
    {
        'type': 'function',
        'function': {
            'name': 'get_current_weather',
            'description': 'Get the current weather for a city',
            'parameters': {
                'type': 'object',
                'properties': {
                    'city': {
                        'type': 'string',
                        'description': 'The name of the city',
                    },
                },
                'required': ['city'],
            },
        },
    },
]

get_response(user_query, None)
get_response(user_query, tools)
get_response("What's the weather like in Paris?", tools)
```

```console
$ ./6127.py qwen2.5:0.5b
{'model': 'qwen2.5:0.5b', 'created_at': '2025-03-22T21:46:40.953201319Z', 'message': {'role': 'assistant', 'content': 'Hello! How can I assist you today?'}, 'done_reason': 'stop', 'done': True, 'total_duration': 299040394, 'load_duration': 251506763, 'prompt_eval_count': 32, 'prompt_eval_duration': 7939767, 'eval_count': 10, 'eval_duration': 35222937}
Hello! How can I assist you today?


{'model': 'qwen2.5:0.5b', 'created_at': '2025-03-22T21:46:41.235725336Z', 'message': {'role': 'assistant', 'content': 'Hello! How can I assist you today?'}, 'done_reason': 'stop', 'done': True, 'total_duration': 280486460, 'load_duration': 230412881, 'prompt_eval_count': 163, 'prompt_eval_duration': 6175926, 'eval_count': 10, 'eval_duration': 39423668}
Hello! How can I assist you today?


{'model': 'qwen2.5:0.5b', 'created_at': '2025-03-22T21:46:41.541227609Z', 'message': {'role': 'assistant', 'content': '', 'tool_calls': [{'function': {'name': 'get_current_weather', 'arguments': {'city': 'Paris'}}}]}, 'done_reason': 'stop', 'done': True, 'total_duration': 304057078, 'load_duration': 221519148, 'prompt_eval_count': 168, 'prompt_eval_duration': 5164776, 'eval_count': 21, 'eval_duration': 72863163}
```

@lemassykoi commented on GitHub (Mar 22, 2025):

> llama3.2 doesn't have great discrimination. Try something from the qwen2.5 family.

You are right, it's ok for `qwen2.5:14b-instruct`.


@adar2378 commented on GitHub (Mar 28, 2025):

Is there no proper solution? :/ Overriding the system prompt feels kind of like a hack.


@zhiftyDK commented on GitHub (Mar 29, 2025):

> Is there no proper solution? :/ Overriding the system prompt feels kind of like a hack.

The only proper solution is using a better model, which is why I switched to the OpenAI API.


@JoHavel commented on GitHub (Jul 6, 2025):

Or you can create a tool that does nothing. It seems to work (if the model doesn't need to do anything, it runs this tool and then answers; otherwise, it uses the right tool):

```
@tool
def do_nothing():
    """ Does nothing """
```

The downside is that there are two calls to the model if no tool is used.
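If it helps, that second round trip might look roughly like this (a sketch on top of LangChain as used earlier in the thread; the dispatch logic and the extra `add` tool are illustrative):

```python
from langchain_core.messages import ToolMessage
from langchain_core.tools import tool
from langchain_ollama import ChatOllama

@tool
def add(x: int, y: int) -> int:
    """Adds 2 numbers and returns the result"""
    return x + y

@tool
def do_nothing():
    """ Does nothing """

model = ChatOllama(model="llama3.1", temperature=0).bind_tools([add, do_nothing])

messages = [("user", "what is your name?")]
first = model.invoke(messages)

if any(call["name"] == "do_nothing" for call in first.tool_calls):
    # No real tool was needed: feed an empty tool result back and ask again.
    messages = messages + [first, ToolMessage(content="", tool_call_id=first.tool_calls[0]["id"])]
    print(model.invoke(messages).content)
else:
    print(first.tool_calls)  # dispatch the real tool calls here
```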

Reference: github-starred/ollama#65864