[GH-ISSUE #8152] LangChain - ChatOllama model calls a tool on every input #30964

Closed
opened 2026-04-22 11:00:21 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @Arslan-Mehmood1 on GitHub (Dec 18, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/8152

What is the issue?

llama3.2:1b

llama3.2:3b

llama3.2:1b-instruct-fp16

llama3.1:8b

I've tested the models above, and all of them call tools even for a simple query like 'hi'.

The behavior is the same whether I bind:

tools_list

openai_format_tools_list

Need help.

Result:


```
$ python 1_tool_calling_test.py
content='' additional_kwargs={} response_metadata={'model': 'llama3.1:8b', 'created_at': '2024-12-18T09:17:37.90843589Z', 'done': True, 'done_reason': 'stop', 'total_duration': 72841245771, 'load_duration': 13778033737, 'prompt_eval_count': 194, 'prompt_eval_duration': 50723000000, 'eval_count': 22, 'eval_duration': 8337000000, 'message': Message(role='assistant', content='', images=None, tool_calls=[ToolCall(function=Function(name='tavily_search_results_json', arguments={'query': 'current events'}))])} id='run-8931e574-9297-4ce9-93f1-54d00ce8c413-0' tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current events'}, 'id': '82754a8a-619b-4a1e-85d3-cb767d4c6a9f', 'type': 'tool_call'}] usage_metadata={'input_tokens': 194, 'output_tokens': 22, 'total_tokens': 216}

[{'name': 'tavily_search_results_json', 'args': {'query': 'current events'}, 'id': '82754a8a-619b-4a1e-85d3-cb767d4c6a9f', 'type': 'tool_call'}]
```
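Incidentally, the `*_duration` fields in `response_metadata` appear to be nanoseconds (an assumption based on their magnitudes): the run above spent roughly 73 s in total, about 51 s of it on prompt evaluation. A quick conversion:

```python
# The *_duration values from the output above; Ollama appears to report
# these in nanoseconds (assumption from their magnitudes).
meta = {
    "total_duration": 72841245771,
    "load_duration": 13778033737,
    "prompt_eval_duration": 50723000000,
    "eval_duration": 8337000000,
}
for name, ns in meta.items():
    print(f"{name}: {ns / 1e9:.1f} s")
```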

Code for testing:

```python
from typing import List
from dotenv import load_dotenv, find_dotenv
load_dotenv(find_dotenv())
from langchain_core.tools import tool
from langchain_ollama import ChatOllama
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.utils.function_calling import convert_to_openai_tool

# @tool
# def web_search_tool(web_query: str) -> str:
#     """
#     Use this tool only when you need web search to find an answer for the user.
#     Args:
#         web_query (str): the query for web search
#     """
#     search = TavilySearchResults()
#     results = search.invoke(web_query)  # was `search.invoke(query)`: `query` is undefined
#     return results

web_search_tool = TavilySearchResults()

tools_list = [web_search_tool]
openai_format_tools_list = [convert_to_openai_tool(f) for f in tools_list]

llm = ChatOllama(model="llama3.1:8b", temperature=0).bind_tools(tools_list)

result = llm.invoke("Hi, how are you?")

print(result, "\n\n")
print(result.tool_calls)
```
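A common workaround while the upstream behavior is unresolved is to treat the bound model's reply as a router: execute tools only when `result.tool_calls` is non-empty, and otherwise return `result.content` directly. A minimal sketch of that check (the `FakeAIMessage` stand-in is hypothetical, used here so the logic is self-contained; in a real run you would inspect the `AIMessage` returned by `llm.invoke(...)`):

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for the message object returned by llm.invoke(...);
# the real object exposes the same .content and .tool_calls attributes.
@dataclass
class FakeAIMessage:
    content: str = ""
    tool_calls: list = field(default_factory=list)

def route(message) -> str:
    """Return 'tools' if the model requested tool calls, else 'answer'."""
    return "tools" if message.tool_calls else "answer"

# Greeting answered directly -> no tool execution needed.
print(route(FakeAIMessage(content="Hi! How can I help?")))
# Model emitted a tool call -> dispatch to the tool node.
print(route(FakeAIMessage(tool_calls=[{
    "name": "tavily_search_results_json",
    "args": {"query": "current events"},
    "id": "x", "type": "tool_call",
}])))
```

This does not stop the model from emitting spurious tool calls, but it keeps the application from blindly executing them on every input.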

OS

Linux

GPU

No response

CPU

Intel

Ollama version

0.5.1

GiteaMirror added the bug label 2026-04-22 11:00:22 -05:00
Author
Owner

@rick-github commented on GitHub (Dec 18, 2024):

#6127


Reference: github-starred/ollama#30964