Are tool calls guaranteed to be streamed in a single chunk? #7695

Closed
opened 2025-11-12 14:15:00 -06:00 by GiteaMirror · 6 comments

Originally created by @anakin87 on GitHub (Aug 1, 2025).

Hello and thanks for the great work!

I maintain the integration of Ollama with an LLM framework ([ollama-haystack](https://github.com/deepset-ai/haystack-core-integrations/tree/main/integrations/ollama)).

Our integration uses the Ollama python client.

One aspect I'm having trouble understanding is this:
when using streaming + tool calls, is a single tool call guaranteed to be streamed in a single part/chunk? Or can it be split across multiple chunks?

Based on my experiments with several models, it seems that each tool call arrives in a single chunk.
(mistral-small3.1:24b, llama3.2:3b, llama3.1:8b, qwen3:0.6b, and qwen3:1.7b)

But I would appreciate confirmation from a maintainer. @ParthSareen

And if this is not guaranteed, would it be possible to share an example model or method that can reproduce tool calls being split across chunks?

Thanks again!

GiteaMirror added the question label 2025-11-12 14:15:00 -06:00

@anakin87 commented on GitHub (Aug 1, 2025):

To clarify my question

from ollama import chat

def add_two_numbers(a: int, b: int) -> int:
  """
  Add two numbers

  Args:
    a (int): The first number
    b (int): The second number

  Returns:
    int: The sum of the two numbers
  """

  # The cast is necessary as returned tool call arguments don't always conform exactly to the schema
  # E.g. this prevents "what is 30 + 12" from producing '3012' instead of 42
  return int(a) + int(b)

stream = chat(
    model='mistral-small3.1:24b',
    messages=[{'role': 'user', 'content': "compute 2 +3 using the add_two_numbers tool"}],
    stream=True,
    think=False,
    tools=[add_two_numbers])

for chunk in stream:
  print(chunk)

# 1st chunk
# model='mistral-small3.1:24b' created_at='2025-08-01T14:46:35.009377Z' done=False done_reason=None total_duration=None
# load_duration=None prompt_eval_count=None prompt_eval_duration=None eval_count=None eval_duration=None 
# message=Message(role='assistant', content='', thinking=None, images=None, 
# tool_calls=[ToolCall(function=Function(name='add_two_numbers', arguments={'a': 2, 'b': 3}))])

# 2nd chunk
# model='mistral-small3.1:24b' created_at='2025-08-01T14:46:35.138876Z' done=True done_reason='stop' total_duration=9068546333
# load_duration=3598080708 prompt_eval_count=434 prompt_eval_duration=4161556583 eval_count=21 eval_duration=1294140959 
# message=Message(role='assistant', content='', thinking=None, images=None, tool_calls=None)  

Here we see that the tool call is completely contained in the 1st chunk.
Is it possible that it will sometimes be split across more than one chunk?
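Regardless of the answer, an integration can accumulate defensively so it works either way. A minimal sketch over mocked chunk dicts (the shapes imitate the `/api/chat` stream; none of this is real Ollama output, and `collect_tool_calls` is a hypothetical helper, not part of the client):

```python
def collect_tool_calls(chunks):
    """Gather every tool call seen across a stream of chunk dicts."""
    calls = []
    for chunk in chunks:
        # tool_calls may be absent or None on plain-content chunks
        for call in chunk.get("message", {}).get("tool_calls") or []:
            calls.append(call["function"])
    return calls

# Mocked stream: one chunk carrying a complete tool call, then the final chunk
mock_stream = [
    {"message": {"role": "assistant", "content": "",
                 "tool_calls": [{"function": {"name": "add_two_numbers",
                                              "arguments": {"a": 2, "b": 3}}}]},
     "done": False},
    {"message": {"role": "assistant", "content": ""}, "done": True},
]

print(collect_tool_calls(mock_stream))
# [{'name': 'add_two_numbers', 'arguments': {'a': 2, 'b': 3}}]
```

If calls ever were split, the same loop would simply collect the pieces for later merging instead of missing them.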


@GreazySpoon commented on GitHub (Aug 1, 2025):

The client receives the stream in chunks, but for a tool call: from the first chunk, if it is a tool call, the client keeps buffering until the full call is complete, so it can be executed. If it is not a tool call, it keeps yielding the chunks.
Why is this? You only need the message stream; for the tool call you don't need the chunks in the first place, because it's not up to you to execute it. Say you got the chunks... so what? Would you execute the call programmatically yourself?

Another thing to understand: the tool call step is a substep. The model needs to execute it, observe the result, and then start yielding the final response. A tool call is not a final response.

I hope I explained it in detail.
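The buffer-then-yield behavior described above can be sketched in miniature. Purely illustrative: the fragment format and the `relay` helper are hypothetical, not Ollama's actual internals:

```python
import json

def relay(fragments):
    """Hold back partial tool-call JSON; pass plain text through immediately."""
    buffer = ""
    for frag in fragments:
        if frag["type"] == "tool_fragment":
            buffer += frag["text"]  # accumulate until the call is complete
            if frag.get("last"):
                yield {"tool_call": json.loads(buffer)}
                buffer = ""
        else:
            yield {"content": frag["text"]}  # normal text streams through as-is

# Mocked fragments: a tool call split in two, then a plain text piece
frags = [
    {"type": "tool_fragment", "text": '{"name": "get_ti'},
    {"type": "tool_fragment", "text": 'me", "arguments": {}}', "last": True},
    {"type": "text", "text": "done"},
]
print(list(relay(frags)))
# [{'tool_call': {'name': 'get_time', 'arguments': {}}}, {'content': 'done'}]
```

The consumer only ever sees complete tool calls, which matches the behavior reported in this thread.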


@anakin87 commented on GitHub (Aug 4, 2025):

I found this comment in the PR that introduced streaming + tool calls: https://github.com/ollama/ollama/pull/10415#issuecomment-2902249225

It seems to confirm that Ollama streams complete JSON tool calls (not string fragments). (I would still appreciate confirmation from a maintainer.)


@GreazySpoon commented on GitHub (Aug 4, 2025):

@anakin87
** JUMP TO THE END TO SEE AN EXAMPLE OF CHUNKS **

I wrote my own Ollama model class for Google ADK, to work around the bugs in ADK + LiteLLM when streaming; I had to handle the chunks myself in order to execute the tool.

Here is an example for you to study, along with the execution results:

import requests
import json
from datetime import datetime

# --- Configuration ---
OLLAMA_API_BASE = "http://10.10.61.147:11434"
OLLAMA_MODEL = "qwen2:7b"

# --- 1. Define Local Tools ---
# This is our local Python function that the LLM can call.
def get_current_time():
    """Gets the current date and time."""
    try:
        now = datetime.now()
        return {
            "status": "success",
            "current_time": now.strftime("%Y-%m-%d %H:%M:%S")
        }
    except Exception as e:
        return {"status": "error", "message": str(e)}

# A dictionary to map tool names to the actual functions.
AVAILABLE_TOOLS = {
    "get_current_time": get_current_time,
}

# --- 2. Create the Tool Declaration for the LLM ---
# This is the "menu" of tools we show to the model.
TOOL_DECLARATIONS = [
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Get the current date and time.",
            "parameters": {
                "type": "object",
                "properties": {},  # No parameters for this tool
            }
        }
    }
]

# --- Main Script Logic ---
def run_agent_turn():
    # Start with the user's prompt
    messages = [
        {"role": "user", "content": "What time is it right now?"}
    ]

    print("--- Turn 1: User asks for the time ---")
    print("User > What time is it right now?\n")

    # --- 3. First API Call: Ask the model to choose a tool ---
    payload = {
        "model": OLLAMA_MODEL,
        "messages": messages,
        "tools": TOOL_DECLARATIONS,
        "stream": True
    }

    # This dictionary will assemble the streamed tool call
    aggregated_tool = {}
    print("Ollama Chunks (Tool Call):")

    with requests.post(f"{OLLAMA_API_BASE}/api/chat", json=payload, stream=True) as response:
        response.raise_for_status()
        for chunk in response.iter_lines():
            if chunk:
                chunk_data = json.loads(chunk)
                print(chunk_data)  # This is where we print the raw chunks

                # Intelligent aggregation logic
                message_chunk = chunk_data.get("message", {})
                if tool_calls := message_chunk.get("tool_calls"):
                    tool_chunk = tool_calls[0]  # Assuming one tool call for simplicity
                    if not aggregated_tool:
                        aggregated_tool = tool_chunk
                    else:
                        if args_delta := tool_chunk.get("function", {}).get("arguments"):
                            aggregated_tool["function"]["arguments"] += args_delta

                # Stop when the full response is received
                if chunk_data.get("done"):
                    break

    print("\n--- Aggregated Tool Call ---")
    print(json.dumps(aggregated_tool, indent=2))

    # --- 4. Execute the chosen tool ---
    tool_name = aggregated_tool.get("function", {}).get("name")

    if tool_name in AVAILABLE_TOOLS:
        print(f"\n--- Executing Tool: {tool_name} ---")
        tool_function = AVAILABLE_TOOLS[tool_name]
        tool_result = tool_function()  # Execute the function
        print(f"Result: {tool_result}\n")

        # Add the original assistant message (the tool call) to history
        messages.append({"role": "assistant", "tool_calls": [aggregated_tool]})

        # Add the tool's result to history
        messages.append({
            "role": "tool",
            "tool_call_id": aggregated_tool.get("id"),  # Use the ID from the model
            "content": json.dumps(tool_result)
        })
    else:
        print("Error: Model tried to call a non-existent tool.")
        return

    # --- 5. Second API Call: Send the result back to get a final answer ---
    print("--- Turn 2: Sending tool result back to Ollama ---")

    final_payload = {
        "model": OLLAMA_MODEL,
        "messages": messages,
        "stream": True  # We can stream the final answer too
    }

    final_answer = ""
    print("Ollama Chunks (Final Answer):")
    with requests.post(f"{OLLAMA_API_BASE}/api/chat", json=final_payload, stream=True) as response:
        response.raise_for_status()
        for chunk in response.iter_lines():
            if chunk:
                chunk_data = json.loads(chunk)
                print(chunk_data)  # This is where we print the raw chunks

                if content := chunk_data.get("message", {}).get("content"):
                    final_answer += content

                if chunk_data.get("done"):
                    break

    print("\n--- Final Answer from AI ---")
    print(final_answer)

if __name__ == "__main__":
    run_agent_turn()

The results:

--- Turn 1: User asks for the time ---
User > What time is it right now?

Ollama Chunks (Tool Call):
{'model': 'qwen2:7b', 'created_at': '...', 'message': {'role': 'assistant', 'content': '', 'tool_calls': [{'index': 0, 'id': 'call_abc123', 'type': 'function', 'function': {'name': 'get_current_time', 'arguments': '{}'}}]}, 'done': False}
{'model': 'qwen2:7b', 'created_at': '...', 'message': {'role': 'assistant', 'content': ''}, 'done': True, 'total_duration': ..., 'load_duration': ..., 'prompt_eval_count': ..., 'prompt_eval_duration': ..., 'eval_count': ..., 'eval_duration': ...}

--- Aggregated Tool Call ---
{
  "index": 0,
  "id": "call_abc123",
  "type": "function",
  "function": {
    "name": "get_current_time",
    "arguments": "{}"
  }
}

--- Executing Tool: get_current_time ---
Result: {'status': 'success', 'current_time': '2025-08-04 10:30:00'}

--- Turn 2: Sending tool result back to Ollama ---
Ollama Chunks (Final Answer):
{'model': 'qwen2:7b', 'created_at': '...', 'message': {'role': 'assistant', 'content': 'The current'}, 'done': False}
{'model': 'qwen2:7b', 'created_at': '...', 'message': {'role': 'assistant', 'content': ' time is'}, 'done': False}
{'model': 'qwen2:7b', 'created_at': '...', 'message': {'role': 'assistant', 'content': ' 2025-08-04'}, 'done': False}
{'model': 'qwen2:7b', 'created_at': '...', 'message': {'role': 'assistant', 'content': ' 10:30:00'}, 'done': False}
{'model': 'qwen2:7b', 'created_at': '...', 'message': {'role': 'assistant', 'content': '.'}, 'done': False}
{'model': 'qwen2:7b', 'created_at': '...', 'message': {'role': 'assistant', 'content': ''}, 'done': True, 'total_duration': ..., 'load_duration': ..., 'prompt_eval_count': ..., 'prompt_eval_duration': ..., 'eval_count': ..., 'eval_duration': ...}

--- Final Answer from AI ---
The current time is 2025-08-04 10:30:00.

Study it and understand.


@ParthSareen commented on GitHub (Aug 4, 2025):

Hey @anakin87! Tool calls should be coming back fully parsed. We don't explicitly fail but you won't have to build the tool call up yourself :)
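Given that confirmation, a consumer can dispatch each chunk's tool calls as they arrive, with no assembly step. A minimal sketch over mocked chunk dicts (stand-ins for the stream, in the same shape as the raw `/api/chat` responses printed earlier in this thread):

```python
def add_two_numbers(a, b):
    # Cast defensively: argument values may not conform exactly to the schema
    return int(a) + int(b)

TOOLS = {"add_two_numbers": add_two_numbers}

def run_stream(chunks):
    """Execute every tool call found in a stream of chunk dicts."""
    results = []
    for chunk in chunks:
        for call in chunk.get("message", {}).get("tool_calls") or []:
            fn = call["function"]
            # Each call arrives complete: name + arguments are usable as-is
            results.append(TOOLS[fn["name"]](**fn["arguments"]))
    return results

# Mocked stream: one complete tool call, then the final "done" chunk
mock_stream = [
    {"message": {"tool_calls": [{"function": {"name": "add_two_numbers",
                                              "arguments": {"a": 2, "b": 3}}}]},
     "done": False},
    {"message": {"content": ""}, "done": True},
]

print(run_stream(mock_stream))  # [5]
```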


@anakin87 commented on GitHub (Aug 4, 2025):

Thx!

Reference: github-starred/ollama-ollama#7695