[GH-ISSUE #6708] Support tool/tool call ids when multiple tool calls are requested. #4224

Open
opened 2026-04-12 15:09:34 -05:00 by GiteaMirror · 9 comments
Owner

Originally created by @ggozad on GitHub (Sep 9, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6708

What is the issue?

When multiple tools are provided, Ollama will often respond with multiple `tool_calls` to be made. In that case, I am guessing we are expected to answer with as many `{'role': 'tool', 'content': '...'}` messages.
How does one then specify which of these messages corresponds to which tool call? I *think* OpenAI attaches an id to each tool call, which the responses are supposed to reference.
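
For comparison, the OpenAI chat API pairs results with calls explicitly: each entry in `tool_calls` carries an `id`, and each `role: "tool"` message echoes it back as `tool_call_id`. A minimal sketch of that shape (the ids and results here are made up):

```javascript
// Assistant turn as returned by an OpenAI-style API: two calls, each with an id.
const assistantMessage = {
  role: "assistant",
  content: "",
  tool_calls: [
    { id: "call_1", type: "function", function: { name: "search", arguments: '{"q":"a"}' } },
    { id: "call_2", type: "function", function: { name: "search", arguments: '{"q":"b"}' } },
  ],
};

// Each tool result echoes the id back, so the order of the replies no longer matters.
const toolMessages = assistantMessage.tool_calls.map((call) => ({
  role: "tool",
  tool_call_id: call.id, // the field this issue asks Ollama's native API to provide
  content: `result for ${JSON.parse(call.function.arguments).q}`,
}));
```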

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.3.9

GiteaMirror added the bug and api labels 2026-04-12 15:09:34 -05:00
Author
Owner

@YonTracks commented on GitHub (Sep 13, 2024):

What you name the tool, plus its description, seems to work for me. When `tools[]` is in the `api/chat` request, the model uses the name and description to decide which tool to use (so good names and descriptions are key, and hence there is a practical limit on how many tools). It then provides the tool and the arguments to use: the model tries to find/generate the correct arguments from the tool structure, the prompt, and the context. Hoping somebody more clever than me can explain this without causing security issues, lol. Good luck.

Author
Owner

@ggozad commented on GitHub (Sep 13, 2024):

@YonTracks I am afraid this does not cut it. For instance, it is possible (and desirable in some cases) for the same tool to be called several times with different arguments (for instance, if the tool performs searches), or for the responses from several calls to be difficult to distinguish, for instance if they are numeric values.
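
Without ids, the only general fallback is positional pairing: emit the tool messages in exactly the order the calls were issued and match by index. A sketch of why that is fragile, using a hypothetical `runTool` executor and the numeric-result case described above:

```javascript
// Hypothetical executor: the same tool, different arguments, bare numeric results.
const runTool = (name, args) => (args.city === "Oslo" ? 4 : 27);

const toolCalls = [
  { function: { name: "get_temperature", arguments: { city: "Oslo" } } },
  { function: { name: "get_temperature", arguments: { city: "Athens" } } },
];

// Positional pairing: result i corresponds to call i. Nothing in the tool
// messages themselves records this link ("4" vs "27" alone says nothing),
// which is exactly the ambiguity an id field would remove.
const toolMessages = toolCalls.map((call) => ({
  role: "tool",
  content: String(runTool(call.function.name, call.function.arguments)),
}));
```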

Author
Owner

@YonTracks commented on GitHub (Sep 13, 2024):

yep, agree, an id would be awesome. E.g., a tools array:

```
[
  {
    type: "function",
    id: 1,
    functionName: "describeImage",
    function: {
      name: "describe_image",
      description:
        "Processes an image and returns a description of its content",
      parameters: {
        type: "object",
        properties: {
          base64Image: {
            type: "string",
            description: "The base64 encoded image data",
          },
        },
        required: ["base64Image"],
      },
    },
  },
  {
    type: "function",
    id: 2,
    functionName: "describeImage",
    function: {
      name: "describe_image",
      description:
        "Processes an image and returns a description of its content",
      parameters: {
        type: "object",
        properties: {
          imageUrl: {
            type: "string",
            description: "image url",
          },
        },
        required: ["imageUrl"],
      },
    },
  },
]
```

Assistant response:

```
{
    "role": "assistant",
    "content": "",
    "tool_calls": [
        {
            "function": {
                "id": 1,
                "name": "describe_image",
                "arguments": {
                    "base64Image": "iVBORw0KGgoAAAANSUhEUg...jhlhds"
                }
            }
        }
    ]
}
```

Assistant response:

```
{
    "role": "assistant",
    "content": "",
    "tool_calls": [
        {
            "function": {
                "id": 2,
                "name": "describe_image",
                "arguments": {
                    "imageUrl": "example.png"
                }
            }
        }
    ]
}
```

cheers.
Author
Owner

@YonTracks commented on GitHub (Sep 13, 2024):

@ggozad I tried to implement this, and kind of did, but the model is struggling to understand what the id is for. Please share a refined use case? Examples and more info needed, cheers. I will keep going, but I don't want to waste my time (never wasted anyway). There are a few ways folks use tools: are you using the official Ollama way, or the OpenAI struct style? Sorry if I don't make sense.
If you need OpenAI support (the non-`tools[]` style), I guess it is coming when they sort it.

Sorry if I am pinging multiple times due to edits. Again, my understanding of this: the official Ollama way is that the model decides based on a good description and name. E.g., if the image is a URL, name = describe_image_url; if base64, then name = describe_image_base64...

Author
Owner

@YonTracks commented on GitHub (Sep 13, 2024):

My tools array is a minimum of 9 tools so far, up to 18 different functions/tools in my testing. The official Ollama way is flawless for me, in a custom UI.

Author
Owner

@YonTracks commented on GitHub (Sep 13, 2024):

Next.js example of `executeTool`; works great for me.

```
import EventEmitter from "./eventEmitter";
import { getCurrentWeather, getFutureWeatherWeek } from "../utils/weather";
import { searchWeb } from "./searchWeb";
import { getRegularResponse } from "./getRegularResponse";
import { getFlightTimes } from "./getFligtTimes";
import { describeImage } from "./describeImage";
import { processContent } from "./processContent";

/**
 * Executes a tool based on the provided name and arguments.
 * @param toolName - Name of the tool to execute
 * @param args - Arguments to pass to the tool function
 * @param eventEmitter - Optional EventEmitter to track tool execution progress
 */
export const executeTool = async (
  toolName: string,
  args: any,
  eventEmitter: any = null // Default to null if no eventEmitter is passed
): Promise<string> => {
  const emitter = new EventEmitter(eventEmitter); // Initialize EventEmitter

  console.log(`Executing tool: ${toolName} with arguments:`, args);

  // Map of available functions for tools
  const availableFunctions: { [key: string]: (...args: any) => any } = {
    search_web: searchWeb,
    get_flight_times: getFlightTimes,
    get_current_weather: getCurrentWeather,
    get_future_weather_week: getFutureWeatherWeek,
    get_regular_response: getRegularResponse,
    describe_image: describeImage,
    process_content: processContent,
  };

  if (availableFunctions[toolName]) {
    try {
      await emitter.emit(`Executing tool: ${toolName}`, "in_progress");

      // Execute the tool function with provided arguments
      const result = await availableFunctions[toolName](
        ...Object.values(args),
      );
      console.log(`Result from ${toolName}:`, result);

      await emitter.emit(
        `Tool ${toolName} execution completed`,
        "completed",
        true
      );
      return result;
    } catch (error: any) {
      console.error(`Error executing tool ${toolName}:`, error);
      await emitter.emit(
        `Error executing tool: ${toolName}`,
        "error",
        true,
        error.message
      );
      return JSON.stringify({
        error: "An error occurred while executing the tool",
      });
    }
  } else {
    console.error(`Tool ${toolName} not found`);
    await emitter.emit(`Tool ${toolName} not found`, "error", true);
    return JSON.stringify({ error: "Tool not found" });
  }
};
```
Author
Owner

@chrisxiao commented on GitHub (Oct 29, 2024):

I experienced the same problem, and I think the `tool id` is needed when the LLM returns multiple function calls at once.
I use the Mistral model on Ollama, and the official Mistral API supports a `tool id`, but Ollama does not.

Author
Owner

@mkorpela commented on GitHub (Dec 5, 2024):

I would also expect an id to be there. If the model calls multiple tools, then mapping the tool results to the right tools is a bit of a puzzle.

Maybe the safest way for now is to split the assistant message requesting multiple `tool_calls` into multiple messages, each followed by its tool response. This then shows the model a linear message history of calls.
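
The splitting workaround above could be sketched as follows (hypothetical helper; it assumes Ollama-style message objects and a caller-supplied synchronous executor):

```javascript
// Split one assistant message carrying N tool_calls into N interleaved
// (assistant, tool) pairs, so the history itself encodes which result
// answers which call without needing an id field.
function linearize(assistantMessage, executeTool) {
  const history = [];
  for (const call of assistantMessage.tool_calls) {
    // One assistant message per call...
    history.push({ role: "assistant", content: "", tool_calls: [call] });
    // ...immediately followed by its result.
    history.push({ role: "tool", content: String(executeTool(call)) });
  }
  return history;
}
```

Each tool result now directly follows the single call it answers, at the cost of a longer (and slightly rewritten) history.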

Author
Owner

@vblagoje commented on GitHub (Jun 2, 2025):

It would be nice to have these!


Reference: github-starred/ollama#4224