[GH-ISSUE #6790] openai tools streaming support coming soon? #30040

Closed
opened 2026-04-22 09:27:32 -05:00 by GiteaMirror · 13 comments
Owner

Originally created by @LuckLittleBoy on GitHub (Sep 13, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6790

In which version is the OpenAI tools streaming support feature planned?
When will it be supported?
![联想截图_20240912094356](https://github.com/user-attachments/assets/46b20823-333c-4c2f-bc7d-01731b6c1f86)

GiteaMirror added the feature request label 2026-04-22 09:27:32 -05:00
Author
Owner

@YonTracks commented on GitHub (Sep 13, 2024):

What I learned: if `tools[]` is present, use `stream: false`, otherwise `stream: true` — and it does stream tools, and it is awesome. Good luck.
That could be added in the backend, but for now the actual `tool_calls` request to the API needs `stream: false`; after that you can stream the results of the function/tool, both when there is no `tools[]` and after the `tools[]` round (hint hint). Both `api/generate` and `api/chat` seem to work; I handle both a ReadableStream and JSON on the front end. Hope this makes sense. Good luck.
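The toggle described above can be sketched as follows. This is a minimal sketch, not the Ollama API: the helper names (`chooseStream`, `buildChatBody`, `ToolDef`) are illustrative assumptions.

```typescript
// Assumed shape of a tool definition, loosely following the examples below.
interface ToolDef {
  type: "function";
  function: { name: string; [key: string]: unknown };
}

// Workaround: the tool_calls request itself must run with stream: false;
// requests without tools can keep streaming.
function chooseStream(tools?: ToolDef[]): boolean {
  return !(tools && tools.length > 0);
}

// Build the JSON body for a chat request, flipping stream based on tools.
function buildChatBody(
  model: string,
  messages: Array<{ role: string; content: string }>,
  tools?: ToolDef[]
): string {
  return JSON.stringify({
    model,
    messages,
    ...(tools && tools.length > 0 ? { tools } : {}),
    stream: chooseStream(tools),
  });
}
```

With a `tools` array the body carries `stream: false`; without one it carries `stream: true`.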

<!-- gh-comment-id:2348994734 -->

@RipperTs commented on GitHub (Sep 19, 2024):

I was looking forward to this feature as well — why was this issue closed?

<!-- gh-comment-id:2359703381 -->

@LuckLittleBoy commented on GitHub (Sep 19, 2024):

> I was looking forward to this feature as well, but why is it closed for this issue?

I think the project members have already recognized this issue, so I closed it.

<!-- gh-comment-id:2359722691 -->

@YonTracks commented on GitHub (Sep 19, 2024):

Possibly because open issue #6708 covers the same thing, among others? OpenAI support is needed, and Ollama will be working on it.
Meanwhile the official approach — the Ollama tools way, whatever you call it — seems good too, so there is a lot of testing for stability, backwards compatibility, etc. Good luck. P.S. check out Ollama `tool_calls`: `stream: false` only while `tools[toolsList]` is set, then remove `tools[]` and set `stream: true`. Good luck.

<!-- gh-comment-id:2359728450 -->

@RipperTs commented on GitHub (Sep 19, 2024):

I'm just trying to migrate seamlessly from OpenAI, but as it stands the structure of the streaming output isn't the same as OpenAI's, which is why I'm asking.

I understand how Ollama's tools are called now, but the OpenAI interface specification is compatible with most applications, so forgive me if I'm lazy and don't want to restructure my code for this.

<!-- gh-comment-id:2359738069 -->

@YonTracks commented on GitHub (Sep 19, 2024):

Yep, Ollama will sort it — they are awesome! Keeping backwards compatibility, no breaking changes. Epic team, I must say!

<!-- gh-comment-id:2359757529 -->

@YonTracks commented on GitHub (Sep 19, 2024):

Help me understand — here's the OpenAI version using the Ollama way:

```ts
export const tools: Tool[] = [
  {
    type: "function",
    functionName: "getFlightTimes",
    function: {
      name: "get_flight_times",
      description: "Get the flight times between two cities",
      parameters: {
        type: "object",
        properties: {
          departure: {
            type: "string",
            description: "The departure city (airport code)",
          },
          arrival: {
            type: "string",
            description: "The arrival city (airport code)",
          },
        },
        required: ["departure", "arrival"],
        additionalProperties: false,
      },
    },
  },
  {
    type: "function",
    functionName: "get_delivery_date",
    function: {
      name: "get_delivery_date",
      description:
        "Get the delivery date for a customer's order. Call this whenever you need to know the delivery date, for example when a customer asks 'Where is my package'",
      parameters: {
        type: "object",
        properties: {
          order_id: {
            type: "string",
            description: "The customer's order ID.",
          },
        },
        required: ["order_id"],
        additionalProperties: false,
      },
    },
  },
];
```

```ts
// lib/utils/executeTool.ts
import EventEmitter from "./eventEmitter";
import { getCurrentWeather, getFutureWeatherWeek } from "../utils/weather";
import { fetchWebsiteContent } from "./fetchWebsiteContent";
import { searchWeb } from "./searchWeb";
import { getRegularResponse } from "./getRegularResponse";
import { getFlightTimes } from "./getFlightTimes";
import { describeImage } from "./describeImage";
import { performReasoning } from "./performReasoning";
import { performMath } from "./performMath";
import { convertHtmlToMarkdown } from "./convertHtmlToMarkdown.ts";
import { getDeliveryDate } from "./getDeliveryDate";

/**
 * Executes a tool based on the provided name and arguments.
 * @param toolName - Name of the tool to execute
 * @param args - Arguments to pass to the tool function
 * @param eventEmitter - Optional EventEmitter to track tool execution progress
 */
export const executeTool = async (
  toolName: string,
  args: any,
  eventEmitter: any = null
): Promise<string> => {
  const emitter = new EventEmitter(eventEmitter);
  console.log(`Executing tool: ${toolName} with arguments:`, args);

  const availableFunctions: { [key: string]: (...args: any) => any } = {
    search_web: searchWeb,
    get_flight_times: getFlightTimes,
    get_current_weather: getCurrentWeather,
    get_future_weather_week: getFutureWeatherWeek,
    get_regular_response: getRegularResponse,
    fetch_website_content: fetchWebsiteContent,
    describe_image: describeImage,
    perform_reasoning: performReasoning,
    perform_math: performMath,
    convert_html_to_markdown: convertHtmlToMarkdown,
    get_delivery_date: getDeliveryDate,
  };

  if (availableFunctions[toolName]) {
    try {
      await emitter.emit(`Executing tool: ${toolName}`, "in_progress");

      // Execute the tool function with provided arguments
      const result = await availableFunctions[toolName](args);

      const resultAsString =
        typeof result === "object" ? JSON.stringify(result) : result;

      // console.log(`Result from ${toolName}:`, resultAsString);
      await emitter.emit(
        `Tool ${toolName} execution completed`,
        "completed",
        true
      );

      // Return result as a string
      return resultAsString;
    } catch (error: any) {
      console.error(`Error executing tool ${toolName}:`, error);
      await emitter.emit(
        `Error executing tool: ${toolName}`,
        "error",
        true,
        error.message
      );
      return JSON.stringify({
        error: "An error occurred while executing the tool",
      });
    }
  } else {
    console.error(`Tool ${toolName} not found`);
    await emitter.emit(`Tool ${toolName} not found`, "error", true);
    return JSON.stringify({ error: "Tool not found" });
  }
};
```
```ts
const response = await fetch("/api/ollamaChat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model,
    messages,
    tools,
    stream: false,
  }),
});
```

Here's the response:

```json
{
  "model": "llama3.1",
  "created_at": "2024-09-19T02:20:01.9948428Z",
  "message": {
    "role": "assistant",
    "content": "",
    "tool_calls": [
      {
        "function": {
          "name": "get_delivery_date",
          "arguments": {
            "order_id": "ORDER123"
          }
        }
      }
    ]
  },
  "done_reason": "stop",
  "done": true,
  "total_duration": 4992693500,
  "load_duration": 4133287200,
  "prompt_eval_count": 731,
  "prompt_eval_duration": 516220000,
  "eval_count": 20,
  "eval_duration": 342176000
}
```

Here's the OpenAI response:

```js
{
  finish_reason: 'tool_calls',
  index: 0,
  logprobs: null,
  message: {
    content: null,
    role: 'assistant',
    function_call: null,
    tool_calls: [
      {
        id: 'call_62136354',
        function: {
          arguments: '{"order_id":"order_12345"}',
          name: 'get_delivery_date'
        },
        type: 'function'
      }
    ]
  }
}
```
Can this be done, no? I must be missing something — it works? I'm currently building an OpenAI tools app to test this the other way, hmm?

I mean, you're going to get errors — far out, I do anyway, lol — and sort them out; that's what it's all about. I will soon see. Cheers, hope this is taken well. Will let you know.
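One way to bridge the two response shapes quoted above is a small adapter. This is a sketch under assumptions: the synthetic `call_${index}` ids are mine (Ollama's response carries no id), and `toOpenAIToolCalls` is a hypothetical helper name.

```typescript
// Ollama returns tool-call arguments as a parsed object and no id; OpenAI
// returns arguments as a JSON string plus an id. Map the former to the latter.
interface OllamaToolCall {
  function: { name: string; arguments: Record<string, unknown> };
}

interface OpenAIToolCall {
  id: string;
  type: "function";
  function: { name: string; arguments: string };
}

function toOpenAIToolCalls(calls: OllamaToolCall[]): OpenAIToolCall[] {
  return calls.map((call, index) => ({
    id: `call_${index}`, // synthetic id — Ollama does not emit one
    type: "function",
    function: {
      name: call.function.name,
      arguments: JSON.stringify(call.function.arguments), // object -> JSON string
    },
  }));
}
```

A client written against the OpenAI shape could then consume the adapted array without changes.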
<!-- gh-comment-id:2359858745 -->

@YonTracks commented on GitHub (Sep 19, 2024):

Also, there is function calling and then there are Ollama tools — very similar; you can even use both for more AI power, hint hint.
Imagine when there are no errors and it just works — but is it? Which way is it working? These models can even fool you, e.g. calling Python (I'm lucky I can tell, because I use JS). Log everything — there is a lot going on, and it's hard to explain without seeming even more crazy, lol.

<!-- gh-comment-id:2359868379 -->

@YonTracks commented on GitHub (Sep 19, 2024):

I see: there is OpenAI function calling and OpenAI tools (Python), then we have Ollama tools, and folks are thinking about built-in "Python" tools. OK, the plot thickens, lol — building. (There probably is built-in tools learning, lol.) Epic, love it, cheers.

Last edit for this comment, sorry. Ah — I see ReadableStream errors when parsing. I will try. Good luck.

<!-- gh-comment-id:2359883539 -->

@YonTracks commented on GitHub (Sep 19, 2024):

It works? Not sure what the issue is (the ReadableStream bit) — it must be the front end/client you are using? OpenAI function calling works with my setup with no errors: I just pass the tools as-is to the API, then use the response. I will build an OpenAI/Ollama example — give me a few days and I will share what I learn. Cheers.

<!-- gh-comment-id:2359892130 -->

@LuckLittleBoy commented on GitHub (Sep 19, 2024):

Sorry, I don't know much about front-end technology. I wish you success — good luck.

<!-- gh-comment-id:2359896924 -->

@YonTracks commented on GitHub (Sep 19, 2024):

I will still build an example, slowly. Sorry if I seem blunt or crazy or whatever — I get that vibe everywhere I go, lol. I know I'm different (and if I try to change, it's worse, lol); I only want to help. Sorry, folks.

<!-- gh-comment-id:2359901287 -->

@YonTracks commented on GitHub (Sep 19, 2024):

yep, I see.

```json
{
    "result": {
        "id": "chatcmpl-A94lw91bO4io2ipX9M3MSJoveRsPO",
        "object": "chat.completion",
        "created": 1726726660,
        "model": "gpt-4-0613",
        "choices": [
            {
                "index": 0,
                "message": {
                    "role": "assistant",
                    "content": null,
                    "tool_calls": [
                        {
                            "id": "call_wZJ00hcxH0uEoeN8G2V6rzhN",
                            "type": "function",
                            "function": {
                                "name": "get_delivery_date",
                                "arguments": "{\n  \"order_id\": \"order12345\"\n}"
                            }
                        }
                    ],
                    "refusal": null
                },
                "logprobs": null,
                "finish_reason": "tool_calls"
            }
        ],
        "usage": {
            "prompt_tokens": 468,
            "completion_tokens": 19,
            "total_tokens": 487,
            "completion_tokens_details": {
                "reasoning_tokens": 0
            }
        },
        "system_fingerprint": null
    }
}
```

OpenAI includes an `id` and more to help match the correct function/tool call with its arguments, results, etc.
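That id is what ties a tool's result back to the call that requested it — a minimal sketch (the helper name `toolResultMessage` is mine; the message shape follows the OpenAI chat format quoted above):

```typescript
// Shape of a tool call as it appears in an OpenAI response.
interface ToolCall {
  id: string;
  function: { name: string; arguments: string };
}

// Build the follow-up message reporting a tool's result; tool_call_id lets
// the model match this result to the specific call it made.
function toolResultMessage(call: ToolCall, result: unknown) {
  return {
    role: "tool" as const,
    tool_call_id: call.id,
    content: typeof result === "string" ? result : JSON.stringify(result),
  };
}
```

The returned message would be appended to the conversation before the next chat-completion request.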


And ooh — paying quickly adds up (I'm broke, lol). I bet there are heaps of reasons why the OpenAI way is good/better, but I probably won't do the "OpenAI" (paid API) tools-app Ollama example, sorry — I think I understand a lot more now about why others feel the same. I will try my best to build an Ollama (free API) setup compatible with OpenAI and try it that way. Wish me luck, cheers.

<!-- gh-comment-id:2360110629 -->
Reference: github-starred/ollama#30040