[GH-ISSUE #7886] Classify tool call vs. content earlier and stream to user #67101

Closed
opened 2026-05-04 09:28:29 -05:00 by GiteaMirror · 5 comments

Originally created by @ParthSareen on GitHub (Nov 30, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7886

Originally assigned to: @ParthSareen on GitHub.

https://github.com/ollama/ollama/issues/5796#issuecomment-2508374342

GiteaMirror added the feature request label 2026-05-04 09:28:29 -05:00

@gregnr commented on GitHub (Nov 30, 2024):

Hey @ParthSareen great work on tool streaming, this is big 😄

Just wanted to chime in as we are hoping to use Ollama tool streaming at Supabase with [database.build](https://database.build/). We're experiencing the same issue as @Rizaldy in https://github.com/ollama/ollama/issues/5796#issuecomment-2508374342, where tool call info is all returned in a single delta rather than split across multiple partial deltas. Regarding your comment in the other thread:

> If there is a toolcall present, the content should be removed and only the call should be sent back, and if there is no toolcall we should return whatever content was returned by the model.

Correct me if I'm wrong, but OpenAI should support both tool calls and regular text content in the same response. Most of the time the model will reply with either tool call(s) or regular text content (as they've increasingly been trained to do), but from the API side both should be supported at the same time.
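
For reference, an OpenAI-style stream that mixes the two interleaves both kinds of deltas: text arrives under `delta.content` and tool-call fragments under `delta.tool_calls[].function.arguments`. Abridged, illustrative chunks (ids and values made up):

```json
{"choices":[{"delta":{"content":"Let me check."},"finish_reason":null}]}
{"choices":[{"delta":{"tool_calls":[{"index":0,"id":"call_abc123","type":"function","function":{"name":"get_weather","arguments":""}}]},"finish_reason":null}]}
{"choices":[{"delta":{"tool_calls":[{"index":0,"function":{"arguments":"{\"city\":\"To"}}]},"finish_reason":null}]}
{"choices":[{"delta":{"tool_calls":[{"index":0,"function":{"arguments":"ronto\"}"}}]},"finish_reason":null}]}
{"choices":[{"delta":{},"finish_reason":"tool_calls"}]}
```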

I assumed that under the hood OpenAI used a small FSM during streaming that would:

  1. Check whether the token is a tool-call start token.
  2. If not, stream the token as a regular text delta.
  3. If yes, stream each token as a tool-call argument delta until an end token is reached.
  4. Repeat for each token.

I could be wrong, but this seems to be the behaviour I see from OpenAI's API. Do you think this is possible to do with Ollama?
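
For concreteness, here is a minimal sketch of that two-state FSM in Go. The `<tool_call>`/`</tool_call>` markers and the `classify` helper are hypothetical stand-ins (real models mark tool calls with whatever special tokens their chat template defines); this is not Ollama's implementation, just the idea from the list above:

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical control tokens for this sketch.
const (
	toolStart = "<tool_call>"
	toolEnd   = "</tool_call>"
)

// delta is either a piece of regular content or a fragment of tool-call JSON.
type delta struct {
	toolCall bool
	text     string
}

// classify is the two-state FSM: tokens pass through as content deltas
// until a start token flips the state, then stream as tool-call argument
// deltas until the end token flips it back.
func classify(tokens []string) []delta {
	var out []delta
	inToolCall := false
	for _, tok := range tokens {
		switch {
		case !inToolCall && strings.TrimSpace(tok) == toolStart:
			inToolCall = true
		case inToolCall && strings.TrimSpace(tok) == toolEnd:
			inToolCall = false
		default:
			out = append(out, delta{toolCall: inToolCall, text: tok})
		}
	}
	return out
}

func main() {
	tokens := []string{"Sure", ", checking. ", "<tool_call>", `{"name":"get_weather",`, `"arguments":{"city":"Paris"}}`, "</tool_call>"}
	for _, d := range classify(tokens) {
		fmt.Printf("toolCall=%v text=%q\n", d.toolCall, d.text)
	}
}
```

One real-world wrinkle this sketch ignores: a start marker can be split across sampled tokens, so a production classifier has to buffer a partial prefix match before deciding which way to emit.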


@ParthSareen commented on GitHub (Nov 30, 2024):

Hi @gregnr! We're thinking through a good way to do this and have also considered a similar approach. Will definitely be getting to this in the coming weeks. Our logic for capturing tool calls needs a bit of rework, so I'm a bit worried about adding more things on top of it for now.

I think a good middle ground right now would be to only pass in tools when needed (see the sketch below) - but I understand that can be difficult to determine for certain applications. Will keep you posted here - definitely top of mind!
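
A minimal sketch of that middle ground, assuming an application-side heuristic: `needsTools` below is a hypothetical stand-in for whatever check fits the app (keyword match, a small router model, etc.), and the request shape is a generic map rather than any specific client API:

```go
package main

import (
	"fmt"
	"strings"
)

// message is a simplified chat message for this sketch.
type message struct{ Role, Content string }

// needsTools is a hypothetical, application-specific check; here, a
// naive keyword heuristic.
func needsTools(msgs []message) bool {
	last := strings.ToLower(msgs[len(msgs)-1].Content)
	return strings.Contains(last, "weather")
}

// buildRequest attaches tool definitions only when the turn likely
// needs them, so ordinary turns keep token-by-token streaming.
func buildRequest(msgs []message, tools []map[string]any) map[string]any {
	req := map[string]any{
		"model":    "llama3.1",
		"messages": msgs,
		"stream":   true,
	}
	if needsTools(msgs) {
		req["tools"] = tools
	}
	return req
}

func main() {
	msgs := []message{{Role: "user", Content: "What's the weather in Paris?"}}
	req := buildRequest(msgs, []map[string]any{{"type": "function"}})
	fmt.Println("tools attached:", req["tools"] != nil)
}
```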


@Rizaldy commented on GitHub (Dec 1, 2024):

Hi @ParthSareen, thanks for taking the time to consider this, and no rush; we're happy to wait for a solid implementation that doesn't introduce new issues with the current setup.

And my two cents for others who have the same use case as me: instead of adding an extra layer to determine whether tools are needed or making additional LLM calls, I've opted to create a fake stream generator. This approach works well since Ollama now effectively supports tool calling with the stream=true flag, thanks to @ParthSareen. While the current limitation is that the response comes back as a single output rather than token by token, the fake stream generator bridges the gap until a proper fix is implemented.
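
As a rough illustration of that workaround (not the actual code described above), a fake stream generator takes the already-complete response and re-emits it in small chunks so downstream consumers can keep their streaming code path. A minimal Go sketch; the chunking unit and pacing delay are arbitrary:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// fakeStream re-emits an already-complete model response as word-sized
// chunks to mimic token-by-token streaming.
func fakeStream(full string) <-chan string {
	out := make(chan string)
	go func() {
		defer close(out)
		for _, word := range strings.SplitAfter(full, " ") {
			out <- word
			time.Sleep(20 * time.Millisecond) // pacing so it looks incremental
		}
	}()
	return out
}

func main() {
	for chunk := range fakeStream("The weather in Paris is 11°C and cloudy.") {
		fmt.Print(chunk)
	}
	fmt.Println()
}
```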


@davidliudev commented on GitHub (Dec 14, 2024):

I am facing the same issue here. Great to know that this is being planned, as OpenAI supports both streaming and tool calls, with proper token-by-token deltas.

Looking forward to updates in the coming weeks.


@cypherbits commented on GitHub (Mar 7, 2025):

🥲 hoping this gets fixed soon...

Reference: github-starred/ollama#67101