[GH-ISSUE #9755] Support for multiple tool execution #31163
Originally created by @rodrigopv on GitHub (Feb 10, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/9755
Feature Request
Is your feature request related to a problem? Please describe.
I'm exploring some use cases in Open WebUI that interact with different data stores/APIs through Tools. At times I need to merge/join different data sources to guide the LLM to a more specific response; however, right now the Tools API only executes a single tool per prompt.
If I create a conversation with both tools enabled, only one of them is executed. I tested both tools separately, and each one executes when it is the only one enabled, meaning the tools do work properly in isolation.
Describe the solution you'd like
Based on the prompt, more than one tool should be able to provide context to the LLM, so the LLM can join/merge/reason over more than a single tool's output.
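As a sketch of the desired flow (assuming an OpenAI-style client; the tool names and schemas below are hypothetical, not my actual tools), the model should be able to request both tools in a single assistant turn and then reason over both results:

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Two hypothetical tool schemas standing in for separate data sources.
def make_tool(name: str, description: str) -> dict:
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": {"product": {"type": "string"}},
                "required": ["product"],
            },
        },
    }

tools = [
    make_tool("query_sales_db", "Look up sales figures for a product."),
    make_tool("query_inventory_api", "Look up stock levels for a product."),
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Compare sales and stock for widgets."}],
    tools=tools,
)

# With native tool calling, a single assistant turn can request several
# tools at once; each requested call arrives in message.tool_calls.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```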
Describe alternatives you've considered
The only workaround I can imagine is duplicating the actions/tools so the whole logic happens in a single tool/call, but that would generate a lot of repeated code across tools for a job that an LLM alone can solve.
Additional context
A sample use case that I've been trying.
Prompt (using gpt-4o):
Open WebUI would only call one of those tools. If I enable just a single one, it is always called (as expected), but the context from the other tool is missing.
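For context, an Open WebUI tool is a Python module with a `Tools` class whose typed, documented methods are exposed to the model. The two stubs below are hypothetical stand-ins for the tools I enabled, not the real implementations:

```python
class Tools:
    # Open WebUI exposes each method of this class as a callable tool,
    # deriving its schema from the type hints and docstring.
    # Both methods here are illustrative stubs.

    def get_sales(self, product: str) -> str:
        """
        Return sales figures for a product.
        :param product: Name of the product to look up.
        """
        return f"Sales for {product}: 120 units last month."

    def get_inventory(self, product: str) -> str:
        """
        Return current stock levels for a product.
        :param product: Name of the product to look up.
        """
        return f"Inventory for {product}: 45 units in stock."
```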
@tjbck commented on GitHub (Feb 13, 2025):
Addressed with 68519d6ca7; make sure the tool calling param is set to native!
@Classic298 commented on GitHub (Feb 14, 2025):
Sorry for my ignorance, but which tool-calling-related parameter needs to be set to "native"?
@rodrigopv commented on GitHub (Feb 14, 2025):
This one: the "Function Calling" option in the model's advanced params.

I confirm it works by setting it to native and using gpt-4 🚀
@Classic298 commented on GitHub (Feb 14, 2025):
Oh, I don't have that on my models.
I use pipelines to integrate models from Google Vertex AI, like Claude or Gemini. Is there documentation on what I need to change in my pipeline?
@thiswillbeyourgithub commented on GitHub (Feb 14, 2025):
I don't see this option either. I'm using LiteLLM, if that matters. What is the specific reason some models do or don't have this?
@thiswillbeyourgithub commented on GitHub (Feb 14, 2025):
Fixed: I'm using OpenRouter, which doesn't always have the appropriate model metadata. Going by https://docs.litellm.ai/docs/providers/ollama#example-usage---tool-calling, I can add it myself, and it works now, I think.
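The exact snippet wasn't preserved here; a sketch of the kind of addition the linked docs suggest (an assumption, not the actual config from this comment) is declaring tool-calling support in LiteLLM's `model_info`, shown with the Python `Router`, which takes the same `model_list` structure as the proxy config:

```python
from litellm import Router

# ASSUMPTION: the original snippet is not preserved; this guesses at
# declaring tool-calling support for a model whose OpenRouter metadata
# is incomplete, so downstream clients see the capability.
router = Router(
    model_list=[
        {
            "model_name": "claude_sonnet",
            "litellm_params": {
                "model": "openrouter/anthropic/claude-3.5-sonnet",
            },
            "model_info": {
                "supports_function_calling": True,  # assumed addition
            },
        }
    ]
)
```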
@Classic298 commented on GitHub (Feb 14, 2025):
@thiswillbeyourgithub did you test it? Does it work?
What did you change in your pipeline, exactly? (I'm not using LiteLLM myself, but perhaps I can derive what I need to do in my Google Vertex AI pipeline from your changes.)
@thiswillbeyourgithub commented on GitHub (Feb 15, 2025):
Well, actually, the rendering in the picture was not accurate for me; I had to go to the advanced parameters for the model in the admin settings, as well as in the workspace, and towards the top there was the native function calling feature. So basically it appeared near the top of the list of parameters, alongside Top K, temperature, etc.
@Classic298 commented on GitHub (Feb 17, 2025):
@thiswillbeyourgithub does native function calling work for you?
@thiswillbeyourgithub commented on GitHub (Feb 17, 2025):
I'm not sure how to check whether the model is actually using native calling, but it does seem to work better since I enabled it.
@Classic298 commented on GitHub (Feb 17, 2025):
@thiswillbeyourgithub could you share your pipeline implementation with me, please? At least the function calling part.
@thiswillbeyourgithub commented on GitHub (Feb 17, 2025):
I am not using any pipeline, only tools. You can see an example of a tool I'm using in that repo.
@Classic298 commented on GitHub (Feb 17, 2025):
Well... what models do you use, and more importantly, how do you integrate them? (Since native function calling works for you.)
@thiswillbeyourgithub commented on GitHub (Feb 17, 2025):
I'm not sure what you mean by "integrating a model". I'm mostly using Claude 3.5 Sonnet via OpenRouter via LiteLLM. Meaning: I run LiteLLM, it exposes "openrouter/anthropic/claude-3.5-sonnet" as "claude_sonnet", and I use that LiteLLM endpoint as my OpenAI connection in Open WebUI. It has worked with other models too; I just tested it with Ollama's mistral-nemo:12b-instruct-2407-q3_K_S (NOT via LiteLLM):
Btw, in my LiteLLM config, just in case, I added it there as well. Not sure it's needed, though.
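A sketch of this setup, and of one way to check whether native calling is actually in effect (the proxy URL, API key, model alias, and tool schema below are all assumptions): point an OpenAI-compatible client at the LiteLLM endpoint and look for structured `tool_calls` in the reply instead of plain text.

```python
from openai import OpenAI

# ASSUMPTIONS: LiteLLM proxy on localhost:4000, a placeholder key, and
# the "claude_sonnet" alias from the comment above; the tool schema is
# hypothetical.
client = OpenAI(base_url="http://localhost:4000/v1", api_key="sk-anything")

tools = [{
    "type": "function",
    "function": {
        "name": "add",
        "description": "Add two integers.",
        "parameters": {
            "type": "object",
            "properties": {
                "a": {"type": "integer"},
                "b": {"type": "integer"},
            },
            "required": ["a", "b"],
        },
    },
}]

response = client.chat.completions.create(
    model="claude_sonnet",
    messages=[{"role": "user", "content": "Use the add tool to compute 2 + 3."}],
    tools=tools,
)

# Native tool calling surfaces structured tool_calls on the message;
# prompt-based ("default") calling would leave this empty and answer in text.
print(response.choices[0].message.tool_calls)
```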