[GH-ISSUE #9755] Support for multiple tool execution #15635

Closed
opened 2026-04-19 21:47:33 -05:00 by GiteaMirror · 14 comments
Owner

Originally created by @rodrigopv on GitHub (Feb 10, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/9755

Feature Request

Is your feature request related to a problem? Please describe.
I'm exploring certain use cases from Open WebUI to interact with different data stores / APIs by using Tools. At times I need to merge/join different data sources to guide the LLM toward a more specific response; however, right now, the Tools API only executes a single tool per prompt.

If I create a conversation with both tools enabled, only one of them will be executed. I tested both tools separately, and if I enable only one of them, it executes — meaning the tools do work properly when used in isolation.

Describe the solution you'd like
Based on the prompt, more than a single tool should be able to provide context to the LLM, so the LLM can join/merge/reason over more than a single tool's output.

Describe alternatives you've considered
The only workaround I can imagine is to duplicate the actions/tools so the whole logic happens in a single tool/call, but that could generate a lot of repeated code across tools, for a job that an LLM alone can solve.

Additional context
A sample use case that I've been trying:

  1. Create a tool to fetch a user's info by ID from an API.
  2. Create a tool to fetch a user's latest posts.

Prompt (using gpt-4o):

  • Provide info on user with id 12345 along with its latest posts.

Open WebUI would only call one of those tools. If I enable just a single one, it always calls it (as expected), but then misses the context from the other tool.
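For reference, the two tools from the use case above could be sketched as an Open WebUI tool class. This is a hypothetical sketch: the fetch callable stands in for a real HTTP request, and the `/users/...` paths are placeholders, not a real service. Open WebUI builds the tool schemas from the methods' type hints and docstrings.

```python
class Tools:
    def __init__(self, fetch=None):
        # `fetch` is injectable so the logic can be exercised without a live
        # API; a real tool would perform an HTTP GET against your service.
        self.fetch = fetch or (lambda path: "{}")

    def get_user_info(self, user_id: str) -> str:
        """
        Fetch a user's profile info by ID.
        :param user_id: The ID of the user to look up.
        """
        return self.fetch(f"/users/{user_id}")

    def get_latest_posts(self, user_id: str) -> str:
        """
        Fetch a user's latest posts by ID.
        :param user_id: The ID of the user whose posts to fetch.
        """
        return self.fetch(f"/users/{user_id}/posts")
```

With single-tool execution, a prompt needing both pieces of data only ever reaches one of these methods, which is the limitation being reported.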


@tjbck commented on GitHub (Feb 13, 2025):

Addressed with 68519d6ca7dcab1fd95118d6af7ac491d90cf57d — make sure the tool calling param is set to native!
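For context on what "native" means here: instead of having Open WebUI emulate tool selection via prompting, the tool schemas are passed straight through the backend's function-calling interface, and an OpenAI-style model can then return several `tool_calls` in one turn. A rough sketch of the request shape, using the two hypothetical tool names from the issue (this is the general OpenAI chat-completions format, not Open WebUI internals):

```python
# OpenAI-style chat completion payload with both tools declared.
# With native function calling, the model may answer with multiple
# entries in `tool_calls`, which lets both tools run for one prompt.
payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "user",
         "content": "Provide info on user with id 12345 along with its latest posts."},
    ],
    "tools": [
        {"type": "function",
         "function": {"name": "get_user_info",
                      "description": "Fetch a user's info by ID.",
                      "parameters": {"type": "object",
                                     "properties": {"user_id": {"type": "string"}},
                                     "required": ["user_id"]}}},
        {"type": "function",
         "function": {"name": "get_latest_posts",
                      "description": "Fetch a user's latest posts.",
                      "parameters": {"type": "object",
                                     "properties": {"user_id": {"type": "string"}},
                                     "required": ["user_id"]}}},
    ],
}
```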


@Classic298 commented on GitHub (Feb 14, 2025):

Sorry for my ignorance, but what parameter is this you are talking of that is related to tool calling that needs to be set to "native"?


@rodrigopv commented on GitHub (Feb 14, 2025):

> Sorry for my ignorance, but what parameter is this you are talking of that is related to tool calling that needs to be set to "native"?

This one:

![Image](https://github.com/user-attachments/assets/c852d1c3-09b6-4b89-a6b2-06b10b6e3549)

I confirm it works by setting it to native and using gpt-4 🚀


@Classic298 commented on GitHub (Feb 14, 2025):

> > Sorry for my ignorance, but what parameter is this you are talking of that is related to tool calling that needs to be set to "native"?
>
> This one: ![Image](https://github.com/user-attachments/assets/c852d1c3-09b6-4b89-a6b2-06b10b6e3549)
>
> I confirm it works by setting it to native and using gpt-4 🚀

Oh. I don't have that on my models.

I use pipelines to integrate models from Google vertex AI like claude or Gemini. Is there a documentation what I need to change in my pipeline?


@thiswillbeyourgithub commented on GitHub (Feb 14, 2025):

I don't see this option either. I'm using LiteLLM, if that matters. What is the specific reason some models do or do not have this?


@thiswillbeyourgithub commented on GitHub (Feb 14, 2025):

Fixed: I'm using OpenRouter, which doesn't always have the appropriate model metadata. Per https://docs.litellm.ai/docs/providers/ollama#example-usage---tool-calling I can add

    model_info:
      supports_function_calling: true

and it works now I think.
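For a fuller picture, that snippet sits under a model entry in the LiteLLM proxy's `config.yaml`. A sketch — the exposed model name and the environment variable are examples, not a prescribed setup:

```yaml
model_list:
  - model_name: claude_sonnet            # name exposed to Open WebUI
    litellm_params:
      model: openrouter/anthropic/claude-3.5-sonnet
      api_key: os.environ/OPENROUTER_API_KEY
    model_info:
      supports_function_calling: true    # advertise tool support to clients
```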


@Classic298 commented on GitHub (Feb 14, 2025):

@thiswillbeyourgithub did you test it? Does it work?
What did you change in your pipeline exactly? (I'm not using LiteLLM myself, but perhaps I can derive from your changes what I need to do in my Google Vertex AI pipeline.)


@thiswillbeyourgithub commented on GitHub (Feb 15, 2025):

Well, actually the rendering in the picture was not accurate for me. I had to go to the advanced parameters for the model in the admin settings, as well as in the workspace, and towards the top there was the native function calling setting. So basically it appeared towards the top of the list of parameters, like top K and temperature, etc.


@Classic298 commented on GitHub (Feb 17, 2025):

@thiswillbeyourgithub does it work for you, the native function calling?


@thiswillbeyourgithub commented on GitHub (Feb 17, 2025):

I'm not sure how to check whether the model is actually using native calling, but at least I think it works better since I enabled it.
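One way to check, assuming an OpenAI-compatible endpoint: with native calling enabled, the assistant turn that triggers tools carries structured `tool_calls` instead of plain text. A small sketch over a sample response dict (the response shape follows the OpenAI chat-completions format; the sample data is made up):

```python
def used_native_tools(response: dict) -> bool:
    """Return True if the first choice's message contains native tool calls."""
    message = response["choices"][0]["message"]
    return bool(message.get("tool_calls"))

# Made-up response illustrating what a native, multi-tool turn looks like.
sample = {
    "choices": [{
        "message": {
            "role": "assistant",
            "content": None,
            "tool_calls": [
                {"id": "call_1", "type": "function",
                 "function": {"name": "get_user_info",
                              "arguments": "{\"user_id\": \"12345\"}"}},
                {"id": "call_2", "type": "function",
                 "function": {"name": "get_latest_posts",
                              "arguments": "{\"user_id\": \"12345\"}"}},
            ],
        },
    }],
}
```

If the model is instead answering with plain prose describing what it would do, native calling is likely not in effect for that model.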


@Classic298 commented on GitHub (Feb 17, 2025):

@thiswillbeyourgithub could you share your pipeline implementation w/ me please? At least the part for the function calling.


@thiswillbeyourgithub commented on GitHub (Feb 17, 2025):

I am not using any pipeline, only tools. You can see an example of a tool I'm using [on that repo](https://github.com/thiswillbeyourgithub/openwebui_custom_pipes_filters/blob/main/tools/anki_tool.py).


@Classic298 commented on GitHub (Feb 17, 2025):

> I am not using any pipeline, only tools. You can see an example of a tool I'm using [on that repo](https://github.com/thiswillbeyourgithub/openwebui_custom_pipes_filters/blob/main/tools/anki_tool.py)

Well... What models do you use and more importantly how do you integrate the models? (Since native function calling works for you?)


@thiswillbeyourgithub commented on GitHub (Feb 17, 2025):

I'm not sure what you mean by "integrating a model". I'm mostly using Claude 3.5 Sonnet via OpenRouter via LiteLLM. Meaning: I run LiteLLM, it exposes "openrouter/anthropic/claude-3.5-sonnet" as "claude_sonnet", and I use that LiteLLM endpoint as my OpenAI connection in Open WebUI. It has worked with other models too; I just tested it with Ollama mistral-nemo:12b-instruct-2407-q3_K_S (NOT via LiteLLM):

![Image](https://github.com/user-attachments/assets/8f8216f2-4d7d-4216-ae64-d65648439f78)

Btw in my litellm just in case I added

      model_info:
        supports_function_calling: true

Not sure it's needed though.


Reference: github-starred/open-webui#15635