[GH-ISSUE #12494] question: usage of openwebui enabled tools in model via llm endpoint api #16621

Closed
opened 2026-04-19 22:30:55 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @gabrieligbastos on GitHub (Apr 5, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/12494

Check Existing Issues

  • I have searched the existing issues and discussions.

Problem Description

Hi there 👋

I was experimenting with context enrichment tools (like calculate, get_user, etc.), and I created a custom tool and enabled it for a specific model (MY_MODEL). Everything works as expected when I use this model through the OpenWebUI interface — the tool is automatically enabled, gets triggered when needed, and everything flows smoothly. Great stuff!

However, here's the issue I'm running into:

In my setup, users can access the LLM either through the Open WebUI frontend or from other applications that connect via Open WebUI's OpenAI-compatible `/api/chat/completions` endpoint.

However, when I call the API this way, the tool never gets triggered — it's as if the tool integration layer is skipped entirely.
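For reference, a minimal sketch of the kind of API call described above. The base URL, API key, and `MY_MODEL` model ID are placeholders for this setup, not fixed values; the payload simply follows the OpenAI chat-completions schema that the endpoint accepts.

```python
# Sketch of calling Open WebUI's OpenAI-compatible endpoint from another app.
# BASE_URL and API_KEY are placeholders; "MY_MODEL" is the example model
# that has the custom tool enabled in the Open WebUI interface.
import json
import urllib.request

BASE_URL = "http://localhost:3000"  # assumption: a local Open WebUI instance
API_KEY = "sk-..."                  # an Open WebUI API key


def build_request(prompt: str, model: str = "MY_MODEL") -> urllib.request.Request:
    """Build the POST request; the body mirrors the OpenAI chat schema."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        f"{BASE_URL}/api/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# To actually send it (requires a running instance):
# with urllib.request.urlopen(build_request("What is 2 + 2?")) as resp:
#     print(json.load(resp))
```

When sent through this path, the response comes straight from the model — the tool enabled on `MY_MODEL` in the UI is not invoked.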

Question
Is there currently a way to have the tool chain work via the API, similar to how it works in the UI? Or is this feature not yet supported for the OpenAI-compatible endpoint?

Thanks a lot! 🙏

Desired Solution

What I expected was that calling `/api/chat/completions` using MY_MODEL (which has the tool enabled) would behave the same as the frontend:

1. Open WebUI acts as the bridge,
2. receives the prompt,
3. detects the need to use a tool,
4. calls the tool,
5. appends the result,
6. and finally returns the response when finish_reason = stop.
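The steps above amount to a standard tool-call loop. A hedged sketch of what that server-side loop could look like — `call_model` and `run_tool` here are hypothetical stand-ins, not real Open WebUI functions:

```python
# Sketch of the expected bridge behavior: keep querying the model, executing
# any tool calls it requests and appending their results, until the model
# answers with finish_reason == "stop". `call_model` and `run_tool` are
# caller-supplied stand-ins, not actual Open WebUI internals.


def tool_loop(messages, call_model, run_tool, max_rounds=5):
    """Run the model/tool round trip until the model produces a final answer."""
    for _ in range(max_rounds):
        reply = call_model(messages)
        if reply["finish_reason"] == "stop":
            return reply["content"]  # final answer, returned to the client
        for call in reply["tool_calls"]:
            result = run_tool(call["name"], call["arguments"])
            # Feed the tool output back so the next round can use it.
            messages.append(
                {"role": "tool", "name": call["name"], "content": result}
            )
    raise RuntimeError("tool loop did not converge")
```

In the UI this loop (or its equivalent) runs automatically; the question is whether the same happens for requests arriving via the OpenAI-compatible endpoint.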

Alternatives Considered

No response

Additional Context

No response


Reference: github-starred/open-webui#16621