[GH-ISSUE #12494] question: usage of openwebui enabled tools in model via llm endpoint api #16621
Originally created by @gabrieligbastos on GitHub (Apr 5, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/12494
Problem Description
Hi there 👋
I was experimenting with context enrichment tools (like calculate, get_user, etc.), and I created a custom tool and enabled it for a specific model (MY_MODEL). Everything works as expected when I use this model through the OpenWebUI interface — the tool is automatically enabled, gets triggered when needed, and everything flows smoothly. Great stuff!
However, here's the issue I'm running into:
In my setup, users can access the LLM either through the OpenWebUI frontend or from other applications that connect via OpenWebUI's OpenAI-compatible /api/chat/completions endpoint. In practice, though, when I call the API this way, the tool never gets triggered; it's as if the tool integration layer is skipped entirely.
Question
Is there currently a way to have the tool chain work via the API, similar to how it works in the UI? Or is this feature not yet supported for the OpenAI-compatible endpoint?
Thanks a lot! 🙏
Desired Solution you'd like
What I expected was that calling /api/chat/completions with MY_MODEL (which has the tool enabled) would behave the same as the frontend: the enabled tool would be triggered automatically whenever the conversation calls for it.
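For reference, here is a minimal sketch of the kind of API call described above. The base URL, API key, and model name are placeholders for this deployment, and no `tools` field is sent in the payload; the expectation is that the server applies the tools already enabled for the model on its side. Whether that actually happens on this code path is exactly the open question.

```python
import json
import urllib.request


def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Build an OpenAI-compatible chat completion request for Open WebUI.

    Note that no client-side `tools` array is included: the hope is that
    tools enabled for `model` in the Open WebUI admin panel are applied
    server-side, as they are when chatting through the web frontend.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/api/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",  # Open WebUI API key
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Hypothetical values; substitute your own deployment URL, key, and model.
req = build_chat_request(
    "http://localhost:3000", "sk-placeholder", "MY_MODEL", "What is 23 * 19?"
)
# response = urllib.request.urlopen(req)  # would perform the actual call
```

When sent from the frontend, an equivalent prompt triggers the enabled calculate tool; the question is whether this request, arriving through the OpenAI-compatible endpoint, goes through the same tool-handling layer.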
Alternatives Considered
No response
Additional Context
No response