Mirror of https://github.com/open-webui/open-webui.git, synced 2026-05-06 19:08:59 -05:00
[GH-ISSUE #13101] feat: Add Support for Adjusting Reasoning Intensity in OpenAI-Compatible Mode #32339
Originally created by @L0stInFades on GitHub (Apr 21, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/13101
Problem Description
In the OpenAI-compatible SDK format, Claude (developed by Anthropic) currently lacks support for enabling reasoning modes or controlling the reasoning token budget.
Desired Solution
Anthropic's documentation demonstrates how to enable and configure reasoning by adding an extra_body parameter to API requests, for example:
    from openai import OpenAI

    # Point the OpenAI SDK at Anthropic's OpenAI-compatible endpoint
    client = OpenAI(
        api_key="ANTHROPIC_API_KEY",
        base_url="https://api.anthropic.com/v1/",
    )

    response = client.chat.completions.create(
        model="claude-3-7-sonnet-20250219",
        messages=[...],
        extra_body={
            "thinking": {
                "type": "enabled",
                "budget_tokens": 2000
            }
        }
    )
However, this functionality relies on Anthropic-specific extensions and may not be natively available or properly implemented in standard OpenAI-compatible environments. As a result, users are unable to leverage these reasoning controls when interacting with Claude through generic OpenAI SDKs.
For more details, please refer to https://docs.anthropic.com/en/api/openai-sdk.
I hope you can provide updates in this area to better support the use of Claude.
Alternatives Considered
No response
Additional Context
No response
@tjbck commented on GitHub (Apr 21, 2025):
reasoning_effort param is supported.
@L0stInFades commented on GitHub (Apr 22, 2025):
Actually, the reasoning_effort parameter cannot enable Claude's reasoning mode: Claude simply ignores it, as the official documentation confirms.
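If someone did want to bridge the two conventions themselves (for example in a middleware or custom function), a minimal sketch might translate OpenAI's reasoning_effort levels into Anthropic's thinking block. The helper name and the budget numbers below are illustrative assumptions, not values from either vendor's documentation:

    # Hypothetical helper: map OpenAI-style reasoning_effort values onto
    # Anthropic's extra_body "thinking" block. The token budgets are
    # assumed for illustration; tune them for your own workload.
    def effort_to_thinking(reasoning_effort: str) -> dict:
        budgets = {"low": 1024, "medium": 4096, "high": 16384}
        if reasoning_effort not in budgets:
            raise ValueError(f"unknown reasoning_effort: {reasoning_effort}")
        return {
            "thinking": {
                "type": "enabled",
                "budget_tokens": budgets[reasoning_effort],
            }
        }

    # e.g. pass effort_to_thinking("medium") as extra_body in the request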
@Classic298 commented on GitHub (Apr 22, 2025):
@L0stInFades Open WebUI is meant to support only OpenAI API compatible AI endpoints.
reasoning_effort is an OpenAI API parameter.
If Anthropic doesn't support this, you have a couple of options.
So, yes, Anthropic doesn't support reasoning_effort. But it is not Open WebUI's goal to implement all the small differences across the 100+ AI providers out there. Implementing all the different APIs is too much work, let alone maintaining them: there would be providers on a weekly basis adding features or making small API changes that could break things for Open WebUI. That is impossible to maintain, and it's where your own custom-built functions (or community functions) or middlewares like LiteLLM come into play.
If they don't manage to come up with a unified API then that's their problem if they don't want to be customer-friendly.
Even Google learned this lesson and now offers a beta OpenAI API compatible endpoint for all their models.
And if you are reliant on other providers who do not follow the OpenAI API schema, then use a middleware like LiteLLM or use openrouter.
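To make the "custom function" option concrete, here is a hedged sketch of an Open WebUI-style filter that injects Anthropic's thinking block into outgoing requests for Claude models. The Filter/inlet shape follows Open WebUI's filter-function convention, but the exact hook names and where the extra body must be placed are assumptions; check the current plugin documentation before relying on this:

    # Sketch of an Open WebUI-style filter function (assumed interface).
    # inlet() receives the request body before it is sent upstream and
    # may modify it; here we add Anthropic's "thinking" block for Claude.
    class Filter:
        def inlet(self, body: dict) -> dict:
            model = body.get("model", "")
            if model.startswith("claude"):
                # budget_tokens value is illustrative
                body["thinking"] = {"type": "enabled", "budget_tokens": 2000}
            return body

Non-Claude requests pass through unchanged, so the filter is safe to enable globally.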
And this is not OpenAI API compatible, as you claimed.
OpenAI's docs show otherwise: Anthropic does NOT actually follow the OpenAI API schema here.