Mirror of https://github.com/open-webui/open-webui.git (synced 2026-05-06 10:58:17 -05:00)
[GH-ISSUE #23175] issue: reasoning_content is stripped from assistant tool call messages, breaking multi-turn tool calling with reasoning models (Kimi K2.5, etc.) #35437
Originally created by @estemit on GitHub (Mar 28, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/23175
Check Existing Issues
Installation Method
Docker
Open WebUI Version
v0.8.12 (latest)
Ollama Version (if applicable)
No response
Operating System
Debian 13
Browser (if applicable)
No response
Confirmation
Expected Behavior
When using reasoning-enabled models like Kimi K2.5 (Moonshot) with native function calling, the `reasoning_content` field from the assistant's tool call response should be preserved and included when reconstructing the conversation history for subsequent API calls. This is required by the Moonshot API and similar reasoning-model providers.

According to Moonshot's official documentation:
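For illustration, the assistant turn that must be replayed verbatim on the next request can be sketched as follows. Field names follow the OpenAI-compatible chat format; the tool name, arguments, and ids are hypothetical examples, not taken from the issue:

```python
# Sketch of an assistant tool-call message that keeps reasoning_content.
# The tool name, arguments, and call id below are hypothetical.
assistant_turn = {
    "role": "assistant",
    "content": "",
    "reasoning_content": "The user asked about the weather, so call get_weather.",
    "tool_calls": [
        {
            "id": "call_0",
            "type": "function",
            "function": {"name": "get_weather", "arguments": '{"city": "Berlin"}'},
        }
    ],
}

# On the follow-up turn, the full history -- including reasoning_content --
# must be resent, followed by the tool result message.
messages = [
    {"role": "user", "content": "What's the weather in Berlin?"},
    assistant_turn,
    {"role": "tool", "tool_call_id": "call_0", "content": '{"temp_c": 18}'},
]
```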
Actual Behavior
OpenWebUI strips the `reasoning_content` field from assistant messages when they contain `tool_calls`. This causes the upstream API (Moonshot/OpencodeGO) to return a 400 Bad Request error on the next turn.

This breaks multi-turn tool calling with any reasoning-enabled model.
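The failure mode can be reproduced in miniature with a small check that mirrors the provider-side validation quoted in the references below ("reasoning_content is missing in assistant tool call message at index N"). This is a sketch, not OpenWebUI code:

```python
def find_missing_reasoning(messages: list[dict]) -> list[int]:
    """Indices of assistant tool-call messages that lost reasoning_content,
    mirroring the provider-side check that produces the 400 error (sketch)."""
    return [
        i
        for i, m in enumerate(messages)
        if m.get("role") == "assistant"
        and m.get("tool_calls")
        and not m.get("reasoning_content")
    ]

# History as OpenWebUI currently resends it: tool_calls survive,
# but reasoning_content has been dropped from the assistant turn.
history = [
    {"role": "user", "content": "What's the weather in Berlin?"},
    {
        "role": "assistant",
        "content": "",
        "tool_calls": [
            {"id": "call_0", "type": "function",
             "function": {"name": "get_weather", "arguments": "{}"}},
        ],
    },
    {"role": "tool", "tool_call_id": "call_0", "content": '{"temp_c": 18}'},
]

# find_missing_reasoning(history) -> [1]: the assistant turn at index 1
# carries tool_calls but no reasoning_content, so the provider rejects it.
```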
Steps to Reproduce
1. Configure a reasoning-enabled model (e.g., `kimi-k2.5` via Moonshot API or OpencodeGO provider).
2. Enable native function calling (`function_calling: native` in model advanced params).
3. Trigger a tool call and continue the conversation; the next request fails because `reasoning_content` was stripped from the assistant message containing tool calls.

Logs & Screenshots
Error from upstream API:
Additional Information
Root Cause Analysis
The issue is in how OpenWebUI reconstructs the conversation history when sending requests to the LLM API. When an assistant message contains both `tool_calls` and `reasoning_content`, OpenWebUI appears to drop the `reasoning_content` field before sending it to the API.

This is architecturally similar to the issue that LiteLLM faced with Anthropic's `thinking_blocks` (see reference below). The API is stateless and requires the client to resend `reasoning_content` in assistant messages, but OpenWebUI strips this field.

Current Workaround
I had to implement a proxy service (`kimi-proxy`) that caches and re-injects `reasoning_content` from assistant tool call responses. This is not ideal and should be handled natively by OpenWebUI.
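The issue does not include the proxy's source, but the re-injection such a proxy must perform can be sketched roughly like this (all names are hypothetical; the real `kimi-proxy` implementation may differ):

```python
# Hypothetical sketch of a re-injecting proxy: cache reasoning_content from
# upstream assistant tool-call responses keyed by tool-call id, then restore
# it on assistant messages that the client later resends without it.
_reasoning_cache: dict[str, str] = {}

def remember_reasoning(response_message: dict) -> None:
    """Cache reasoning_content from an upstream assistant tool-call response."""
    reasoning = response_message.get("reasoning_content")
    if not reasoning:
        return
    for call in response_message.get("tool_calls") or []:
        _reasoning_cache[call["id"]] = reasoning

def reinject_reasoning(history: list[dict]) -> list[dict]:
    """Restore reasoning_content on assistant tool-call messages that lost it."""
    for msg in history:
        if msg.get("role") != "assistant" or not msg.get("tool_calls"):
            continue
        if msg.get("reasoning_content"):
            continue
        for call in msg["tool_calls"]:
            if call["id"] in _reasoning_cache:
                msg["reasoning_content"] = _reasoning_cache[call["id"]]
                break
    return history
```

A proxy would call `remember_reasoning` on every upstream response and `reinject_reasoning` on every outgoing request body before forwarding it.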
References
- Similar issue in LiteLLM ("thinking is enabled but reasoning_content is missing in assistant tool call message at index N"): https://github.com/BerriAI/litellm/issues/21672
- Moonshot API documentation (`reasoning_content` must be preserved during multi-step tool calling): https://platform.moonshot.ai/docs/guide/kimi-k2-5-quickstart#tool-use-compatibility
- Related OpenWebUI issue: https://github.com/open-webui/open-webui/issues/23173
Workaround proxy logs showing the fix:
Proposed Solution
When reconstructing the conversation history for API calls, OpenWebUI should preserve the `reasoning_content` field in assistant messages that contain `tool_calls`.

This may require changes in:
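The proposed rule could be sketched as a small pass over the outgoing history. This is illustrative only, not OpenWebUI's actual payload-building code; function names are made up:

```python
def build_api_message(msg: dict) -> dict:
    """Rebuild one stored message for the upstream API call (sketch).

    The key change: when an assistant message carries tool_calls, keep its
    reasoning_content instead of dropping it, as reasoning-model providers
    such as Moonshot require.
    """
    out = {"role": msg["role"], "content": msg.get("content", "")}
    if msg.get("tool_calls"):
        out["tool_calls"] = msg["tool_calls"]
        if msg.get("reasoning_content"):
            out["reasoning_content"] = msg["reasoning_content"]
    if msg.get("tool_call_id"):
        out["tool_call_id"] = msg["tool_call_id"]
    return out

def rebuild_history(history: list[dict]) -> list[dict]:
    """Apply the preservation rule to every message in the conversation."""
    return [build_api_message(m) for m in history]
```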
Additional Information

The Moonshot API requires `reasoning_content` to be preserved.

@huaanhmai28-rgb commented on GitHub (Mar 31, 2026):
same problem
@aayushbaluni commented on GitHub (Apr 15, 2026):
Submitted a fix in #23742. The root cause is that `convert_output_to_messages()` in `misc.py` never sets `reasoning_content` on the emitted assistant message dict — the reasoning text is only folded into `content` as tagged text. The fix adds a `pending_reasoning` accumulator so `reasoning_content` is preserved alongside `tool_calls` for providers that require it.

@tjbck commented on GitHub (Apr 17, 2026):
Likely addressed in dev.
@tjbck commented on GitHub (Apr 21, 2026):
Reverting this change in dev; it introduces incompatibilities with certain providers. Should be handled externally instead.
@RodolfoCastanheira commented on GitHub (Apr 21, 2026):
How?
@Marutselu commented on GitHub (Apr 23, 2026):
I have been using my own patch for almost 3 months, and it is working perfectly on my end with all providers (OpenAI, Anthropic, Gemini, Deepseek, MoonshotAI, LiteLLM, OpenRouter).
Also, currently it cannot be handled externally because reasoning_content is not kept without patching the code.