mirror of
https://github.com/open-webui/open-webui.git
synced 2026-05-06 19:08:59 -05:00
issue: Output stops after first tool call with reasoning enabled (new regression in 0.6.34) #6712
Originally created by @Bickio on GitHub (Oct 20, 2025).
Check Existing Issues
Installation Method
Docker
Open WebUI Version
v0.6.34
Ollama Version (if applicable)
No response
Operating System
MacOS 15.4.1
Browser (if applicable)
No response
Confirmation
Expected Behavior
Model does its initial thinking, uses the tool natively and then continues its response
Actual Behavior
Model output stops after the first tool use
Steps to Reproduce
Configure a reasoning + native function execution model in litellm (I tested with Claude 4.5 Sonnet)
Set merge_reasoning_content_in_choices: true, but no other special settings
Configure litellm as an OpenAI-compatible server in Open WebUI
Adjust the model settings in Open WebUI (either in admin, or chat settings): set Function Calling to Native and Reasoning Effort to medium
Configure an "external tool" connection in Open WebUI
Prompt the model with the tool enabled, in such a way that it will use the tool
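For reference, a minimal litellm proxy config sketch matching the setup above; the model alias and API key environment variable are illustrative assumptions, not the exact config used:

```yaml
# Illustrative litellm proxy config (model alias and key env var are assumptions)
model_list:
  - model_name: claude-sonnet-4-5
    litellm_params:
      model: anthropic/claude-sonnet-4-5
      api_key: os.environ/ANTHROPIC_API_KEY

litellm_settings:
  # The one non-default setting mentioned in the report
  merge_reasoning_content_in_choices: true
```

This proxy would then be added to Open WebUI as an OpenAI API connection pointing at the litellm server's base URL.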
Logs & Screenshots
SCREENSHOT FROM v0.6.34 (NOT WORKING):
SCREENSHOT FROM v0.6.33 (WORKING):
Additional Information
I've tested upgrading and downgrading between the two versions multiple times, rerunning the prompts each time. The issue is 100% reproducible and is clearly a regression introduced between v0.6.33 and v0.6.34.
@tjbck commented on GitHub (Oct 20, 2025):
Backend logs are required here; this most likely has to do with the Pipe Function implementation.