[GH-ISSUE #16788] Rendering bug when the response from the model contains <think>
Originally created by @alanxmay on GitHub (Aug 21, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/16788
Check Existing Issues
Installation Method
Docker
Open WebUI Version
0.6.22
Ollama Version (if applicable)
No response
Operating System
Ubuntu 22.04
Browser (if applicable)
Chrome
Confirmation
Expected Behavior
When a non-reasoning model's output contains a literal `<think>` tag, it should not be rendered as a thinking block, just like in Qwen Chat:

Actual Behavior
Steps to Reproduce
USER: `repeat <think> 5 times`
Logs & Screenshots
Additional Information
No response
@alanxmay commented on GitHub (Aug 21, 2025):
related issue #15461
@tjbck commented on GitHub (Aug 21, 2025):
This is not a trivial fix, as a lot of Ollama models depend on this behaviour. With that being said, we can investigate a way to make this an option in the model editor.
@alanxmay commented on GitHub (Aug 22, 2025):
The key issue is how to distinguish reasoning content from the response returned by a provider API. Since many providers use vLLM as the inference engine, their responses generally support the `reasoning_content` field, so a better solution may be to make this an option per provider.
For example, in the DeepSeek API the reasoning content is also located in `chunk.choices[0].delta.reasoning_content`. For Ollama with `stream=True`, the reasoning content lives in `chunk.message.thinking`.
@tjbck commented on GitHub (Aug 26, 2025):
Closing in favour of #16930
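For reference, the provider-specific field layouts discussed in the thread can be sketched as follows. This is a minimal illustration using plain dicts as stand-ins for streamed chunk deltas; the helper name `split_stream` is hypothetical, while the `reasoning_content` and `thinking` keys mirror the DeepSeek/vLLM and Ollama fields mentioned above.

```python
def split_stream(deltas):
    """Separate reasoning text from answer text in a streamed response.

    Each item is a dict standing in for one streamed delta:
    - DeepSeek/vLLM-style APIs expose reasoning in
      chunk.choices[0].delta.reasoning_content
    - Ollama (stream=True) exposes it in chunk.message.thinking
    Here both are modelled as optional dict keys.
    """
    reasoning_parts, answer_parts = [], []
    for delta in deltas:
        # Reasoning tokens come from a dedicated field, never from
        # pattern-matching the content stream itself.
        reasoning = delta.get("reasoning_content") or delta.get("thinking")
        if reasoning:
            reasoning_parts.append(reasoning)
        # Ordinary answer tokens.
        if delta.get("content"):
            answer_parts.append(delta["content"])
    return "".join(reasoning_parts), "".join(answer_parts)


# Example: a literal "<think>" inside the content field stays in the
# answer, since only the dedicated reasoning fields count as reasoning.
reasoning, answer = split_stream([
    {"reasoning_content": "Let me count. "},
    {"thinking": "Five repetitions."},
    {"content": "<think> <think> <think> <think> <think>"},
])
```

With a per-provider option along these lines, the UI would render a thinking block only when the dedicated reasoning field is present, rather than treating `<think>` tags inside ordinary content as reasoning.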