[GH-ISSUE #16721] issue: Continuous Repetition of Tool-Use Responses #18020
Originally created by @ziozzang on GitHub (Aug 19, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/16721
Check Existing Issues
Installation Method
Docker
Open WebUI Version
0.6.22
Ollama Version (if applicable)
No response
Operating System
Linux
Browser (if applicable)
No response
Confirmation
Expected Behavior
When I use GLM-4.5 with vLLM for tool use, an unusual phenomenon occurs: a specific phrase related to tool use is repeated continuously during the process.
Upon investigation, I have found that this issue occurs under the following conditions:
Actual Behavior
When steps 3 and 4 are repeated, the response from step 2 is output again between each repetition. Since this content does not exist in the actual API-level LLM request, it appears to be an issue with Open WebUI's output handling. The message should be displayed only once, not output repeatedly.
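To make the suspected failure mode concrete, here is a minimal sketch (hypothetical names and structure, not Open WebUI's actual code) of how this kind of repetition can arise in a multi-round tool-use loop: if the renderer re-emits the full accumulated assistant content on every round instead of only the new delta, the step-2 status text reappears between each tool call.

```python
# Hypothetical illustration only -- not Open WebUI's actual rendering code.
# If a UI re-emits everything accumulated so far on each tool-call round,
# the status text from step 2 is shown again between every repetition of
# steps 3 and 4, matching the reported symptom.

rounds = [
    "I will search the mailbox now.",       # step 2: status text emitted
                                            # outside the tool-call tag
    "<tool_call>search_email</tool_call>",  # step 3: tool invocation
    "<tool_call>fetch_email</tool_call>",   # step 4: tool invocation
]

accumulated = ""
for chunk in rounds:                        # buggy: re-render the whole buffer
    accumulated += chunk + "\n"
    print("--- render ---")
    print(accumulated)                      # step-2 text prints every round

for chunk in rounds:                        # correct: render each chunk once
    print(chunk)
```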
Steps to Reproduce
Docker, Open WebUI v0.6.22.
This is the same issue:
https://www.reddit.com/r/LocalLLaMA/comments/1mekoy8/agentic_email_workflow_inside_of_openwebui/
Unlike other models, the GLM-4.5 model, due to its specific characteristics, generates its current status outside of the tool-call tag during tool use. Even with this model, tool invocation and usage work without issues; the only problem is the repeated output on the Open WebUI display.
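As a client-side mitigation, one could collapse repeated occurrences of the same out-of-tag status text before display. The sketch below is an assumption-laden illustration: the `<tool_call>...</tool_call>` tag name and the exact output shape are guesses about GLM-4.5's chat template, not confirmed by this issue.

```python
import re

# Hypothetical workaround: keep each non-tool-call text segment only on its
# first occurrence. The <tool_call> tag name is an assumption about the
# GLM-4.5 chat template, not a confirmed detail from this issue.
TOOL_CALL_RE = re.compile(r"<tool_call>.*?</tool_call>", re.DOTALL)

def dedupe_status_text(raw: str) -> str:
    """Drop repeated status text while always keeping the tool calls."""
    seen: set[str] = set()
    out: list[str] = []
    pos = 0
    for m in TOOL_CALL_RE.finditer(raw):
        segment = raw[pos:m.start()].strip()
        if segment and segment not in seen:  # first occurrence only
            seen.add(segment)
            out.append(segment)
        out.append(m.group(0))               # tool calls are never dropped
        pos = m.end()
    tail = raw[pos:].strip()
    if tail and tail not in seen:
        out.append(tail)
    return "\n".join(out)

raw = (
    "Searching the mailbox now.\n"
    "<tool_call>search_email</tool_call>\n"
    "Searching the mailbox now.\n"           # repeated status text
    "<tool_call>fetch_email</tool_call>\n"
)
print(dedupe_status_text(raw))               # status line appears once
```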
Logs & Screenshots
No log, but I am posting a screenshot:
https://www.reddit.com/r/LocalLLaMA/comments/1mekoy8/agentic_email_workflow_inside_of_openwebui/
Additional Information
No response
@ziozzang commented on GitHub (Aug 19, 2025):
LLM-level temporary solution:
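As a loose illustration of what an LLM-level workaround could look like (the prompt wording below is purely an assumption, not @ziozzang's actual fix), a system-prompt instruction can ask the model not to narrate its status outside of tool calls:

```python
# Purely illustrative sketch of a prompt-level mitigation; the wording is an
# assumption, not the workaround from the original comment.
SYSTEM_PROMPT = (
    "When you call a tool, emit only the tool call itself. "
    "Do not narrate what you are about to do outside of the tool-call tags."
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "Summarize my unread invoices."},
]
```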
@tjbck commented on GitHub (Aug 19, 2025):
Most likely an issue with the model itself; we're unable to reproduce it with OpenAI models. Keep us updated.