mirror of
https://github.com/open-webui/open-webui.git
synced 2026-05-06 10:58:17 -05:00
issue: OWUI cannot parse tool call parameters when LLM makes multiple tool calls at once #5709
Originally created by @mlaihk on GitHub (Jul 6, 2025).
Check Existing Issues
Installation Method
Docker
Open WebUI Version
0.6.15
Ollama Version (if applicable)
0.95
Operating System
Windows 11 24H2
Browser (if applicable)
MSEdge
Confirmation
Expected Behavior
When the LLM makes multiple tool calls in a single iteration, Open WebUI should correctly parse them into separate tool calls and execute each one.
Actual Behavior
Open WebUI fails to parse the multiple tool calls made by the LLM and throws an error.
The Docker container log contains this error, directly related to the multiple tool calls:
2025-07-06 11:18:31.178 | ERROR | open_webui.utils.middleware:post_response_handler:2080 - Error parsing tool call arguments: {"include_urls": true, "url": "https://rog.asus.com/hk-en/laptops/rog-zephyrus/rog-zephyrus-g14-2025-ga403um-qs013w/"}{"include_urls": true, "url": "https://rog.asus.com/hk-en/laptops/rog-zephyrus/rog-zephyrus-g16-2025-gu605/"} - {}
Steps to Reproduce
Use Ollama-hosted Qwen3:14b or Gemma3:12b. Ask the LLM a question that will result in multiple tool calls.
Example:
use playwright and access links from the search and assemble complete specifications on all the 2025 Zephyrus G14 and G16 available. Make sure the models are 2025 models, with the correct CPUs and dGPUs.
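For reference, the error log above shows the failure mode: the arguments of two tool calls arrive as two JSON objects concatenated back-to-back (`{...}{...}`), which a plain `json.loads` rejects. A minimal sketch of how such a string can be split into separate argument objects using Python's standard `json.JSONDecoder.raw_decode` (the helper name and the shortened URLs are hypothetical; this is not Open WebUI's actual implementation):

```python
import json


def parse_concatenated_json(s: str) -> list:
    """Split a string of back-to-back JSON objects into a list.

    Hypothetical helper illustrating one way to recover from the
    concatenated-arguments case seen in the error log above.
    """
    decoder = json.JSONDecoder()
    objs, idx = [], 0
    s = s.strip()
    while idx < len(s):
        # raw_decode returns the parsed value and the index where it ended
        obj, end = decoder.raw_decode(s, idx)
        objs.append(obj)
        # skip any whitespace between adjacent objects
        while end < len(s) and s[end].isspace():
            end += 1
        idx = end
    return objs


# Shortened stand-ins for the two argument payloads from the log:
raw = ('{"include_urls": true, "url": "https://rog.asus.com/a"}'
       '{"include_urls": true, "url": "https://rog.asus.com/b"}')
print(parse_concatenated_json(raw))  # a list of two dicts
```

`raw_decode` stops at the end of the first valid JSON value instead of raising on trailing data, which is what makes iterating over a concatenated stream possible.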
Logs & Screenshots
As above
Additional Information
No response
@tjbck commented on GitHub (Jul 6, 2025):
Local models are not reliable for tool calling, especially at sizes below 14B. I'd encourage you to use higher-end models like gpt-4.1 for such tasks.