[GH-ISSUE #14577] issue: Tool calling BROKEN #55967

Closed
opened 2026-05-05 18:24:10 -05:00 by GiteaMirror · 12 comments

Originally created by @KyleF0X on GitHub (Jun 1, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/14577

Check Existing Issues

  • I have searched the existing issues and discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

v0.6.13

Ollama Version (if applicable)

0.9.0

Operating System

Windows 11

Browser (if applicable)

Firefox

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

For qwen3:30b-a3b to call tools and answer the question.

As can be seen here with Claude 3.7 via OpenRouter, it can correctly use the tool.

![Image](https://github.com/user-attachments/assets/4140e85b-c730-4a97-9a71-f95dc61e8d4e)

Actual Behavior

Half the time qwen3:30b-a3b says it can't use the tool,
![Image](https://github.com/user-attachments/assets/b9239244-1f08-495f-8b8e-e2b5895c993f)

the other half of the time it makes the tool call, but never answers.

![Image](https://github.com/user-attachments/assets/60365923-159a-4861-89c1-07bad85866a2)

Steps to Reproduce

install the LLM: `ollama run qwen3:30b-a3b`

set up the mcpo time server:

```
},
"time": {
  "command": "uvx",
  "args": ["mcp-server-time", "--local-timezone=America/Los_Angeles"]
}
```

have mcpo set up to share the tool
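For context, the fragment above slots into a complete mcpo `config.json` roughly like this (a sketch; the `mcpServers` wrapper and surrounding braces follow mcpo's Claude-Desktop-style config format and are not taken from the report):

```json
{
  "mcpServers": {
    "time": {
      "command": "uvx",
      "args": ["mcp-server-time", "--local-timezone=America/Los_Angeles"]
    }
  }
}
```

mcpo is then launched against this file, e.g. `uvx mcpo --port 8000 --config config.json`.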

![Image](https://github.com/user-attachments/assets/ea552f8f-be88-48c2-a937-ebe89ae8cb2d)

Select qwen3:30b-a3b in Open WebUI as the chat model

change the Chat Controls "Function Calling" setting to "Native"

Ask for the time in your city

Logs & Screenshots

[openwebui console logs.txt](https://github.com/user-attachments/files/20538318/openwebui.console.logs.txt)

Additional Information

Since I'm using Claude via the openrouter.ai API, and qwen3:30b-a3b is a local LLM, perhaps this has something to do with Ollama?

GiteaMirror added the bug label 2026-05-05 18:24:10 -05:00

@KyleF0X commented on GitHub (Jun 1, 2025):

When I try to do a time tool call with DeepSeek-R1-0528-Qwen3-8B-GGUF:Q5_K_XL, I get a different error:

![Image](https://github.com/user-attachments/assets/7a96d2ca-b708-4870-a4b4-46d5acc46888)


@EntropyYue commented on GitHub (Jun 1, 2025):

Claude can use tools, which indicates that there is no issue with tool invocation itself. The reason Qwen3-30b cannot use tools may be that the model has too few activated parameters.


@EntropyYue commented on GitHub (Jun 1, 2025):

The problem with the DeepSeek distillation model is that it does not support tool calls under the default template.


@KyleF0X commented on GitHub (Jun 1, 2025):

> The problem with the Deepseek distillation model is that it does not support tool calls under the default template

Yes, from testing tonight I have determined that DeepSeek 0528 tool calls are not supported yet.

> Claude can use tools, which indicates that there are no issues with tool invocation. The reason why Qwen3-30b cannot be use tools may be due to too few activation parameters in the model.

I have directly tested qwen3:30b-a3b in Ollama via CMD, and tools work, so the issue is in Open WebUI. Still investigating.
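For anyone wanting to repeat that direct test over HTTP instead of the CLI, Ollama's `/api/chat` endpoint accepts an OpenAI-style `tools` array; a request body along these lines (the tool schema below is an illustrative stand-in for what mcpo's time server exposes, not Open WebUI's actual payload) can be POSTed to `http://localhost:11434/api/chat`:

```json
{
  "model": "qwen3:30b-a3b",
  "stream": false,
  "messages": [{"role": "user", "content": "What time is it in Los Angeles?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_current_time",
      "description": "Get the current time in a given IANA timezone",
      "parameters": {
        "type": "object",
        "properties": {"timezone": {"type": "string"}},
        "required": ["timezone"]
      }
    }
  }]
}
```

If the model handles tools correctly, the response message should contain a `tool_calls` entry rather than plain text.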


@EntropyYue commented on GitHub (Jun 1, 2025):

I used qwen3-30b for testing, and it works fine in my environment

![Image](https://github.com/user-attachments/assets/c131eaa0-4f2a-431d-8321-f8ae1b4c42cf)


@taylorwilsdon commented on GitHub (Jun 1, 2025):

qwen3:30b works with default tool calling through mcpo for me just fine:

![Image](https://github.com/user-attachments/assets/aac55c45-62bb-4b8b-99cd-700b1a656a08)

The issue seems to be that it falls apart when trying to decide whether to invoke native tool calling, since it isn't getting the full output into the context. My guess is you've got the context set to the default (super small), and it's running out of tokens while thinking and then doesn't know where it is.

![Image](https://github.com/user-attachments/assets/7e2b037b-31b3-44a7-848c-219931417515)

I've confirmed that is indeed the case: it works fine when I increase num_ctx from 2048 to 12000.

![Image](https://github.com/user-attachments/assets/ca8d2adc-98fc-4225-8156-ef1516f444b0)
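For anyone hitting the same wall, one way to make that fix persistent (a sketch, not from the thread; the derived model name is just an example) is a custom Ollama Modelfile that bakes in the larger context window:

```
FROM qwen3:30b-a3b
PARAMETER num_ctx 12000
```

Then `ollama create qwen3-30b-ctx12k -f Modelfile` and select the new model in Open WebUI. Alternatively, num_ctx can be raised per-model in Open WebUI's advanced parameters.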

@peskyAdmin commented on GitHub (Jun 2, 2025):

It is not working for me with qwen3:30b. I am also able to add a tool via localhost at the user level, but the connection fails at the global level.


@taylorwilsdon commented on GitHub (Jun 2, 2025):

Global tool servers have to be enabled in the chat by clicking the "Plus" icon unless they're explicitly forced on for the specific model in the model configuration settings. User added tools are automatically enabled for all models. Just a different distribution method, as you don't necessarily want to force every tool on for every user and every model when you add a global tool option.


@peskyAdmin commented on GitHub (Jun 2, 2025):

I was able to get the tool configured in the admin global settings and enabled it with the plus. A few comments:

  1. I was able to configure the tool for the user (there is only one user, the admin) with localhost.
  2. localhost did not work with the global config; it only worked with the Docker domain. Using localhost produced this error in the logs: `aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host localhost:8000 ssl:default [Connect call failed ('127.0.0.1', 8000)]`
  3. No matter how I use the tool, at the user level or globally with the tool selected, it will not use the tool.

I have manually verified that the mcpo server is working, and verified the MCP server is working from the mcpo UI. I have tried with llama3.2:latest and qwen3:30b.

For what it's worth, I'm using the Gitea MCP server.
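A likely explanation for point 2 (my read, not confirmed in the thread): global tool servers are contacted from inside the Open WebUI container, where `localhost` refers to the container itself rather than the host machine where mcpo listens on port 8000. On Docker Desktop the host is reachable as `host.docker.internal`; on Linux the same alias can be added when starting the container:

```
# Tool server URL to use instead of http://localhost:8000
http://host.docker.internal:8000

# Linux only: expose the host alias inside the container (Docker 20.10+)
docker run --add-host=host.docker.internal:host-gateway ... ghcr.io/open-webui/open-webui:main
```

The `...` stands in for the rest of the usual `docker run` flags.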


@KyleF0X commented on GitHub (Jun 2, 2025):

OK, so in my ADMIN account, qwen3:30b-a3b CAN'T call tools. It gets confused, thinks it can't use tools, then tries and hangs.

BUT, i just created a test user account, and it calls the tool no problem

![Image](https://github.com/user-attachments/assets/8e6e7c0c-714a-412b-8fa2-080622a8b19e)

So... is there something broken with the admin user?


@KyleF0X commented on GitHub (Jun 2, 2025):

> Global tool servers have to be enabled in the chat by clicking the "Plus" icon unless they're explicitly forced on for the specific model in the model configuration settings. User added tools are automatically enabled for all models. Just a different distribution method, as you don't necessarily want to force every tool on for every user and every model when you add a global tool option.

Not sure what you mean by "clicking the Plus icon"?

![Image](https://github.com/user-attachments/assets/6cbbbf40-ac1b-4ede-95c7-fc2230550c8e)

Here are my tool configs

![Image](https://github.com/user-attachments/assets/4acd472f-dfcd-4c00-9f22-f3c241e85c71)


@peskyAdmin commented on GitHub (Jun 2, 2025):

I've configured the tool both as global and as user. I've tried a few different models; none will use the tool.

![Image](https://github.com/user-attachments/assets/7cb6a2ad-9282-4549-95f1-d4bda620eb3d)
![Image](https://github.com/user-attachments/assets/d805d48e-19c4-4782-898e-54885f8c9eb6)

Reference: github-starred/open-webui#55967