[GH-ISSUE #9414] The model does nothing with the result of a native tool call #31024

Closed
opened 2026-04-25 05:06:44 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @Banbury on GitHub (Feb 5, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/9414

Bug Report

Installation Method

Docker

Environment

  • Open WebUI Version: 0.5.9

  • Ollama (if applicable): 0.5.7

  • Operating System: Windows 11

  • Browser (if applicable): Firefox 134.0.2

Confirmation:

  • [x] I have read and followed all the instructions provided in the README.md.
  • [x] I am on the latest version of both Open WebUI and Ollama.
  • [x] I have included the browser console logs.
  • [x] I have included the Docker container logs.
  • [x] I have provided the exact steps to reproduce the bug in the "Steps to Reproduce" section below.

Expected Behavior:

The result of a native tool call is handed to the model for further processing.

Actual Behavior:

The model does nothing with the result of a native tool call.

Description

Bug Summary:
Native tool calling works in principle, but the model is never called again with the tool's result, so no answer is generated.
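For context, the missing step is the second half of the standard OpenAI-style tool-calling loop: execute the requested tool, append the result as a `role="tool"` message, and call the model again. The sketch below is illustrative only, not Open WebUI's actual middleware; the `handle_tool_calls` helper and this `dice_roll` implementation are hypothetical stand-ins.

```python
import json
import random


def dice_roll(dice_str: str) -> int:
    """Roll dice described like '2d10' or '3d10+2' (illustrative tool)."""
    body, _, mod = dice_str.partition("+")
    count, _, sides = body.partition("d")
    total = sum(random.randint(1, int(sides)) for _ in range(int(count or 1)))
    return total + (int(mod) if mod else 0)


def handle_tool_calls(messages, tool_calls, tools):
    """Execute each native tool call and append its result as a
    role='tool' message, so a follow-up completion request can see it."""
    messages.append({"role": "assistant", "tool_calls": tool_calls})
    for call in tool_calls:
        fn = call["function"]
        args = json.loads(fn["arguments"])
        result = tools[fn["name"]](**args)
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": str(result),
        })
    # The expected behavior: send `messages` back to the model here
    # for a second completion, which produces the final answer.
    return messages
```

The bug described here is that the second completion request never happens, so the `role="tool"` result is stored but never consumed.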

Reproduction Details

Steps to Reproduce:

  • Activate native tool calling for a model (e.g. Llama 3.1 or 3.2).
  • Open a chat and activate a tool.
  • Ask the model something that triggers the function call.
  • Open WebUI creates an empty message that contains only the details of the tool call; the model does no further processing.
```
<details type="tool_calls" done="true" content="[{&quot;index&quot;: 0, &quot;id&quot;: &quot;call_2cf4317d-058f-4b18-b592-578355b950fc&quot;, &quot;type&quot;: &quot;function&quot;, &quot;function&quot;: {&quot;name&quot;: &quot;dice_roll&quot;, &quot;arguments&quot;: &quot;{&#x27;dice_str&#x27;: &#x27;2d10&#x27;}&quot;}}]" results="[{&quot;tool_call_id&quot;: &quot;call_2cf4317d-058f-4b18-b592-578355b950fc&quot;, &quot;content&quot;: 16}]">
<summary>Tool Executed</summary>

> dice_roll: 16
</details>
```
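Decoding the HTML-escaped `content` and `results` attributes of that `<details>` element confirms that the tool result (16) is present in the stored message, even though no model response follows it. A small standard-library sketch; the attribute strings are copied verbatim from the snippet above:

```python
import html
import json

# Attribute values from the <details type="tool_calls"> element,
# still HTML-escaped exactly as Open WebUI emitted them.
content_attr = (
    "[{&quot;index&quot;: 0, &quot;id&quot;: "
    "&quot;call_2cf4317d-058f-4b18-b592-578355b950fc&quot;, "
    "&quot;type&quot;: &quot;function&quot;, &quot;function&quot;: "
    "{&quot;name&quot;: &quot;dice_roll&quot;, &quot;arguments&quot;: "
    "&quot;{&#x27;dice_str&#x27;: &#x27;2d10&#x27;}&quot;}}]"
)
results_attr = (
    "[{&quot;tool_call_id&quot;: "
    "&quot;call_2cf4317d-058f-4b18-b592-578355b950fc&quot;, "
    "&quot;content&quot;: 16}]"
)

tool_calls = json.loads(html.unescape(content_attr))
results = json.loads(html.unescape(results_attr))

print(tool_calls[0]["function"]["name"])  # dice_roll
print(results[0]["content"])              # 16
```

So the call and its result are both recorded and correlated by `tool_call_id`; the missing piece is the follow-up completion that would turn the result into an answer.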

Logs and Screenshots

Browser Console Logs:
[console-export-2025-2-5_16-42-23.txt](https://github.com/user-attachments/files/18674945/console-export-2025-2-5_16-42-23.txt)

Docker Container Logs:
```
2025-02-05 16:43:56 open-webui | INFO: 192.168.32.1:42608 - "POST /api/v1/chats/ea42d230-9aa4-4bc7-acbb-8ac2fe7b9490 HTTP/1.1" 200 OK
2025-02-05 16:43:56 open-webui | INFO: 192.168.32.1:42608 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
2025-02-05 16:43:56 open-webui | INFO [open_webui.utils.middleware] tools={'dice_roll': {'toolkit_id': 'dice_roller', 'callable': <function Tools.dice_roll at 0x7f18997a87c0>, 'spec': {'name': 'dice_roll', 'description': '\n Rolls a number of dice and returns the result.\n ', 'parameters': {'properties': {'dice_str': {'description': 'A text describing a how many dice should be rolled. E.g. "d100" is one 100-sided die, "2d6" are two six sided dice. A modifier also can be added, e.g. "3d10+2".', 'type': 'string'}}, 'required': ['dice_str'], 'type': 'object'}}, 'pydantic_model': <class 'open_webui.utils.tools.dice_roll'>, 'file_handler': False, 'citation': False}}
2025-02-05 16:43:57 open-webui | INFO: 192.168.32.1:42608 - "POST /api/chat/completions HTTP/1.1" 200 OK
2025-02-05 16:43:57 open-webui |
2025-02-05 16:43:57 open-webui |
2025-02-05 16:43:57 open-webui | {'index': 0, 'id': 'call_2cf4317d-058f-4b18-b592-578355b950fc', 'type': 'function', 'function': {'name': 'dice_roll', 'arguments': "{'dice_str': '2d10'}"}}
2025-02-05 16:43:57 open-webui |
2025-02-05 16:43:57 open-webui |
2025-02-05 16:43:57 open-webui | INFO: 192.168.32.1:42608 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
2025-02-05 16:43:57 open-webui | INFO: 192.168.32.1:42620 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
2025-02-05 16:43:57 open-webui | INFO: 192.168.32.1:42612 - "POST /api/chat/completed HTTP/1.1" 200 OK
2025-02-05 16:43:57 open-webui | INFO: 192.168.32.1:42608 - "POST /api/v1/chats/ea42d230-9aa4-4bc7-acbb-8ac2fe7b9490 HTTP/1.1" 200 OK
2025-02-05 16:43:57 open-webui | INFO: 192.168.32.1:42608 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
```


@tjbck commented on GitHub (Feb 5, 2025):

It is called, llama3 models are known for being unreliable.

Reference: github-starred/open-webui#31024