Using tools generates errors #1903

Closed
opened 2025-11-11 14:56:03 -06:00 by GiteaMirror · 0 comments

Originally created by @raphael1-w on GitHub (Aug 27, 2024).

Bug Report

Installation Method

Docker

Environment

  • Open WebUI Version: v0.3.15

  • Ollama (if applicable): v0.3.5

  • Operating System: Unraid 6.12.10

  • Browser (if applicable): Brave 1.69.153

Confirmation:

  • [x] I have read and followed all the instructions provided in the README.md.
  • [x] I am on the latest version of both Open WebUI and Ollama.
  • [ ] I have included the browser console logs.
  • [x] I have included the Docker container logs.
  • [x] I have provided the exact steps to reproduce the bug in the "Steps to Reproduce" section below.

Expected Behavior:

The model should invoke the selected tools and incorporate their results into its response.

Actual Behavior:

The model does not use the tools and generates a normal response instead.

Description

Bug Summary:
When a tool is selected, errors are logged during the tool-calling step and the model never uses the tools.

Reproduction Details

Steps to Reproduce:

  1. Create a tool (I used the default tool).
  2. Select the tool in the model settings to allow the model to use it.
  3. Start a new chat using said model and activate the tool.
  4. Send a request.
  5. Review the response and the logs.

Logs and Screenshots

[Screenshot: no tool use by the model with tool usage turned on]

Docker Container Logs:

INFO:     218.103. - "POST /api/v1/chats/new HTTP/1.1" 200 OK
INFO:     218.103.- "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
INFO  [main] tools={'calculator': {'toolkit_id': 'get_user_name_email_id_time_weather_and_calculate', 'callable': <function apply_extra_params_to_tool_function.<locals>.new_function at 0x14e84ff3c040>, 'spec': {'name': 'calculator', 'description': 'Calculate the result of an equation.', 'parameters': {'type': 'object', 'properties': {'equation': {'type': 'string', 'description': 'The equation to calculate.'}}, 'required': ['equation']}}, 'pydantic_model': <class 'utils.schemas.calculator'>, 'file_handler': False, 'citation': False}, 'get_current_time': {'toolkit_id': 'get_user_name_email_id_time_weather_and_calculate', 'callable': <function apply_extra_params_to_tool_function.<locals>.new_function at 0x14e84ff96980>, 'spec': {'name': 'get_current_time', 'description': 'Get the current time in a more human-readable format.', 'parameters': {'type': 'object', 'properties': {}, 'required': []}}, 'pydantic_model': <class 'utils.schemas.get_current_time'>, 'file_handler': False, 'citation': False}, 'get_user_name_and_email_and_id': {'toolkit_id': 'get_user_name_email_id_time_weather_and_calculate', 'callable': <function apply_extra_params_to_tool_function.<locals>.new_function at 0x14e8503a2b60>, 'spec': {'name': 'get_user_name_and_email_and_id', 'description': 'Get the user name, Email and ID from the user object.', 'parameters': {'type': 'object', 'properties': {}, 'required': []}}, 'pydantic_model': <class 'utils.schemas.get_user_name_and_email_and_id'>, 'file_handler': False, 'citation': False}}
INFO  [main] tools_function_calling_prompt='Available Tools: [{"name": "calculator", "description": "Calculate the result of an equation.", "parameters": {"type": "object", "properties": {"equation": {"type": "string", "description": "The equation to calculate."}}, "required": ["equation"]}}, {"name": "get_current_time", "description": "Get the current time in a more human-readable format.", "parameters": {"type": "object", "properties": {}, "required": []}}, {"name": "get_user_name_and_email_and_id", "description": "Get the user name, Email and ID from the user object.", "parameters": {"type": "object", "properties": {}, "required": []}}]\nReturn an empty string if no tools match the query. If a function tool matches, construct and return a JSON object in the format {"name": "functionName", "parameters": {"requiredFunctionParamKey": "requiredFunctionParamValue"}} using the appropriate tool and its parameters. Only return the object and limit the response to the JSON object without additional text.'
INFO  [apps.ollama.main] url: http://192.168.1.2:port
ERROR [main] Error: Expecting value: line 1 column 1 (char 0)
Traceback (most recent call last):
  File "/app/backend/main.py", line 421, in chat_completion_tools_handler
    result = json.loads(content)
             ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
INFO  [apps.ollama.main] url: http://192.168.1.2:port
No requirements found in frontmatter.
Loaded module: get_user_name_email_id_time_weather_and_calculate
INFO:     218.103. - "POST /ollama/api/chat HTTP/1.1" 200 OK
INFO:     218.103. - "POST /api/chat/completed HTTP/1.1" 200 OK
INFO:     218.103. - "POST /api/v1/chats/836b34ab-50a3-4956-9fac-336700a754fe HTTP/1.1" 200 OK
INFO:     218.103. - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
INFO:     218.103. - "POST /api/v1/chats/836b34ab-50a3-4956-9fac-336700a754fe HTTP/1.1" 200 OK
INFO:     218.103. - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
INFO  [apps.ollama.main] url: http://192.168.1.2:port
generate_title
gemma2:2b
INFO:     218.103 - "POST /api/task/title/completions HTTP/1.1" 200 OK
INFO:     218.103. - "POST /api/v1/chats/836b34ab-50a3-4956-9fac-336700a754fe HTTP/1.1" 200 OK
INFO:     218.103. - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
INFO:     218.103. - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
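The traceback points at the likely mechanism: the `tools_function_calling_prompt` tells the model to "return an empty string if no tools match the query", but `chat_completion_tools_handler` in `main.py` then passes the model's reply straight to `json.loads`. Calling `json.loads("")` raises exactly the `Expecting value: line 1 column 1 (char 0)` error seen above. The helper below is a hypothetical sketch (not Open WebUI's actual code) showing how an empty or non-JSON reply could be treated as "no tool call" instead of raising:

```python
import json


def parse_tool_call(content: str):
    """Parse a model reply produced by a tool-calling prompt.

    The prompt allows two shapes of reply: an empty string ("no tool
    matches") or a JSON object like
    {"name": "calculator", "parameters": {"equation": "2+2"}}.
    json.loads("") raises JSONDecodeError, so an empty reply must be
    handled before parsing. This function name and structure are
    illustrative assumptions, not the project's real implementation.
    """
    content = content.strip()
    if not content:
        # Model declined to call a tool -- not an error condition.
        return None
    try:
        result = json.loads(content)
    except json.JSONDecodeError:
        # Model answered with plain text instead of the JSON object.
        return None
    if isinstance(result, dict) and "name" in result:
        return result
    return None


# An empty reply parses to None instead of raising, while a valid
# tool call comes back as a dict.
print(parse_tool_call(""))
print(parse_tool_call('{"name": "calculator", "parameters": {"equation": "2+2"}}'))
```

Models that ignore the "only return the JSON object" instruction (smaller instruct models often wrap the object in prose or a code fence) would hit the same branch, which may explain why the failure depends on the model used.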

Additional Information

Model: llama3.1:8b-instruct-q4_K_S (from Ollama), with a system prompt.

System prompt: You're a virtual assistant who answers questions by the user. The user's name is  {{USER_NAME}}. When prompted, answer the user directly. Embody the role of the most qualified subject matter experts. Break down complexities into smaller steps with clear reasoning. Use headings and subheadings to highlight the main points and sections of the response. Avoid disclaimers about your level of expertise. Do not recommend external information sources. If asked to code, comment on the code clearly. The default coding language is Python. Do not use italics in your response. Do not hallucinate. Do not make up factual information. The current date is {{CURRENT_DATE}}.

Reference: github-starred/open-webui#1903