[GH-ISSUE #10067] Bug - OpenAI exposed API not compatible with external tools #31281

Closed
opened 2026-04-25 05:17:31 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @Seniorsimo on GitHub (Feb 15, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/10067

Bug Report

Installation Method

Docker

Environment

  • Open WebUI Version: v0.5.12
  • Ollama (if applicable): 0.5.7-0-ga420a45-dirty
  • Operating System: Windows 11

Confirmation:

  • I have read and followed all the instructions provided in the README.md.
  • I am on the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided the exact steps to reproduce the bug in the "Steps to Reproduce" section below.

Expected Behavior:

When the OpenAI-compatible API is used with external tools, it should behave like the official OpenAI API.

In my case I'm using the Open WebUI API from a LangChain agent. Switching from the official OpenAI API to Open WebUI should work without any compatibility issues.

Actual Behavior:

The API exposed by Open WebUI causes a series of consecutive problems and doesn't work as expected.

Description

Bug Summary:

After switching my project from OpenAI to the Open WebUI API, the following problems arose one after another. I managed to solve them in a local fork, and everything now seems to work in my case, so I'm opening this bug to track all the little changes needed to make it work.

Sorry if this is too verbose; I'll try to explain everything as best I can.

The problems found are, in order:

Usage stats don't follow the OpenAI standard

The usage stats exposed by the /api/chat/completions endpoint are as follows:

"usage":{
	"response_token/s":66.43,
	"prompt_token/s":1114.29,
	"total_duration":3083735297,
	"load_duration":2018708373,
	"prompt_eval_count":156,
	"prompt_eval_duration":140000000,
	"eval_count":38,
	"eval_duration":572000000,
	"approximate_total":"0h0m3s"
}

Instead, following the OpenAI standard, they should be:

"usage": {
    "prompt_tokens": 156, 
    "completion_tokens": 38, 
    "total_tokens": 194, 
    "completion_tokens_details": {
        "reasoning_tokens": 0, 
        "accepted_prediction_tokens": 0, 
        "rejected_prediction_tokens": 0
    }
}

This breaks any external tool that uses the exposed API's usage field. In my case, LangChain broke because no prompt_tokens, completion_tokens, or total_tokens were found in the usage dictionary.

Suggested fix:
Change the returned usage object to match the OpenAI standard.

d317085dd8
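The mapping from the Ollama-style fields to the OpenAI schema is mechanical. A minimal sketch (a hypothetical helper, not the actual Open WebUI code; field names are taken from the two responses above):

```python
def to_openai_usage(ollama_usage: dict) -> dict:
    """Map Ollama-style usage fields onto the OpenAI usage schema.

    prompt_eval_count / eval_count come from the Ollama response shown
    above; the output shape follows the OpenAI chat completions spec.
    """
    prompt_tokens = ollama_usage.get("prompt_eval_count", 0)
    completion_tokens = ollama_usage.get("eval_count", 0)
    return {
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "total_tokens": prompt_tokens + completion_tokens,
        "completion_tokens_details": {
            "reasoning_tokens": 0,
            "accepted_prediction_tokens": 0,
            "rejected_prediction_tokens": 0,
        },
    }
```

With the sample response above, this yields prompt_tokens 156, completion_tokens 38, and total_tokens 194, which is what LangChain expects to find.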

ChatMessage validation is incorrect

The ChatMessage validation for the input messages is incorrect. Take the messages from this example call:

"messages": [
	{
		"content": "You are a helpful assistant", 
		"role": "system"
	}, {
		"content": "It feels so cold today in NY.", 
		"role": "user"
	}, {
		"role": "assistant", 
		"tool_calls": [
			{
				"type": "function", 
				"id": "call_1f82a3f7-7fe3-45a9-9de7-e03949b83563", 
				"function": {
					"name": "weather", 
					"arguments": "{\"city\": \"New York\"}"
				}
			}
		]
	}, {
		"content": "The weather in New York is 37 degrees.", 
		"role": "tool", 
		"tool_call_id": "call_1f82a3f7-7fe3-45a9-9de7-e03949b83563"
	}
]

This is actually a valid OpenAI call to make after a tool is used. Open WebUI doesn't allow it because the third message has no content field.

Suggested Fix
Change the validator so that at least one of content or tool_calls must be defined

3f8c446f2f
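As a stdlib-only illustration of the relaxed rule (Open WebUI's actual models are Pydantic-based and differ; this is a hypothetical sketch), a message is valid when at least one of `content` or `tool_calls` is present:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ChatMessage:
    """Sketch of a chat message where `content` is optional as long as
    `tool_calls` is present, matching the OpenAI assistant/tool flow."""
    role: str
    content: Optional[str] = None
    tool_calls: Optional[list] = None
    tool_call_id: Optional[str] = None

    def __post_init__(self):
        # Relax the "content is required" rule: an assistant message may
        # omit `content` when it carries `tool_calls`.
        if self.content is None and self.tool_calls is None:
            raise ValueError("message needs `content` or `tool_calls`")
```

Under this rule, the third message in the example above (assistant with only tool_calls) validates, while a message with neither field is still rejected.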

Missing support for tool messages in the OpenAI-to-Ollama message conversion

During a request like the one above, no tool information was passed to Ollama: in the last two messages, tool_calls and tool_call_id are missing from the converted Ollama messages.

Suggested Fix
Add support for these kinds of messages

0758abc495
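A sketch of such a conversion (a hypothetical helper, not the actual Open WebUI code; the field names follow the example request above, and it assumes Ollama wants `arguments` as an object rather than the JSON string OpenAI uses):

```python
import json


def openai_to_ollama_message(msg: dict) -> dict:
    """Convert one OpenAI-style chat message to an Ollama-style one,
    carrying over the tool-related fields that were being dropped."""
    out = {"role": msg["role"], "content": msg.get("content") or ""}
    if "tool_calls" in msg:
        # Assumption: Ollama expects `arguments` as a parsed object,
        # while OpenAI sends it as a JSON-encoded string.
        out["tool_calls"] = [
            {
                "function": {
                    "name": call["function"]["name"],
                    "arguments": json.loads(call["function"]["arguments"]),
                }
            }
            for call in msg.get("tool_calls", [])
        ]
    if "tool_call_id" in msg:
        out["tool_call_id"] = msg["tool_call_id"]
    return out
```

Applied to the last two messages of the example call, this keeps the assistant's tool_calls and the tool message's tool_call_id instead of silently discarding them.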

Ollama can respond with a single message when stream=true

Finally, when making the call above with stream=true, Ollama responds with a single message, and the current Ollama-to-OpenAI conversion produces a message with null content.

Suggested Fix
Handle this corner case
c4617bea2e
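A minimal sketch of handling the corner case (hypothetical helper; it assumes an Ollama chunk shaped like `{"message": {...}, "done": ...}`): skip the content delta when it is null instead of emitting `"content": null`:

```python
def ollama_chunk_to_openai(chunk: dict) -> dict:
    """Convert an Ollama stream chunk to an OpenAI-style delta chunk,
    guarding against the single-message case where `content` is None."""
    message = chunk.get("message") or {}
    delta = {}
    content = message.get("content")
    if content:
        # Only include the content key when there is actual text;
        # a null content delta is what broke the client here.
        delta["content"] = content
    return {
        "object": "chat.completion.chunk",
        "choices": [
            {
                "index": 0,
                "delta": delta,
                "finish_reason": "stop" if chunk.get("done") else None,
            }
        ],
    }
```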

Reproduction Details

Steps to Reproduce:

This whole chain of errors can easily be reproduced by creating a new Python file with LangChain and using the standard OpenAI adapter to connect to the Open WebUI API. Creating a simple agent surfaced all of the problems above.

Code for reference:

from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain import hub
from typing import Annotated

llm = ChatOpenAI(
    model="qwen2.5:14b",
    api_key="sk-<YOUR_OPENWEBUI_API_KEY_HERE>",
    openai_api_base="http://localhost:8080/api",
    verbose=True
)

@tool
def weather(city: Annotated[str, "The city to inspect current weather"]) -> str:
    """Retrieve the weather for a city."""
    return f"The weather in {city} is 37 degrees."

tools = [weather]

prompt = hub.pull("hwchase17/openai-functions-agent")
agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

response = agent_executor.invoke({"input": "It feels so cold today in NY."})
print(response)

Logs and Screenshots

Browser Console Logs:
not applicable

Docker Container Logs:
INFO: 10.88.5.1:0 - "POST /api/chat/completions HTTP/1.1" 200 OK

There are no logs here: every problem results in a 200 response and a subsequent error in the client due to the various missing parts.

Screenshots/Screen Recordings (if applicable):
not applicable

Additional Information

I found all of this while trying to integrate a LangChain agent with Open WebUI, but I suppose other software could face the same problems when trying to use Open WebUI as an OpenAI-compatible API.

Exposing an OpenAI-compatible API is a great choice! Many other people may use this fantastic free tool as a backend for their projects, so from my point of view, making the exposed API 100% compatible with the official one is critical.

Let me know if anything I reported seems wrong to you.
I'm happy to help fix all of this, not only for my project but possibly for others too.

Reference: github-starred/open-webui#31281