[GH-ISSUE #12441] issue: Bad Request (400 no Body) from OpenAI Chat Completion API Endpoint in n8n-Tool-Integration #16603

Closed
opened 2026-04-19 22:29:55 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @floriankick on GitHub (Apr 4, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/12441

Check Existing Issues

  • I have searched the existing issues and discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

0.6.0

Ollama Version (if applicable)

0.6.2

Operating System

Linux

Browser (if applicable)

No response

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have listed steps to reproduce the bug in detail.

Expected Behavior

The API endpoint /ollama/v1/chat/completions should be compatible with the corresponding endpoint of the Ollama instance.

Actual Behavior

The Open WebUI API endpoint /ollama/v1/chat/completions does not accept a message with content=null, whereas the corresponding Ollama API endpoint /v1/chat/completions handles the same request without error.
This causes the n8n OpenAI Chat Model node to fail on the second request after calling a tool, because the tool-call message from the LLM (sent back as part of the conversation history within the context window) has a null content field.

Steps to Reproduce

Send a request to http://webui.instance/ollama/v1/chat/completions with the following body:

{
    "model": "<insert any model name>",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "calculate the square root of 55"},
        {"role": "assistant", "content": null, "tool_calls": [{"id": "call_0o4z52ag", "type": "function", "function": {"name": "calculator", "arguments": "{\"input\":\"sqrt(55)\"}"}}]},
        {"role": "tool", "content": "7.416198487095663", "tool_call_id": "call_0o4z52ag"}
    ],
    "stream": false
}

Any available model will do; the request is rejected by validation before it ever reaches the model.

For reference, send the same request to http://ollama.instance/v1/chat/completions, which works as expected.
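The reproduction above can be scripted. This is a minimal sketch using Python; the instance URL and model name are placeholders from the report, and the `requests` call is only illustrative (on Open WebUI 0.6.0 it should come back with HTTP 400). Note `content=None` on the assistant message that carries the tool call — that is the field the validator rejects.

```python
import json

# Placeholder from the report -- substitute your own instance URL.
WEBUI_URL = "http://webui.instance/ollama/v1/chat/completions"

# The exact payload from the report, as a Python dict.
payload = {
    "model": "<insert any model name>",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "calculate the square root of 55"},
        {
            # Assistant turn that invoked the tool: content is null,
            # which is valid per the OpenAI-style tool-calling convention.
            "role": "assistant",
            "content": None,
            "tool_calls": [
                {
                    "id": "call_0o4z52ag",
                    "type": "function",
                    "function": {
                        "name": "calculator",
                        "arguments": "{\"input\":\"sqrt(55)\"}",
                    },
                }
            ],
        },
        {
            "role": "tool",
            "content": "7.416198487095663",
            "tool_call_id": "call_0o4z52ag",
        },
    ],
    "stream": False,
}

if __name__ == "__main__":
    # Requires `pip install requests`.
    import requests

    resp = requests.post(WEBUI_URL, json=payload)
    print(resp.status_code, resp.text)
```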

Logs & Screenshots

Docker logs:

2025-04-04 07:56:20.268 | ERROR    | open_webui.routers.ollama:generate_openai_chat_completion:1305 - Request data -----> {'model': 'mistral-small:latest', 'messages': [{'role': 'system', 'content': 'You are a helpful assistant'}, {'role': 'user', 'content': 'calculate the square root of 55'}, {'role': 'assistant', 'content': None, 'tool_calls': [{'id': 'call_0o4z52ag', 'type': 'function', 'function': {'name': 'calculator', 'arguments': '{"input":"sqrt(55)"}'}}]}, {'role': 'tool', 'content': '7.416198487095663', 'tool_call_id': 'call_0o4z52ag'}], 'stream': False} - {}
2025-04-04 07:56:20.269 | ERROR    | open_webui.routers.ollama:generate_openai_chat_completion:1309 - 2 validation errors for OpenAIChatCompletionForm
messages.2.content.str
  Input should be a valid string [type=string_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.10/v/string_type
messages.2.content.list[OpenAIChatMessageContent]
  Input should be a valid list [type=list_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.10/v/list_type - {}
Traceback (most recent call last):

  ... see attached file for full log!

[docker-logs.txt](https://github.com/user-attachments/files/19600487/docker-logs.txt)

pydantic_core._pydantic_core.ValidationError: 2 validation errors for OpenAIChatCompletionForm
messages.2.content.str
  Input should be a valid string [type=string_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.10/v/string_type
messages.2.content.list[OpenAIChatMessageContent]
  Input should be a valid list [type=list_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.10/v/list_type
2025-04-04 07:56:20.277 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 172.18.0.1:33412 - "POST /ollama/v1/chat/completions HTTP/1.1" 400 - {}
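Until the validator is relaxed upstream, one client-side workaround is to coerce null content to an empty string before resending the conversation history. A sketch (the helper name is mine, not part of Open WebUI, and whether a model treats empty-string content identically to null may vary):

```python
def normalize_messages(messages):
    """Return a copy of `messages` with content=None replaced by "".

    Open WebUI's OpenAIChatCompletionForm requires message content to be
    a string or a list, so null content on tool-calling assistant
    messages must be replaced before the request is sent.
    """
    normalized = []
    for msg in messages:
        msg = dict(msg)  # shallow copy; don't mutate the caller's history
        if msg.get("content") is None:
            msg["content"] = ""
        normalized.append(msg)
    return normalized
```

Applied to the history from the report, the failing assistant message becomes `"content": ""`, which the form accepts as a string; all other messages pass through unchanged.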

Additional Information

No response

GiteaMirror added the bug label 2026-04-19 22:29:55 -05:00
Author
Owner

@gaby commented on GitHub (Apr 4, 2025):

@floriankick I think that's the wrong endpoint. You should use https://yourhostname/api

Reference: github-starred/open-webui#16603