[GH-ISSUE #14479] issue: When system-prompt in OpenAI model is set, there will be an error 400 #55937

Closed
opened 2026-05-05 18:18:29 -05:00 by GiteaMirror · 24 comments

Originally created by @Poxel2 on GitHub (May 29, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/14479

Check Existing Issues

  • I have searched the existing issues and discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Git Clone

Open WebUI Version

0.6.12

Ollama Version (if applicable)

No response

Operating System

Docker

Browser (if applicable)

No response

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

The 'system' prompt in a model should be used normally and should not cause an error with the OpenAI API.

Actual Behavior

Since 0.6.12, there is a 400 error when using a model with a predefined 'system' prompt. After clearing the 'system' prompt, it works again.

Steps to Reproduce

Insert some text in the 'system' prompt field of an OpenAI model:

Image: https://github.com/user-attachments/assets/1f6bed59-dd31-4425-95c8-06cdea6b59a3

Use model and ask something:

Image: https://github.com/user-attachments/assets/4a201593-80b6-44ff-b18b-fedf9bf13fd7

When the 'system' prompt is cleared, the model works fine.

Logs & Screenshots

2025-05-29 07:27:58.749 | ERROR | open_webui.routers.openai:generate_chat_completion:866 - 400, message='Bad Request', url='https://api.openai.com/v1/chat/completions' - {}
Traceback (most recent call last):

File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/usr/local/lib/python3.11/site-packages/uvicorn/__main__.py", line 4, in <module>
uvicorn.main()
│ └ <Command main>
└ <module 'uvicorn' from '/usr/local/lib/python3.11/site-packages/uvicorn/__init__.py'>
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1442, in __call__
return self.main(*args, **kwargs)
│ │ │ └ {}
│ │ └ ()
│ └ <function Command.main at 0x7f71c8cfe700>
└ <Command main>
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1363, in main
rv = self.invoke(ctx)
│ │ └ <click.core.Context object at 0x7f71c9a88950>
│ └ <function Command.invoke at 0x7f71c8cfe3e0>
└ <Command main>
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1226, in invoke
return ctx.invoke(self.callback, **ctx.params)
│ │ │ │ │ └ {'host': '0.0.0.0', 'port': 8080, 'forwarded_allow_ips': '*', 'workers': 1, 'app': 'open_webui.main:app', 'uds': None, 'fd': ...
│ │ │ │ └ <click.core.Context object at 0x7f71c9a88950>
│ │ │ └ <function main at 0x7f71c89acc20>
│ │ └ <Command main>
│ └ <function Context.invoke at 0x7f71c8cfd620>
└ <click.core.Context object at 0x7f71c9a88950>
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 794, in invoke
return callback(*args, **kwargs)
│ │ └ {'host': '0.0.0.0', 'port': 8080, 'forwarded_allow_ips': '*', 'workers': 1, 'app': 'open_webui.main:app', 'uds': None, 'fd': ...
│ └ ()
└ <function main at 0x7f71c89acc20>
File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 412, in main
run(
└ <function run at 0x7f71c8ddb740>
File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 579, in run
server.run()
│ └ <function Server.run at 0x7f71c8c5c860>
└ <uvicorn.server.Server object at 0x7f71c8ddfc10>
File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 66, in run
return asyncio.run(self.serve(sockets=sockets))
│ │ │ │ └ None
│ │ │ └ <function Server.serve at 0x7f71c8c5c900>
│ │ └ <uvicorn.server.Server object at 0x7f71c8ddfc10>
│ └ <function run at 0x7f71c9131300>
└ <module 'asyncio' from '/usr/local/lib/python3.11/asyncio/__init__.py'>
File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
│ │ └ <coroutine object Server.serve at 0x7f71c8b7c8b0>
│ └ <function Runner.run at 0x7f71c8fa4ea0>
└ <asyncio.runners.Runner object at 0x7f71c89a3010>
File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
│ │ │ └ <Task pending name='Task-1' coro=<Server.serve() running at /usr/local/lib/python3.11/site-packages/uvicorn/server.py:70> wai...
│ │ └ <cyfunction Loop.run_until_complete at 0x7f71c89a7370>
│ └ <uvloop.Loop running=True closed=False debug=False>
└ <asyncio.runners.Runner object at 0x7f71c89a3010>
File "/usr/local/lib/python3.11/site-packages/starlette/middleware/base.py", line 141, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
│ │ │ │ └ <function BaseHTTPMiddleware.__call__.<locals>.call_next.<locals>.send_no_error at 0x7f702c788f40>
│ │ │ └ <function BaseHTTPMiddleware.__call__.<locals>.call_next.<locals>.receive_or_disconnect at 0x7f702c789940>
│ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('172.17.0.5', 8080), 'c...
│ └ <starlette_compress.CompressMiddleware object at 0x7f71c7da2070>
└ <open_webui.main.RedirectMiddleware object at 0x7f705abd0a50>
File "/usr/local/lib/python3.11/site-packages/starlette_compress/__init__.py", line 92, in __call__
return await self._zstd(scope, receive, send)
│ │ │ │ └ <function BaseHTTPMiddleware.__call__.<locals>.call_next.<locals>.send_no_error at 0x7f702c788f40>
│ │ │ └ <function BaseHTTPMiddleware.__call__.<locals>.call_next.<locals>.receive_or_disconnect at 0x7f702c789940>
│ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('172.17.0.5', 8080), 'c...
│ └ <member '_zstd' of 'CompressMiddleware' objects>
└ <starlette_compress.CompressMiddleware object at 0x7f71c7da2070>
File "/usr/local/lib/python3.11/site-packages/starlette_compress/_zstd_legacy.py", line 100, in __call__
await self.app(scope, receive, wrapper)
│ │ │ │ └ <function ZstdResponder.__call__.<locals>.wrapper at 0x7f702c78a200>
│ │ │ └ <function BaseHTTPMiddleware.__call__.<locals>.call_next.<locals>.receive_or_disconnect at 0x7f702c789940>
│ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('172.17.0.5', 8080), 'c...
│ └ <member 'app' of 'ZstdResponder' objects>
└ <starlette_compress._zstd_legacy.ZstdResponder object at 0x7f705a720340>
File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
│ │ │ │ │ │ └ <function ZstdResponder.__call__.<locals>.wrapper at 0x7f702c78a200>
│ │ │ │ │ └ <function BaseHTTPMiddleware.__call__.<locals>.call_next.<locals>.receive_or_disconnect at 0x7f702c789940>
│ │ │ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('172.17.0.5', 8080), 'c...
│ │ │ └ <starlette.requests.Request object at 0x7f702c5c8c90>
│ │ └ <fastapi.routing.APIRouter object at 0x7f705ab55e10>
│ └ <starlette.middleware.exceptions.ExceptionMiddleware object at 0x7f7059fc3210>
└ <function wrap_app_handling_exceptions at 0x7f71c5df8900>
File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
│ │ │ └ <function wrap_app_handling_exceptions.<locals>.wrapped_app.<locals>.sender at 0x7f702c788360>
│ │ └ <function BaseHTTPMiddleware.__call__.<locals>.call_next.<locals>.receive_or_disconnect at 0x7f702c789940>
│ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('172.17.0.5', 8080), 'c...
└ <fastapi.routing.APIRouter object at 0x7f705ab55e10>
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 715, in __call__
await self.middleware_stack(scope, receive, send)
│ │ │ │ └ <function wrap_app_handling_exceptions.<locals>.wrapped_app.<locals>.sender at 0x7f702c788360>
│ │ │ └ <function BaseHTTPMiddleware.__call__.<locals>.call_next.<locals>.receive_or_disconnect at 0x7f702c789940>
│ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('172.17.0.5', 8080), 'c...
│ └ <bound method Router.app of <fastapi.routing.APIRouter object at 0x7f705ab55e10>>
└ <fastapi.routing.APIRouter object at 0x7f705ab55e10>
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 735, in app
await route.handle(scope, receive, send)
│ │ │ │ └ <function wrap_app_handling_exceptions.<locals>.wrapped_app.<locals>.sender at 0x7f702c788360>
│ │ │ └ <function BaseHTTPMiddleware.__call__.<locals>.call_next.<locals>.receive_or_disconnect at 0x7f702c789940>
│ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('172.17.0.5', 8080), 'c...
│ └ <function Route.handle at 0x7f71c5df9f80>
└ APIRoute(path='/api/chat/completions', name='chat_completion', methods=['POST'])
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
│ │ │ │ └ <function wrap_app_handling_exceptions.<locals>.wrapped_app.<locals>.sender at 0x7f702c788360>
│ │ │ └ <function BaseHTTPMiddleware.__call__.<locals>.call_next.<locals>.receive_or_disconnect at 0x7f702c789940>
│ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('172.17.0.5', 8080), 'c...
│ └ <function request_response.<locals>.app at 0x7f7059bd5440>
└ APIRoute(path='/api/chat/completions', name='chat_completion', methods=['POST'])
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
│ │ │ │ │ └ <function wrap_app_handling_exceptions.<locals>.wrapped_app.<locals>.sender at 0x7f702c788360>
│ │ │ │ └ <function BaseHTTPMiddleware.__call__.<locals>.call_next.<locals>.receive_or_disconnect at 0x7f702c789940>
│ │ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('172.17.0.5', 8080), 'c...
│ │ └ <starlette.requests.Request object at 0x7f702c5ca190>
│ └ <function request_response.<locals>.app.<locals>.app at 0x7f702c788180>
└ <function wrap_app_handling_exceptions at 0x7f71c5df8900>
File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
│ │ │ └ <function wrap_app_handling_exceptions.<locals>.wrapped_app.<locals>.sender at 0x7f702c789800>
│ │ └ <function BaseHTTPMiddleware.__call__.<locals>.call_next.<locals>.receive_or_disconnect at 0x7f702c789940>
│ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('172.17.0.5', 8080), 'c...
└ <function request_response.<locals>.app.<locals>.app at 0x7f702c788180>
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 73, in app
response = await f(request)
│ └ <starlette.requests.Request object at 0x7f702c5ca190>
└ <function get_request_handler.<locals>.app at 0x7f7059bd5300>
File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
└ <function run_endpoint_function at 0x7f71c5dfbd80>
File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
return await dependant.call(**values)
│ │ └ {'user': UserModel(id='f2d0c7f2-d197-458b-a735-7ff8fcea1b35', name='Christian', email='my@home.com', role='admin',...
│ └ <function chat_completion at 0x7f7059b50a40>
└ Dependant(path_params=[], query_params=[], header_params=[], cookie_params=[], body_params=[ModelField(field_info=Body(Pydant...

File "/app/backend/open_webui/main.py", line 1265, in chat_completion
response = await chat_completion_handler(request, form_data, user)
│ │ │ └ UserModel(id='f2d0c7f2-d197-458b-a735-7ff8fcea1b35', name='Christian', email='my@home.com', role='admin', profile_...
│ │ └ {'stream': True, 'model': 'gpt-4.1-mini', 'messages': [{'role': 'system', 'content': 'Only a short text'}, {'role': 'user', '...
│ └ <starlette.requests.Request object at 0x7f702c5ca190>
└ <function generate_chat_completion at 0x7f705e70de40>

File "/app/backend/open_webui/utils/chat.py", line 278, in generate_chat_completion
return await generate_openai_chat_completion(
└ <function generate_chat_completion at 0x7f705e70c720>

> File "/app/backend/open_webui/routers/openai.py", line 863, in generate_chat_completion
r.raise_for_status()
│ └ <function ClientResponse.raise_for_status at 0x7f71c6927b00>
└ <ClientResponse(https://api.openai.com/v1/chat/completions) [400 Bad Request]>
<CIMultiDictProxy('Date': 'Thu, 29 May 2025 07...

File "/usr/local/lib/python3.11/site-packages/aiohttp/client_reqrep.py", line 1161, in raise_for_status
raise ClientResponseError(
└ <class 'aiohttp.client_exceptions.ClientResponseError'>

aiohttp.client_exceptions.ClientResponseError: 400, message='Bad Request', url='https://api.openai.com/v1/chat/completions'

Additional Information

No response

GiteaMirror added the bug label 2026-05-05 18:18:30 -05:00

@Poxel2 commented on GitHub (May 29, 2025):

I made a rollback to 0.6.11 and everything works again.

(and thank you very much for the performance tweaks in 0.6.12.. very cool :))


@zzzjinwook commented on GitHub (May 29, 2025):

I’m encountering the same issue :(


@grivatyBox commented on GitHub (May 29, 2025):

I encountered the same problem.


@galvanoid commented on GitHub (May 29, 2025):

Same here.


@grivatyBox commented on GitHub (May 29, 2025):

Based on my testing, Grok is good; only the GPT-series models have issues.


@gepdev commented on GitHub (May 29, 2025):

same issue


@tjbck commented on GitHub (May 29, 2025):

Related: #14469

Should be addressed in dev; 0.6.13 will be released shortly.


@NewEpoch2020 commented on GitHub (May 29, 2025):

> Based on my testing, Grok is good; only the GPT-series models have issues.

No, same issues with Gemini-series models.


@cjccjj commented on GitHub (May 29, 2025):

same issue


@i0ntempest commented on GitHub (May 29, 2025):

Apparently all parameters meant for Ollama are passed to OpenAI models:

Image: https://github.com/user-attachments/assets/5b6c5fe5-9ade-4f5b-8571-a4d921deb239


@Fusseldieb commented on GitHub (May 29, 2025):

Yep, same issue. Broke it completely.

For those who have broken setups, use this image in the meantime: ghcr.io/open-webui/open-webui:git-b8e1621.

In essence, replace "main" with "git-b8e1621" in your docker-compose file, run docker compose pull, and bring your stack back up. It should then be rolled back to 0.6.11. This should also work for those who use the docker command directly - just replace the image name and run.
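For reference, a sketch of what that pin looks like in a docker-compose.yml; the service name and port mapping below are assumptions about a typical setup, and only the image tag is the actual workaround:

```yaml
# Illustrative docker-compose.yml excerpt; service name and ports are
# assumptions - the pinned image tag is the only change that matters.
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:git-b8e1621  # instead of :main
    ports:
      - "3000:8080"
```

Then docker compose pull followed by docker compose up -d applies the rollback.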


@firmansi commented on GitHub (May 29, 2025):

OpenAI and Google experience the same issue here


@Aculeasis commented on GitHub (May 29, 2025):

I found three keys that are incompatible with Gemini's OpenAI-compatible API and cause the 400 error. They should all be removed from the JSON being sent:
"stream_response", "system", "function_calling"


@tjbck commented on GitHub (May 29, 2025):

@i0ntempest this is an intended behaviour and Ollama params must be set to default for OpenAI models.


@i0ntempest commented on GitHub (May 29, 2025):

> @i0ntempest this is an intended behaviour and Ollama params must be set to default for OpenAI models.

They are set to default in the model advanced params section. I've only changed them in my user settings, and this worked before.


@tjbck commented on GitHub (May 29, 2025):

Not anymore. Do not enable Ollama params for OpenAI models; this behaviour change is required to better support OpenAI-compatible LLM providers (e.g. vLLM).


@i0ntempest commented on GitHub (May 29, 2025):

> Not anymore. Do not enable Ollama params for OpenAI models; this behaviour change is required to better support OpenAI-compatible LLM providers (e.g. vLLM).

Then what do I do if I use both local and OpenAI models? Do I have to set them for every one of my ~30 local models now? If that's the case would you consider adding an explicit "ignore" option for the model advanced parameters so that parameters set to it will not get added? Doing this for all my current and future local models is tedious, especially when https://github.com/ollama/ollama/pull/6854 isn't merged into ollama yet.


@VideoFX commented on GitHub (May 30, 2025):

This was not fixed for me in 0.6.13. The Gemini API is still broken, but local models work fine. I don't understand what happened or what to do to fix it. Rolling back to ghcr.io/open-webui/open-webui:git-b8e1621 fixed it (thanks Fusseldieb). I don't use any system prompts and everything is at defaults, but I still get a 400 in 0.6.13.


@KyleF0X commented on GitHub (Jun 1, 2025):

Can confirm this issue.

keep_alive is for local Ollama; it is surely not relevant to external proprietary API models.

Image: https://github.com/user-attachments/assets/4b5fcc21-9ec9-48c4-941b-201c6b744b6a
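For illustration, a hypothetical request body showing the kind of leak described here; the values are made up, with the model name and system text echoing the log earlier in the thread:

```python
# Hypothetical offending payload: everything is valid for OpenAI's
# chat completions API except "keep_alive", an Ollama-only option that
# the API rejects with 400 Bad Request as an unrecognized argument.
payload = {
    "model": "gpt-4.1-mini",
    "stream": True,
    "messages": [
        {"role": "system", "content": "Only a short text"},
        {"role": "user", "content": "Hello"},
    ],
    "keep_alive": "5m",  # Ollama-only; not part of the OpenAI schema
}
```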


@apensotti commented on GitHub (Jun 3, 2025):

Has anyone solved this issue in v0.6.13?


@apensotti commented on GitHub (Jun 3, 2025):

ghcr.io/open-webui/open-webui:git-82716f3

Reverting to this version worked better for me than ghcr.io/open-webui/open-webui:git-b8e1621.


@KyleF0X commented on GitHub (Jun 3, 2025):

> Has anyone solved this issue in v0.6.13?

I'm having this issue in v0.6.13.


@djmaze commented on GitHub (Jun 3, 2025):

For me, this was solved with the update to 0.6.13.


@pankou11 commented on GitHub (Jun 24, 2025):

Guys... thanks a lot! I went back to 0.6.11 (from 0.6.15) after a week of attempts at bypassing the dropped params (400 Bad Request error), and it finally worked!

Reference: github-starred/open-webui#55937