[GH-ISSUE #14479] issue: When system-prompt in OpenAI model is set, there will be an error 400 #55937
Originally created by @Poxel2 on GitHub (May 29, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/14479
Check Existing Issues
Installation Method
Git Clone
Open WebUI Version
0.6.12
Ollama Version (if applicable)
No response
Operating System
Docker
Browser (if applicable)
No response
Confirmation
Expected Behavior
The 'system' prompt in a model should be used normally and should not cause an error with OpenAI API.
Actual Behavior
Since 0.6.12 there is an error 400 when using a model with a predefined 'system' prompt. After clearing the 'system' prompt, it's working again.
Steps to Reproduce
Insert some text in the 'system' prompt field of an OpenAI model:
Use the model and ask something:
When the 'system' prompt is cleared, the model works fine.
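A minimal repro sketch outside Open WebUI, assuming the explanation given later in this thread (Ollama-only keys such as "system" and "keep_alive" being forwarded verbatim to the OpenAI endpoint); the model name and prompt are taken from the traceback below, the exact forwarded keys are an assumption from the comments, and OPENAI_API_KEY is a placeholder:

import os
import requests

# Minimal chat-completion payload plus the extra top-level keys that, per the
# comments below, Open WebUI 0.6.12 reportedly forwards to OpenAI. Removing
# the extra keys is expected to make the request succeed.
payload = {
    "model": "gpt-4.1-mini",
    "messages": [
        {"role": "system", "content": "Only a short text"},
        {"role": "user", "content": "Hello"},
    ],
    "system": "Only a short text",  # Ollama-style key, assumed offender
    "keep_alive": "5m",             # Ollama-style key, assumed offender
}

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json=payload,
    timeout=60,
)
print(resp.status_code)  # expected: 400, matching the error reported here
print(resp.text)         # the error body names the rejected argument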
Logs & Screenshots
2025-05-29 07:27:58.749 | ERROR | open_webui.routers.openai:generate_chat_completion:866 - 400, message='Bad Request', url='https://api.openai.com/v1/chat/completions' - {}
Traceback (most recent call last):
File "", line 198, in _run_module_as_main
File "", line 88, in _run_code
File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 4, in
uvicorn.main()
│ └
└ <module 'uvicorn' from '/usr/local/lib/python3.11/site-packages/uvicorn/__init__.py'>
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1442, in call
return self.main(args, **kwargs)
│ │ │ └ {}
│ │ └ ()
│ └ <function Command.main at 0x7f71c8cfe700>
└
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1363, in main
rv = self.invoke(ctx)
│ │ └ <click.core.Context object at 0x7f71c9a88950>
│ └ <function Command.invoke at 0x7f71c8cfe3e0>
└
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1226, in invoke
return ctx.invoke(self.callback, **ctx.params)
│ │ │ │ │ └ {'host': '0.0.0.0', 'port': 8080, 'forwarded_allow_ips': '', 'workers': 1, 'app': 'open_webui.main:app', 'uds': None, 'fd': ...
│ │ │ │ └ <click.core.Context object at 0x7f71c9a88950>
│ │ │ └ <function main at 0x7f71c89acc20>
│ │ └
│ └ <function Context.invoke at 0x7f71c8cfd620>
└ <click.core.Context object at 0x7f71c9a88950>
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 794, in invoke
return callback(*args, **kwargs)
│ │ └ {'host': '0.0.0.0', 'port': 8080, 'forwarded_allow_ips': '', 'workers': 1, 'app': 'open_webui.main:app', 'uds': None, 'fd': ...
│ └ ()
└ <function main at 0x7f71c89acc20>
File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 412, in main
run(
└ <function run at 0x7f71c8ddb740>
File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 579, in run
server.run()
│ └ <function Server.run at 0x7f71c8c5c860>
└ <uvicorn.server.Server object at 0x7f71c8ddfc10>
File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 66, in run
return asyncio.run(self.serve(sockets=sockets))
│ │ │ │ └ None
│ │ │ └ <function Server.serve at 0x7f71c8c5c900>
│ │ └ <uvicorn.server.Server object at 0x7f71c8ddfc10>
│ └ <function run at 0x7f71c9131300>
└ <module 'asyncio' from '/usr/local/lib/python3.11/asyncio/__init__.py'>
File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
│ │ └ <coroutine object Server.serve at 0x7f71c8b7c8b0>
│ └ <function Runner.run at 0x7f71c8fa4ea0>
└ <asyncio.runners.Runner object at 0x7f71c89a3010>
File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
│ │ │ └ <Task pending name='Task-1' coro=<Server.serve() running at /usr/local/lib/python3.11/site-packages/uvicorn/server.py:70> wai...
│ │ └ <cyfunction Loop.run_until_complete at 0x7f71c89a7370>
│ └ <uvloop.Loop running=True closed=False debug=False>
└ <asyncio.runners.Runner object at 0x7f71c89a3010>
File "/usr/local/lib/python3.11/site-packages/starlette/middleware/base.py", line 141, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
│ │ │ │ └ <function BaseHTTPMiddleware.call..call_next..send_no_error at 0x7f702c788f40>
│ │ │ └ <function BaseHTTPMiddleware.call..call_next..receive_or_disconnect at 0x7f702c789940>
│ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('172.17.0.5', 8080), 'c...
│ └ <starlette_compress.CompressMiddleware object at 0x7f71c7da2070>
└ <open_webui.main.RedirectMiddleware object at 0x7f705abd0a50>
File "/usr/local/lib/python3.11/site-packages/starlette_compress/init.py", line 92, in call
return await self._zstd(scope, receive, send)
│ │ │ │ └ <function BaseHTTPMiddleware.call..call_next..send_no_error at 0x7f702c788f40>
│ │ │ └ <function BaseHTTPMiddleware.call..call_next..receive_or_disconnect at 0x7f702c789940>
│ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('172.17.0.5', 8080), 'c...
│ └ <member '_zstd' of 'CompressMiddleware' objects>
└ <starlette_compress.CompressMiddleware object at 0x7f71c7da2070>
File "/usr/local/lib/python3.11/site-packages/starlette_compress/_zstd_legacy.py", line 100, in call
await self.app(scope, receive, wrapper)
│ │ │ │ └ <function ZstdResponder.call..wrapper at 0x7f702c78a200>
│ │ │ └ <function BaseHTTPMiddleware.call..call_next..receive_or_disconnect at 0x7f702c789940>
│ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('172.17.0.5', 8080), 'c...
│ └ <member 'app' of 'ZstdResponder' objects>
└ <starlette_compress._zstd_legacy.ZstdResponder object at 0x7f705a720340>
File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in call
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
│ │ │ │ │ │ └ <function ZstdResponder.call..wrapper at 0x7f702c78a200>
│ │ │ │ │ └ <function BaseHTTPMiddleware.call..call_next..receive_or_disconnect at 0x7f702c789940>
│ │ │ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('172.17.0.5', 8080), 'c...
│ │ │ └ <starlette.requests.Request object at 0x7f702c5c8c90>
│ │ └ <fastapi.routing.APIRouter object at 0x7f705ab55e10>
│ └ <starlette.middleware.exceptions.ExceptionMiddleware object at 0x7f7059fc3210>
└ <function wrap_app_handling_exceptions at 0x7f71c5df8900>
File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
│ │ │ └ <function wrap_app_handling_exceptions..wrapped_app..sender at 0x7f702c788360>
│ │ └ <function BaseHTTPMiddleware.call..call_next..receive_or_disconnect at 0x7f702c789940>
│ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('172.17.0.5', 8080), 'c...
└ <fastapi.routing.APIRouter object at 0x7f705ab55e10>
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 715, in call
await self.middleware_stack(scope, receive, send)
│ │ │ │ └ <function wrap_app_handling_exceptions..wrapped_app..sender at 0x7f702c788360>
│ │ │ └ <function BaseHTTPMiddleware.call..call_next..receive_or_disconnect at 0x7f702c789940>
│ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('172.17.0.5', 8080), 'c...
│ └ <bound method Router.app of <fastapi.routing.APIRouter object at 0x7f705ab55e10>>
└ <fastapi.routing.APIRouter object at 0x7f705ab55e10>
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 735, in app
await route.handle(scope, receive, send)
│ │ │ │ └ <function wrap_app_handling_exceptions..wrapped_app..sender at 0x7f702c788360>
│ │ │ └ <function BaseHTTPMiddleware.call..call_next..receive_or_disconnect at 0x7f702c789940>
│ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('172.17.0.5', 8080), 'c...
│ └ <function Route.handle at 0x7f71c5df9f80>
└ APIRoute(path='/api/chat/completions', name='chat_completion', methods=['POST'])
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
│ │ │ │ └ <function wrap_app_handling_exceptions..wrapped_app..sender at 0x7f702c788360>
│ │ │ └ <function BaseHTTPMiddleware.call..call_next..receive_or_disconnect at 0x7f702c789940>
│ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('172.17.0.5', 8080), 'c...
│ └ <function request_response..app at 0x7f7059bd5440>
└ APIRoute(path='/api/chat/completions', name='chat_completion', methods=['POST'])
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
│ │ │ │ │ └ <function wrap_app_handling_exceptions..wrapped_app..sender at 0x7f702c788360>
│ │ │ │ └ <function BaseHTTPMiddleware.call..call_next..receive_or_disconnect at 0x7f702c789940>
│ │ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('172.17.0.5', 8080), 'c...
│ │ └ <starlette.requests.Request object at 0x7f702c5ca190>
│ └ <function request_response..app..app at 0x7f702c788180>
└ <function wrap_app_handling_exceptions at 0x7f71c5df8900>
File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
│ │ │ └ <function wrap_app_handling_exceptions..wrapped_app..sender at 0x7f702c789800>
│ │ └ <function BaseHTTPMiddleware.call..call_next..receive_or_disconnect at 0x7f702c789940>
│ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('172.17.0.5', 8080), 'c...
└ <function request_response..app..app at 0x7f702c788180>
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 73, in app
response = await f(request)
│ └ <starlette.requests.Request object at 0x7f702c5ca190>
└ <function get_request_handler..app at 0x7f7059bd5300>
File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
└ <function run_endpoint_function at 0x7f71c5dfbd80>
File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
return await dependant.call(**values)
│ │ └ {'user': UserModel(id='f2d0c7f2-d197-458b-a735-7ff8fcea1b35', name='Christian', email='my@home.com', role='admin',...
│ └ <function chat_completion at 0x7f7059b50a40>
└ Dependant(path_params=[], query_params=[], header_params=[], cookie_params=[], body_params=[ModelField(field_info=Body(Pydant...
File "/app/backend/open_webui/main.py", line 1265, in chat_completion
response = await chat_completion_handler(request, form_data, user)
│ │ │ └ UserModel(id='f2d0c7f2-d197-458b-a735-7ff8fcea1b35', name='Christian', email='my@home.com', role='admin', profile_...
│ │ └ {'stream': True, 'model': 'gpt-4.1-mini', 'messages': [{'role': 'system', 'content': 'Only a short text'}, {'role': 'user', '...
│ └ <starlette.requests.Request object at 0x7f702c5ca190>
└ <function generate_chat_completion at 0x7f705e70de40>
File "/app/backend/open_webui/utils/chat.py", line 278, in generate_chat_completion
return await generate_openai_chat_completion(
└ <function generate_chat_completion at 0x7f705e70c720>
File "/usr/local/lib/python3.11/site-packages/aiohttp/client_reqrep.py", line 1161, in raise_for_status
raise ClientResponseError(
└ <class 'aiohttp.client_exceptions.ClientResponseError'>
aiohttp.client_exceptions.ClientResponseError: 400, message='Bad Request', url='https://api.openai.com/v1/chat/completions'
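The error above surfaces as aiohttp's generic ClientResponseError, and the logged detail is an empty '{}', which hides OpenAI's explanation of what was rejected. A hedged debugging sketch (not the actual Open WebUI code path) that reads the upstream body before failing:

import aiohttp

async def post_chat_completion(session: aiohttp.ClientSession, url: str,
                               headers: dict, payload: dict) -> dict:
    # The ClientResponseError raised by raise_for_status() does not carry the
    # response body, so read it first and include it in the error message.
    async with session.post(url, headers=headers, json=payload) as resp:
        if resp.status >= 400:
            detail = await resp.text()
            raise RuntimeError(f"{resp.status} from {url}: {detail}")
        return await resp.json()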
Additional Information
No response
@Poxel2 commented on GitHub (May 29, 2025):
I made a rollback to 0.6.11 and everything works again.
(and thank you very much for the performance tweaks in 0.6.12.. very cool :))
@zzzjinwook commented on GitHub (May 29, 2025):
I’m encountering the same issue :(
@grivatyBox commented on GitHub (May 29, 2025):
I encountered the same problem.
@galvanoid commented on GitHub (May 29, 2025):
Same here.
@grivatyBox commented on GitHub (May 29, 2025):
Based on my testing, Grok is good; only the GPT series models have issues.
@gepdev commented on GitHub (May 29, 2025):
same issue
@tjbck commented on GitHub (May 29, 2025):
Related: #14469
Should be addressed in dev; 0.6.13 will be released shortly.
@NewEpoch2020 commented on GitHub (May 29, 2025):
No, same issues for Gemini series models.
@cjccjj commented on GitHub (May 29, 2025):
same issue
@i0ntempest commented on GitHub (May 29, 2025):
Apparently all parameters meant for Ollama are passed to OpenAI models:
@Fusseldieb commented on GitHub (May 29, 2025):
Yep, same issue. Broke it completely.
For those who have broken setups, use this image in the meantime:
ghcr.io/open-webui/open-webui:git-b8e1621
In essence, replace "main" with "git-b8e1621" in your docker-compose, run docker compose pull, and bring your compose stack up again. It should then be rolled back to 0.6.11. This should also work for those who use the docker command directly - just replace the image name and run.
@firmansi commented on GitHub (May 29, 2025):
OpenAI and Google experience the same issue here
@Aculeasis commented on GitHub (May 29, 2025):
I found three incorrect keys that are incompatible with Gemini's OpenAI-compatible API and cause the 400 error. They should all be removed from the JSON being sent:
"stream_response", "system", "function_calling"
@tjbck commented on GitHub (May 29, 2025):
@i0ntempest this is an intended behaviour and Ollama params must be set to default for OpenAI models.
@i0ntempest commented on GitHub (May 29, 2025):
They are set to default in the model advanced params section. I've only changed them in my user settings, and this worked before.
@tjbck commented on GitHub (May 29, 2025):
Not anymore. Do not enable Ollama params for OpenAI models; this behaviour change is required to better support OpenAI-compatible LLM providers (e.g. vLLM).
@i0ntempest commented on GitHub (May 29, 2025):
Then what do I do if I use both local and OpenAI models? Do I have to set them for every one of my ~30 local models now? If that's the case would you consider adding an explicit "ignore" option for the model advanced parameters so that parameters set to it will not get added? Doing this for all my current and future local models is tedious, especially when https://github.com/ollama/ollama/pull/6854 isn't merged into ollama yet.
@VideoFX commented on GitHub (May 30, 2025):
This was not fixed for me in 0.6.13. Gemini API still broken, but local models work fine. I don't understand what happened or what to do to fix it. Rolling back to ghcr.io/open-webui/open-webui:git-b8e1621 fixed it (thanks Fusseldieb). I don't use any system prompts and everything is default but I still get 400 in 0.6.13.
@KyleF0X commented on GitHub (Jun 1, 2025):
can confirm this issue.
keep_alive is for local Ollama and is surely not relevant to external, proprietary API providers.
@apensotti commented on GitHub (Jun 3, 2025):
Has anyone solved this issue in v0.6.13?
@apensotti commented on GitHub (Jun 3, 2025):
ghcr.io/open-webui/open-webui:git-82716f3
reverting to this version worked better for me than ghcr.io/open-webui/open-webui:git-b8e1621
@KyleF0X commented on GitHub (Jun 3, 2025):
I'm having this issue in v0.6.13.
@djmaze commented on GitHub (Jun 3, 2025):
For me, this was solved with the update to 0.6.13.
@pankou11 commented on GitHub (Jun 24, 2025):
Guys... thanks a lot! Back to 0.6.11 (from 0.6.15), and after a week of attempts to work around the dropped params (400 Bad Request error), it finally worked!