Mirror of https://github.com/open-webui/open-webui.git (synced 2026-05-06 10:58:17 -05:00)
[GH-ISSUE #21611] Intermittent "Model not found" errors #35066
Originally created by @m1g32 on GitHub (Feb 19, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/21611
Check Existing Issues
Installation Method
Docker
Open WebUI Version
v0.7.2
Ollama Version (if applicable)
No response
Operating System
kubernetes 1.32
Browser (if applicable)
No response
Confirmation
Expected Behavior
Actual Behavior
Steps to Reproduce
- The main response generation fails with "model not found", or
- The main response succeeds but follow-up tasks (follow-up question or chat title generation) fail with "model not found" for the same model.
Logs & Screenshots
| ERROR | open_webui.routers.tasks:generate_follow_ups:314 - Exception occurred
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python3.11/multiprocessing/spawn.py", line 122, in spawn_main
exitcode = _main(fd, parent_sentinel)
│ │ └ 4
│ └ 21
└ <function _main at 0x7f35a7606160>
File "/usr/local/lib/python3.11/multiprocessing/spawn.py", line 135, in _main
return self._bootstrap(parent_sentinel)
│ │ └ 4
│ └ <function BaseProcess._bootstrap at 0x7f35a7973740>
└
File "/usr/local/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
self.run()
│ └ <function BaseProcess.run at 0x7f35a7972ca0>
└
File "/usr/local/lib/python3.11/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
│ │ │ │ │ └ {'config': <uvicorn.config.Config object at 0x7f35a7602d50>, 'target': <bound method Process.target of <uvicorn.supervisors.m...
│ │ │ │ └
│ │ │ └ ()
│ │ └
│ └ <function subprocess_started at 0x7f35a6b74cc0>
└
File "/usr/local/lib/python3.11/site-packages/uvicorn/_subprocess.py", line 80, in subprocess_started
target(sockets=sockets)
│ └ [<socket.socket fd=3, family=2, type=1, proto=0, laddr=('0.0.0.0', 8080)>]
└ <bound method Process.target of <uvicorn.supervisors.multiprocess.Process object at 0x7f35a689ce50>>
File "/usr/local/lib/python3.11/site-packages/uvicorn/supervisors/multiprocess.py", line 64, in target
return self.real_target(sockets)
│ │ └ [<socket.socket fd=3, family=2, type=1, proto=0, laddr=('0.0.0.0', 8080)>]
│ └ <bound method Server.run of <uvicorn.server.Server object at 0x7f35a689cf90>>
└ <uvicorn.supervisors.multiprocess.Process object at 0x7f35a689ce50>
File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 67, in run
return asyncio_run(self.serve(sockets=sockets), loop_factory=self.config.get_loop_factory())
│ │ │ │ │ │ └ <function Config.get_loop_factory at 0x7f35a6af0180>
│ │ │ │ │ └ <uvicorn.config.Config object at 0x7f35a7602d50>
│ │ │ │ └ <uvicorn.server.Server object at 0x7f35a689cf90>
│ │ │ └ [<socket.socket fd=3, family=2, type=1, proto=0, laddr=('0.0.0.0', 8080)>]
│ │ └ <function Server.serve at 0x7f35a6b57a60>
│ └ <uvicorn.server.Server object at 0x7f35a689cf90>
└ <function asyncio_run at 0x7f35a6bd0ae0>
File "/usr/local/lib/python3.11/site-packages/uvicorn/_compat.py", line 30, in asyncio_run
return runner.run(main)
│ │ └ <coroutine object Server.serve at 0x7f35a686cb80>
│ └ <function Runner.run at 0x7f35a6ec94e0>
└ <asyncio.runners.Runner object at 0x7f35a7a9e550>
File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
│ │ │ └ <Task pending name='Task-1' coro=<Server.serve() running at /usr/local/lib/python3.11/site-packages/uvicorn/server.py:71> wai...
│ │ └ <cyfunction Loop.run_until_complete at 0x7f35a685be80>
│ └ <uvloop.Loop running=True closed=False debug=False>
└ <asyncio.runners.Runner object at 0x7f35a7a9e550>
File "/app/backend/open_webui/main.py", line 1729, in process_chat
return await process_chat_response(
└ <function process_chat_response at 0x7f3418d68a40>
File "/app/backend/open_webui/utils/middleware.py", line 3677, in process_chat_response
return await response_handler(response, events)
│ │ └ []
│ └ <starlette.responses.StreamingResponse object at 0x7f33f74a8cd0>
└ <function process_chat_response.<locals>.response_handler at 0x7f34142ec040>
File "/app/backend/open_webui/utils/middleware.py", line 3659, in response_handler
await background_tasks_handler()
└ <function process_chat_response.<locals>.background_tasks_handler at 0x7f33f75332e0>
File "/app/backend/open_webui/utils/middleware.py", line 1963, in background_tasks_handler
res = await generate_follow_ups(
└ <function generate_follow_ups at 0x7f341aeb4860>
File "/app/backend/open_webui/utils/chat.py", line 194, in generate_chat_completion
raise Exception("Model not found")
Exception: Model not found
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python3.11/multiprocessing/spawn.py", line 122, in spawn_main
exitcode = _main(fd, parent_sentinel)
│ │ └ 4
│ └ 21
└ <function _main at 0x7ff7c1bc2160>
File "/usr/local/lib/python3.11/multiprocessing/spawn.py", line 135, in _main
return self._bootstrap(parent_sentinel)
│ │ └ 4
│ └ <function BaseProcess._bootstrap at 0x7ff7c1f2f740>
└
File "/usr/local/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
self.run()
│ └ <function BaseProcess.run at 0x7ff7c1f2eca0>
└
File "/usr/local/lib/python3.11/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
│ │ │ │ │ └ {'config': <uvicorn.config.Config object at 0x7ff7c1bbed50>, 'target': <bound method Process.target of <uvicorn.supervisors.m...
│ │ │ │ └
│ │ │ └ ()
│ │ └
│ └ <function subprocess_started at 0x7ff7c1130cc0>
└
File "/usr/local/lib/python3.11/site-packages/uvicorn/_subprocess.py", line 80, in subprocess_started
target(sockets=sockets)
│ └ [<socket.socket fd=3, family=2, type=1, proto=0, laddr=('0.0.0.0', 8080)>]
└ <bound method Process.target of <uvicorn.supervisors.multiprocess.Process object at 0x7ff7c0e58e50>>
File "/usr/local/lib/python3.11/site-packages/uvicorn/supervisors/multiprocess.py", line 64, in target
return self.real_target(sockets)
│ │ └ [<socket.socket fd=3, family=2, type=1, proto=0, laddr=('0.0.0.0', 8080)>]
│ └ <bound method Server.run of <uvicorn.server.Server object at 0x7ff7c0e58f90>>
└ <uvicorn.supervisors.multiprocess.Process object at 0x7ff7c0e58e50>
File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 67, in run
return asyncio_run(self.serve(sockets=sockets), loop_factory=self.config.get_loop_factory())
│ │ │ │ │ │ └ <function Config.get_loop_factory at 0x7ff7c10ac180>
│ │ │ │ │ └ <uvicorn.config.Config object at 0x7ff7c1bbed50>
│ │ │ │ └ <uvicorn.server.Server object at 0x7ff7c0e58f90>
│ │ │ └ [<socket.socket fd=3, family=2, type=1, proto=0, laddr=('0.0.0.0', 8080)>]
│ │ └ <function Server.serve at 0x7ff7c1113a60>
│ └ <uvicorn.server.Server object at 0x7ff7c0e58f90>
└ <function asyncio_run at 0x7ff7c118cae0>
File "/usr/local/lib/python3.11/site-packages/uvicorn/_compat.py", line 30, in asyncio_run
return runner.run(main)
│ │ └ <coroutine object Server.serve at 0x7ff7c0e28b80>
│ └ <function Runner.run at 0x7ff7c14854e0>
└ <asyncio.runners.Runner object at 0x7ff7c0f4ead0>
File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
│ │ │ └ <Task pending name='Task-1' coro=<Server.serve() running at /usr/local/lib/python3.11/site-packages/uvicorn/server.py:71> wai...
│ │ └ <cyfunction Loop.run_until_complete at 0x7ff7c0e17e80>
│ └ <uvloop.Loop running=True closed=False debug=False>
└ <asyncio.runners.Runner object at 0x7ff7c0f4ead0>
File "/app/backend/open_webui/main.py", line 1710, in process_chat
form_data, metadata, events = await process_chat_payload(
│ │ └ <function process_chat_payload at 0x7ff6333b49a0>
│ └ {'user_id': '51ef38bd-45f6-46e0-877c-3089c393add3', 'chat_id': 'bfaa226a-db9e-49d7-909a-e8428d037823', 'message_id': 'ee665e7...
└ {'stream': True, 'model': 'gpt-5.2', 'messages': [{'role': 'user', 'content': 'I want to research xyz ...
File "/app/backend/open_webui/utils/middleware.py", line 1581, in process_chat_payload
form_data = await chat_web_search_handler(
└ <function chat_web_search_handler at 0x7ff6333b44a0>
File "/app/backend/open_webui/routers/tasks.py", line 533, in generate_queries
raise e
File "/app/backend/open_webui/routers/tasks.py", line 531, in generate_queries
payload = await process_pipeline_inlet_filter(request, payload, user, models)
│ │ │ │ └ <open_webui.socket.utils.RedisDict object at 0x7ff6f9e3e610>
│ │ │ └ UserModel(id='51ef38bd-45f6-46e0-877c-3089c393add3', email='x.y@z.com', username=None, role='user', name='Emil'...
│ │ └ {'model': 'gpt-5.2', 'messages': [{'role': 'user', 'content': '### Task:\nAnalyze the chat history to determine the necessity...
│ └ <starlette.requests.Request object at 0x7ff6293e0510>
└ <function process_pipeline_inlet_filter at 0x7ff6356fe3e0>
File "/app/backend/open_webui/routers/pipelines.py", line 63, in process_pipeline_inlet_filter
model = models[model_id]
│ └ 'gpt-5.2'
└ <open_webui.socket.utils.RedisDict object at 0x7ff6f9e3e610>
File "/app/backend/open_webui/socket/utils.py", line 66, in getitem
raise KeyError(key)
└ 'gpt-5.2'
KeyError: 'gpt-5.2'
Additional Information
@Classic298 commented on GitHub (Feb 19, 2026):
Yes, this might be due to new models not being properly synced across all workers. Simply restart Open WebUI and it should work fine.
@m1g32 commented on GitHub (Feb 19, 2026):
We are using a multi-worker deployment; the issue still appears even after restarting all pods.
@Classic298 commented on GitHub (Feb 19, 2026):
OK, in that case I cannot really reproduce it.
I could reproduce it as long as I don't restart the server, but once I do, it works.
It seems the model list is not properly synced with Redis.
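The suspected failure mode above can be sketched in a few lines: each worker keeps a local view of the model list that is populated from a shared store, and a worker whose view is stale raises KeyError for a model id that already exists elsewhere, which surfaces to the user as "Model not found". This is a minimal illustrative sketch, not Open WebUI's actual code: `SharedStore` stands in for Redis, `ModelRegistry` for the worker-local view, and `refresh_on_miss` is a hypothetical defensive fallback.

```python
class SharedStore:
    """Stands in for Redis: the source of truth shared by all workers."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def snapshot(self):
        return dict(self._data)


class ModelRegistry:
    """A worker-local cache of the shared model list (illustrative only)."""
    def __init__(self, store):
        self._store = store
        self._models = store.snapshot()  # populated once, at worker start

    def get(self, model_id, refresh_on_miss=False):
        if model_id not in self._models and refresh_on_miss:
            # Defensive variant: re-sync from the shared store before failing.
            self._models = self._store.snapshot()
        if model_id not in self._models:
            # In the traceback this KeyError is re-raised as "Model not found".
            raise KeyError(model_id)
        return self._models[model_id]


store = SharedStore()
worker_a = ModelRegistry(store)            # this worker started first
store.put("gpt-5.2", {"name": "gpt-5.2"})  # model added after worker start

try:
    worker_a.get("gpt-5.2")                # stale local view -> KeyError
except KeyError as exc:
    print("miss:", exc)

# A re-sync on miss would recover without restarting the worker.
print(worker_a.get("gpt-5.2", refresh_on_miss=True)["name"])
```

Under this hypothesis, restarting all pods only helps if every worker rebuilds its view after the model was registered; a model added (or renamed) while workers are running would keep triggering intermittent misses until the local views are refreshed.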