Mirror of https://github.com/open-webui/open-webui.git (synced 2026-05-06 19:08:59 -05:00)
[GH-ISSUE #21334] Bug: Latency of base model connection could cause all base models not to appear #58111
Originally created by @antpar-rf on GitHub (Feb 12, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/21334
Bug: Latency of one base model could cause all base models not to appear, and increasing the timeout could cause issues with the event loop.
Relevant code: 2b26355002/backend/open_webui/routers/openai.py (L416)

**`asyncio.gather()` behavior:** The code uses `responses = await asyncio.gather(*request_tasks)`. By default, `asyncio.gather()` propagates the first exception it encounters and cancels the remaining tasks, which causes the entire function to fail.

**Exception handling in `send_get_request`:** When a timeout or client error occurs in `send_get_request`, it raises an `HTTPException` [1].

**Impact on multiple base models:** If you have 5-10 base models and one of them times out, the exception will propagate up and the entire `get_all_models_responses` function will fail instead of returning partial results.

To make it resilient, the code would need to either:

- use `asyncio.gather(*request_tasks, return_exceptions=True)` to capture exceptions as values instead of propagating them, or
- catch exceptions inside `send_get_request` and return a sentinel value (e.g. `None`) for the failed request.

As currently written, one timeout will cause the entire model retrieval operation to fail rather than returning results from the successful requests.
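To illustrate the first option, here is a minimal, self-contained sketch of the failure mode and the `return_exceptions=True` fix. The provider URLs and the `fetch_models` helper are hypothetical stand-ins for open-webui's actual `send_get_request`, not the project's real code:

```python
import asyncio

# Hypothetical stand-in for a per-provider request: one provider
# times out (raises), the others respond normally.
async def fetch_models(url: str, fail: bool = False):
    await asyncio.sleep(0.01)  # stand-in for the HTTP round trip
    if fail:
        raise TimeoutError(f"connection to {url} timed out")
    return {"url": url, "models": ["m1", "m2"]}

async def get_all_models_responses():
    tasks = [
        fetch_models("http://provider-a"),
        fetch_models("http://provider-b", fail=True),
        fetch_models("http://provider-c"),
    ]
    # return_exceptions=True turns raised exceptions into values in the
    # result list instead of cancelling the remaining tasks, so one slow
    # or failing provider no longer aborts the whole gather().
    responses = await asyncio.gather(*tasks, return_exceptions=True)
    # Normalize: failed slots become None, successful dicts pass through.
    return [r if not isinstance(r, BaseException) else None for r in responses]

results = asyncio.run(get_all_models_responses())
# results keeps the two successful responses; the failed provider's
# slot is None instead of the exception killing the whole call.
```

Without `return_exceptions=True`, the `TimeoutError` from provider-b would propagate out of `asyncio.gather()` and the other two results would be discarded, which is exactly the behavior described in this issue.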
@Classic298 commented on GitHub (Feb 12, 2026):
Thanks for the report! It seems this has actually already been addressed in dev.
The `send_get_request` function catches all exceptions and returns `None` instead of raising an `HTTPException`.
Because of this, `asyncio.gather` will never see an exception propagate from a timed-out or failing connection — it simply receives `None` for that slot. The downstream code already handles this by skipping `None` responses, so models from all other successful providers are still returned normally.
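The dev-branch pattern described above can be sketched as follows. This is a simplified illustration with assumed names and fake I/O, not open-webui's actual implementation: the request helper swallows its own errors and returns `None`, and the caller filters those slots out:

```python
import asyncio

# Sketch of the catch-inside-the-helper pattern: errors never escape,
# so a plain asyncio.gather() call is safe.
async def send_get_request(url: str, fail: bool = False):
    try:
        await asyncio.sleep(0.01)  # stand-in for the HTTP round trip
        if fail:
            raise TimeoutError(f"connection to {url} timed out")
        return {"data": [{"id": f"model-from-{url}"}]}
    except Exception:
        # Instead of raising (e.g. an HTTPException), report failure as
        # None; real code would also log the error here.
        return None

async def get_all_models_responses(providers):
    responses = await asyncio.gather(
        *(send_get_request(url, fail) for url, fail in providers)
    )
    # Downstream code skips None slots, keeping partial results from
    # every provider that did respond.
    return [r for r in responses if r is not None]

ok = asyncio.run(get_all_models_responses(
    [("a", False), ("b", True), ("c", False)]
))
```

With this shape, `asyncio.gather` never needs `return_exceptions=True`, because no coroutine in the batch can raise in the first place.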
Closing as already resolved. If you're still experiencing this on the latest version, please reopen with reproduction steps for the dev branch and we can take another look.