[GH-ISSUE #15979] issue: Sporadic "no running event loop" errors
Originally created by @Ithanil on GitHub (Jul 24, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/15979
Check Existing Issues
Installation Method
Git Clone
Open WebUI Version
0.6.18
Ollama Version (if applicable)
No response
Operating System
Debian 12
Browser (if applicable)
No response
Confirmation
Expected Behavior
No errors popping up.
Actual Behavior
Since upgrading from 0.6.15 to 0.6.18, I see new, sporadic error messages popping up. They don't appear to correspond to any user-facing malfunction, since they relate to the execution of cleanup tasks, but this is nevertheless seemingly new behavior that I want to report.
If anyone else sees similar logs or has more insight, please feel free to comment.
Steps to Reproduce
Multi-Replica setup with Redis Sentinel
Logs & Screenshots
Logs, cut & redacted:
Additional Information
I just want to understand what the cause could be or what has changed here.
@rgaricano commented on GitHub (Jul 24, 2025):
It seems that the cause is a timeout trying to connect to Redis, and an uncaught exception from the call:
5fbfe2bdca/backend/open_webui/tasks.py (L86)
Maybe you can find more info in the Redis logs.
(I didn't see those errors on my end.)
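A minimal sketch of what an uncaught timeout in such a call could look like, and one way to contain it. The function, client, and key below are hypothetical illustrations, not the actual code at tasks.py (L86):

```python
import logging

import redis.exceptions

log = logging.getLogger(__name__)

async def redis_cleanup(redis_client, key: str):
    """Hypothetical stand-in for the cleanup call referenced above."""
    try:
        # If Redis is unreachable or slow, delete() raises; without the
        # try/except, the exception propagates out of the background task
        # and surfaces as a traceback like the one reported here.
        await redis_client.delete(key)
    except (redis.exceptions.ConnectionError, redis.exceptions.TimeoutError) as exc:
        # Transient network errors shouldn't kill the periodic cleanup;
        # log and let the next cycle retry.
        log.warning("Redis cleanup for %r failed: %s", key, exc)
```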
@Ithanil commented on GitHub (Jul 24, 2025):
Yes, might be there was a timeout due to lost packets. Redis was running fine.
I was mainly struck by the "no running event loop" part, and I know there has been a major change in how Redis Sentinel connections are handled (https://github.com/open-webui/open-webui/pull/15718), plus refactoring in the tasks code. So I thought I'd better bring this up to be sure.
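For readers hitting the same message: "no running event loop" is what asyncio raises when task-scheduling calls run in a thread (or at a time) where no loop is active. A self-contained sketch of the failure and the usual thread-safe alternative; the names here are illustrative, not Open WebUI's actual code:

```python
import asyncio
import threading

async def cleanup_task():
    print("cleanup ran")

def schedule_from_plain_thread_bad():
    # create_task() needs a loop running in *this* thread, so calling it
    # from a plain worker thread raises RuntimeError("no running event loop").
    coro = cleanup_task()
    try:
        asyncio.create_task(coro)
    except RuntimeError as exc:
        coro.close()  # avoid a "coroutine was never awaited" warning
        print(f"bad: {exc}")

def schedule_from_plain_thread_ok(loop: asyncio.AbstractEventLoop):
    # Hand the coroutine to the loop that *is* running, thread-safely.
    asyncio.run_coroutine_threadsafe(cleanup_task(), loop)

async def main():
    loop = asyncio.get_running_loop()
    for target, args in [(schedule_from_plain_thread_bad, ()),
                         (schedule_from_plain_thread_ok, (loop,))]:
        t = threading.Thread(target=target, args=args)
        t.start()
        t.join()
    await asyncio.sleep(0.1)  # give the scheduled cleanup time to run

asyncio.run(main())
```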
@tjbck commented on GitHub (Jul 24, 2025):
@sihyeonn
@sihyeonn commented on GitHub (Jul 24, 2025):
Thank you for the mention, @tjbck. I'll check it as soon as possible!
@sihyeonn commented on GitHub (Jul 25, 2025):
@Ithanil Hi, thank you for reporting this issue!
Just to confirm — are you referring to the logs appearing during the shutdown process?
Have you observed shutdowns occurring frequently or under specific conditions?
Let us know if you can share more details.
@Ithanil commented on GitHub (Jul 25, 2025):
Hi @sihyeonn, thanks for looking into this. To my understanding, there was no shutdown; the container kept running fine, as did Redis. I just reported these logs because they appeared unusual to me.
@rgaricano commented on GitHub (Jul 25, 2025):
Could it be because Sentinel isn't installed/configured and there isn't a master? You can check with:
redis-cli -p 6379
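The same check from Python, for anyone following along; the Sentinel address and the master name "mymaster" are assumptions, so use the values from your sentinel.conf:

```python
from redis.sentinel import MasterNotFoundError, Sentinel

# Note: Sentinel usually listens on 26379, not the Redis data port 6379.
sentinel = Sentinel([("localhost", 26379)], socket_timeout=0.5)
try:
    host, port = sentinel.discover_master("mymaster")
    print(f"master is {host}:{port}")
except MasterNotFoundError:
    print("Sentinel knows no master -- check sentinel.conf")
```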
@Ithanil commented on GitHub (Jul 25, 2025):
@rgaricano No, we have been running with Redis Sentinel for at least 4 months now (and IMO failover was working perfectly fine even before https://github.com/open-webui/open-webui/pull/15718, as it should be per
67ab74d705/redis/sentinel.py (L367)). As I said, there isn't really an issue except for some unusual error messages related to this cleanup task popping up about once per day. So if no one else sees anything similar, it might be just coincidence.
@rgaricano commented on GitHub (Jul 25, 2025):
Ok, sorry for the intervention, I just want to understand the process well.
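For reference, a minimal sketch of the failover behavior @Ithanil points to in redis-py's sentinel.py: master_for() returns a client backed by a SentinelConnectionPool, which re-runs master discovery when connections are (re)established, so a failover needs no application-side handling. The addresses and master name are assumptions:

```python
from redis.sentinel import Sentinel

sentinel = Sentinel([("localhost", 26379)], socket_timeout=0.5)

# The returned client rediscovers the master through its pool when it
# (re)connects, so after a Sentinel-driven failover the next command
# simply goes to the newly promoted master.
master = sentinel.master_for("mymaster", socket_timeout=0.5)
master.set("healthcheck", "ok")
```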
@Ithanil commented on GitHub (Jul 25, 2025):
That said, the error did appear again a few moments ago, so I'm still leaning towards some kind of regression between 0.6.15 and 0.6.18.
@Ithanil commented on GitHub (Jul 29, 2025):
I think that hint is worth looking into: https://github.com/open-webui/open-webui/pull/16014#issuecomment-3127300738