mirror of
https://github.com/open-webui/open-webui.git
synced 2026-05-06 19:08:59 -05:00
[GH-ISSUE #12005] issue: Activated Memory feature leads to OpenWebUI crash 500: Internal Error (v0.5.20 and older versions) #31963
Originally created by @deliciousbob on GitHub (Mar 24, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/12005
Check Existing Issues
Installation Method
Docker
Open WebUI Version
v0.5.20
Ollama Version (if applicable)
LiteLLM as AI-Gateway
Operating System
Docker Container on Ubuntu 22.04
Browser (if applicable)
Chrome
Confirmation
Expected Behavior
We just activated the Memory feature and got a "Socket undefined disconnected due to ping timeout" error.
OpenWebUI does not react to any request after that; the whole page goes down.
Actual Behavior
We just activated the Memory feature and got a "Socket undefined disconnected due to ping timeout" error.
OpenWebUI does not react to any request after that; the whole page goes down.
Steps to Reproduce
For several weeks, OpenWebUI repeatedly became unresponsive for no apparent reason (no log entry explaining the timeout).
We have now identified that some users were using the Memory feature, which causes the whole OpenWebUI instance to crash and stop responding to any requests; the site is no longer accessible (500: Internal Error).
How to reproduce:
1. Restart the Docker container: everything works again.
2. Disable the Memory feature: everything works as expected.
3. Enable the Memory feature: OpenWebUI crashes instantly.
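The recovery and diagnosis steps above can be sketched as shell commands. This is a minimal sketch: the container name `open-webui` is an assumption and should be adjusted to match your deployment.

```shell
#!/bin/sh
# Assumed container name; change this to match your deployment
# (check with `docker ps`).
CONTAINER="open-webui"

# Recover the instance after a Memory-feature crash by restarting
# the container (per the reproduction steps, this restores service
# until the feature is enabled again).
docker restart "$CONTAINER"

# Capture the most recent log lines while toggling the Memory feature,
# to try to catch an error beyond the last connections before the timeout.
docker logs --tail 100 "$CONTAINER"
```

Running `docker logs` immediately after the crash (rather than after the restart) may preserve more of the relevant error output, since a restart can truncate the visible log window.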
Is there an option to disable the Memory feature globally for all users?
This would be a quick temporary fix, but I did not find any environment variable or admin setting for that.
Thank you for your help!
Logs & Screenshots
The Docker container logs only show the last connections before the timeout.
Additional Information
We have had these issues for several weeks, possibly months. We always updated to the latest version when one became available,
but it did not help. We initially thought the problem was caused by LiteLLM receiving too many requests.
@tjbck commented on GitHub (Mar 24, 2025):
Could you confirm you can reproduce this issue without LiteLLM as well?