Mirror of https://github.com/open-webui/open-webui.git (synced 2026-03-22 06:02:06 -05:00)
k8s: data persistence issue #298
Originally created by @jkh1 on GitHub (Feb 14, 2024).
Bug Report
Description
Bug Summary:
I am running ollama-webui on k8s. I've had user accounts disappear on several occasions. In two cases this seems to have been caused by a k8s upgrade that restarted the pods. In another case, this seems to have been caused by a crash.
I believe this is because the user accounts are saved in the container. Can they be saved on a persistent volume instead?
Can this be made configurable?
Steps to Reproduce:
Create user accounts. Stop the web ui pod and restart it.
Expected Behavior:
User accounts should still be there.
Actual Behavior:
User accounts are gone.
This could be a security issue because pods are restarted automatically and, after a restart wipes the accounts, the first user to log in becomes admin.
Environment
Kubernetes
Confirmation:
Installation Method
kubectl apply -k ./kubernetes/manifest
@tjbck commented on GitHub (Feb 14, 2024):
Hi, thanks for reporting this issue. Could you verify that everything is configured correctly and that the docker volume (the `-v ollama-webui:/app/backend/data` flag) has been mounted to the webui? It sounds like the container was unable to find the same volume that was used previously. Keep us updated!
@jkh1 commented on GitHub (Feb 14, 2024):
Thanks for the quick reply. I can see /app/backend/data/ollama.db from inside the running container.
@jkh1 commented on GitHub (Feb 14, 2024):
Maybe this lack of persistence is due to a quirk of the k8s cluster I have access to.
I think this is fixed by changing the webui deployment to a statefulset with a persistent volume claim mounted to /app/backend/data.
For this, I renamed webui-deployment.yaml to webui-statefulset.yaml and made the following changes:
Note that compared to the original from the repo, I increased the amount of memory because I thought it might have crashed due to an OOM error.
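The change described above can be sketched roughly as follows. This is a minimal illustration, not the exact manifest from the issue; the resource names, image tag, memory limit, and storage size are all assumptions:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: open-webui
spec:
  serviceName: open-webui
  replicas: 1
  selector:
    matchLabels:
      app: open-webui
  template:
    metadata:
      labels:
        app: open-webui
    spec:
      containers:
        - name: open-webui
          image: ghcr.io/open-webui/open-webui:main
          resources:
            limits:
              memory: "1Gi"  # raised from the repo default on the OOM hunch
          volumeMounts:
            - name: webui-data
              mountPath: /app/backend/data  # ollama.db (user accounts) lives here
  volumeClaimTemplates:
    - metadata:
        name: webui-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 2Gi
```

With a StatefulSet, each replica gets its own PVC from the `volumeClaimTemplates` entry, so the data at `/app/backend/data` survives pod restarts and rescheduling.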
@jonasbg commented on GitHub (Feb 14, 2024):
What cluster provider do you use? I've deployed this to microk8s with NFS as storage provider and that works well.
@jkh1 commented on GitHub (Feb 14, 2024):
This is a managed cluster provided by our IT services. Storage is NFS but I think it's only persistent if used via a volume claim. I'll update this issue if my fix doesn't work.
Eventually, I'd like to delegate user management to our LDAP server so I am watching #668 and #483.
@tjbck commented on GitHub (Feb 14, 2024):
I don't use k8s personally, maybe @dnviti or @braveokafor could help you with that front.
@jkh1 commented on GitHub (Feb 15, 2024):
Just to clarify, it's not all data that is lost. The downloaded models, for example, survived, which is what led me to trying a PVC for the webui.
@jannikstdl commented on GitHub (Feb 15, 2024):
Yes, we use this. By default the Ollama pod has a PVC for its models. The WebUI k8s YAMLs currently do not have a PVC by default (unlike the Docker images, which do). We set a PVC on /app/backend/data; this covers all relevant data.
You can set this yourself, or I recommend adding it to the k8s files.
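For anyone who wants to keep the existing Deployment rather than converting it to a StatefulSet, a standalone PVC can be mounted at the same path. A minimal sketch, assuming a claim named `webui-data` and a 2Gi request (both hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: webui-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 2Gi
---
# Then, in the webui Deployment's pod spec, reference the claim and
# mount it where the container keeps its database:
#
#   volumes:
#     - name: webui-data
#       persistentVolumeClaim:
#         claimName: webui-data
#
#   containers[].volumeMounts:
#     - name: webui-data
#       mountPath: /app/backend/data
```

Note that a plain PVC with `ReadWriteOnce` only works cleanly for a single replica; with multiple replicas, a StatefulSet (or ReadWriteMany storage) is the safer option.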
I'll have a look. @dnviti, maybe you can also change that.