k8s: data persistence issue #298

Closed
opened 2025-11-11 14:15:54 -06:00 by GiteaMirror · 8 comments

Originally created by @jkh1 on GitHub (Feb 14, 2024).

Bug Report

Description

Bug Summary:
I am running ollama-webui on k8s. I've had user accounts disappear on several occasions. In two cases this seems to have been caused by a k8s upgrade that restarted the pods. In another case, this seems to have been caused by a crash.

I believe this is because the user accounts are saved in the container. Can they be saved on a persistent volume instead?
Can this be made configurable?

Steps to Reproduce:
Create user accounts. Stop the web ui pod and restart it.

Expected Behavior:
User accounts should still be there.

Actual Behavior:
User accounts are gone.

This could be a security issue: pods are restarted automatically, and after a restart the first user to log in becomes admin.

Environment

Kubernetes

Confirmation:

  • [x] I have read and followed all the instructions provided in the README.md.
  • [x] I have reviewed the troubleshooting.md document.

Installation Method

kubectl apply -k ./kubernetes/manifest


@tjbck commented on GitHub (Feb 14, 2024):

Hi, thanks for reporting this issue. Could you verify that everything is configured correctly and that the Docker volume (the `-v ollama-webui:/app/backend/data` flag) has been mounted to the webui? It sounds like the container was unable to find the volume that was used previously. Keep us updated!
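
For a plain Docker deployment (not k8s), the volume mount referred to above would look something like this. This is an illustrative sketch: the container name, port mapping, and image tag are assumptions, not taken from the reporter's setup.

```shell
# Mount a named Docker volume at the path where the webui stores its
# database, so /app/backend/data/ollama.db survives container restarts.
# Names, ports, and image tag here are illustrative.
docker run -d \
  -p 3000:8080 \
  -v ollama-webui:/app/backend/data \
  --name ollama-webui \
  ghcr.io/ollama-webui/ollama-webui:main
```

On k8s, the equivalent is mounting a persistent volume at the same path, which is what the rest of this thread works toward.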


@jkh1 commented on GitHub (Feb 14, 2024):

Thanks for the quick reply. I can see /app/backend/data/ollama.db from inside the running container.
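
Seeing the file inside the container does not by itself show that the path is persistent; it may live on the container's ephemeral overlay filesystem. One way to check, as a sketch (the pod name is illustrative), is:

```shell
# Show what filesystem backs the data path; an overlay mount will not
# survive pod rescheduling, while an NFS/PVC mount will.
kubectl exec -it ollama-webui-0 -- df -h /app/backend/data

# Confirm whether the Deployment actually declares a mount at that path.
kubectl describe pod ollama-webui-0 | grep -A 5 'Mounts:'
```

If `df` reports `overlay` rather than an NFS export or block device, the data is stored in the container image layer and will be lost on restart.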


@jkh1 commented on GitHub (Feb 14, 2024):

Maybe this lack of persistence is due to a quirk of the k8s cluster I have access to.
I think this is fixed by changing the webui deployment to a statefulset with a persistent volume claim mounted to /app/backend/data.
For this, I renamed webui-deployment.yaml to webui-statefulset.yaml and made the following changes:

```
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ollama-webui
...
        resources:
          limits:
            cpu: "1000m"
            memory: "4Gi"
        volumeMounts:
        - name: webui-volume
          mountPath: /app/backend/data
...
  volumeClaimTemplates:
  - metadata:
      name: webui-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Gi
```

Note that compared to the original from the repo, I increased the amount of memory because I thought it might have crashed due to an OOM error.
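
An alternative that avoids converting the Deployment to a StatefulSet is to create a standalone PersistentVolumeClaim and reference it from the existing Deployment. A minimal sketch, assuming the cluster has a default StorageClass; the claim name and storage size are illustrative:

```yaml
# webui-pvc.yaml -- standalone claim (names and size are illustrative)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ollama-webui-pvc
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 2Gi
---
# In the existing webui Deployment's pod spec, mount the claim:
#       volumeMounts:
#       - name: webui-volume
#         mountPath: /app/backend/data
#     volumes:
#     - name: webui-volume
#       persistentVolumeClaim:
#         claimName: ollama-webui-pvc
```

A StatefulSet with volumeClaimTemplates, as shown above, achieves the same persistence and additionally gives each replica its own claim; for a single-replica webui either approach works.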


@jonasbg commented on GitHub (Feb 14, 2024):

What cluster provider do you use? I've deployed this to microk8s with NFS as storage provider and that works well.


@jkh1 commented on GitHub (Feb 14, 2024):

This is a managed cluster provided by our IT services. Storage is NFS but I think it's only persistent if used via a volume claim. I'll update this issue if my fix doesn't work.
Eventually, I'd like to delegate user management to our LDAP server so I am watching #668 and #483.


@tjbck commented on GitHub (Feb 14, 2024):

I don't use k8s personally, maybe @dnviti or @braveokafor could help you with that front.


@jkh1 commented on GitHub (Feb 15, 2024):

Just to clarify, it's not all data that is lost. The downloaded models, for example, survived, which is what led me to adding a PVC for the webui.


@jannikstdl commented on GitHub (Feb 15, 2024):

Yes, we use this. By default, the Ollama pod has a PVC for its models. The WebUI k8s YAMLs currently do not have a PVC by default (unlike the Docker images, which do). We set a PVC on /app/backend/data; this covers all relevant data.

You can set this up yourself, but I'd recommend adding it to the k8s files.

I'll have a look. @dnviti may also be able to change that.

Reference: github-starred/open-webui#298