Bug: models are kept locally even when PostgreSQL connection is defined #3328
Originally created by @noamgloberman0 on GitHub (Jan 20, 2025).
Bug Report
Installation Method
Kubernetes deployment (see Environment below).
Environment
Open WebUI Version: 0.5.4
Operating System: Linux (k8s)
Confirmation:
Expected Behavior:
Model metadata should be stored in the table "public.model" of the configured PostgreSQL database.
Actual Behavior:
All other data (user data, chats, and more) is stored in PostgreSQL as expected; only model metadata is missing. The "public.model" table is blank, yet Open WebUI works completely fine.
Description
Bug Summary:
Model metadata is instead pulled from the OPENAI_API_KEY environment configuration, as shown in the logs below.
This is strange because there is a dedicated "model" table, yet it remains empty.
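One way to confirm the table is empty is a direct query — a sketch, assuming shell access to the configured PostgreSQL instance; the connection string below is a placeholder for the actual DATABASE_URL value, and the table name is taken from this report, not verified against the schema:

```shell
# Count rows in the table that Open WebUI is expected to populate.
# Replace the placeholder connection string with your DATABASE_URL.
psql "postgresql://user:password@db-host:5432/openwebui" \
  -c "SELECT count(*) FROM public.model;"
```

A count of 0 while models are visible in the UI would match the behavior described above.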
Reproduction Details
Steps to Reproduce:
Deploy Open-WebUI with both DATABASE_URL (PostgreSQL) and OPENAI_API_KEY (to integrate GPT) set as environment variables.
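For illustration, the reproduction environment can be sketched as a Kubernetes container-spec fragment. All names, image tags, and values here are hypothetical; only the two environment variables from the step above are taken from the report:

```yaml
# Hypothetical Deployment fragment — only DATABASE_URL and
# OPENAI_API_KEY are relevant to the reproduction.
containers:
  - name: open-webui
    image: ghcr.io/open-webui/open-webui:0.5.4   # illustrative tag
    env:
      - name: DATABASE_URL            # points Open WebUI at PostgreSQL
        value: postgresql://user:password@postgres:5432/openwebui
      - name: OPENAI_API_KEY          # enables the OpenAI (GPT) integration
        valueFrom:
          secretKeyRef:
            name: openai-secret       # hypothetical Secret name
            key: api-key
```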
Logs and Screenshots
Browser Console Logs:
[Log] Backend config: – {status: true, name: "Open WebUI", version: "0.5.4", …} (0.CPYBhKJv.js, line 1)
{status: true, name: "Open WebUI", version: "0.5.4", default_locale: "", oauth: {providers: {oidc: "Okta"}}, …}Object
[Log] {id: "02c34ede-c2da-4168-978f-05919c390c6f", email: "hiding this", name: "Noam Globerman", role: "admin", profile_image_url: "/user.png", …} (33._Z7U-bfm.js, line 5)
[Log] connected – "hiding this" (0.CPYBhKJv.js, line 1)
[Log] user-list – {user_ids: ["hiding this"]} (0.CPYBhKJv.js, line 1)
[Log] mounted (Help.Cno5kgre.js, line 137)
[Log] (RichTextInput.BbWzu9Z7.js, line 204)
Docker Container Logs:
v0.5.4 - building the best open-source AI user interface.
https://github.com/open-webui/open-webui
Fetching 30 files: 0% 0/30 [00:00<?, ?it/s]
.gitattributes: 100% 1.23k/1.23k [00:00<00:00, 11.9MB/s]
data_config.json: 100% 39.3k/39.3k [00:00<00:00, 3.47MB/s]
model_qint8_arm64.onnx: 100% 23.0M/23.0M [00:00<00:00, 62.2MB/s]
model_O4.onnx: 100% 45.2M/45.2M [00:00<00:00, 86.0MB/s]
openvino/openvino_model.xml: 100% 211k/211k [00:00<00:00, 7.72MB/s]
model_quint8_avx2.onnx: 100% 23.0M/23.0M [00:00<00:00, 96.6MB/s]
(…)nvino/openvino_model_qint8_quantized.xml: 100% 368k/368k [00:00<00:00, 35.0MB/s]
openvino_model_qint8_quantized.bin: 100% 22.9M/22.9M [00:00<00:00, 81.2MB/s]
model.onnx: 100% 90.4M/90.4M [00:01<00:00, 87.3MB/s]
model_O2.onnx: 100% 90.3M/90.3M [00:01<00:00, 86.7MB/s]
model_O1.onnx: 100% 90.4M/90.4M [00:01<00:00, 80.0MB/s]
train_script.py: 100% 13.2k/13.2k [00:00<00:00, 37.2MB/s]
model_O3.onnx: 100% 90.3M/90.3M [00:01<00:00, 80.0MB/s]
openvino_model.bin: 100% 90.3M/90.3M [00:00<00:00, 97.3MB/s]
pytorch_model.bin: 100% 90.9M/90.9M [00:00<00:00, 131MB/s]
tf_model.h5: 100% 91.0M/91.0M [00:00<00:00, 140MB/s]
rust_model.ot: 100% 90.9M/90.9M [00:00<00:00, 127MB/s]
Fetching 30 files: 100% 30/30 [00:01<00:00, 16.84it/s]
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.