Mirror of https://github.com/open-webui/open-webui.git (synced 2026-05-06 10:58:17 -05:00)
[GH-ISSUE #17203] issue: Adaptive Memory v4 Configuration Persistence Fails, Valves Ignored, Missing function_configs Table #18206
Originally created by @Piste on GitHub (Sep 4, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/17203
Check Existing Issues
Installation Method
Docker
Open WebUI Version
v0.6.26
Ollama Version (if applicable)
0.11.8
Operating System
Ubuntu 24.04 (host for Docker)
Browser (if applicable)
No response
Confirmation
Expected Behavior
Bug Summary: The Adaptive Memory v4 plugin fails to persist custom valve configurations (e.g., setting LLM Provider to "ollama" and Model to "mistral-small:latest"), defaulting to OpenAI’s endpoint (https://api.openai.com/v1/chat/completions) with a 401 error. The function_configs table is missing from the database, causing "Configuration persistence check failed" warnings and preventing memory processing.
Expected Behavior:
Custom valve settings (e.g., LLM Provider: "ollama", LLM Model Name: "mistral-small:latest", LLM Api Endpoint Url: "http://ollama.home.lan:11434/v1") should be saved and applied, routing LLM queries to the local Ollama instance without authentication errors.
The function_configs table should be created in the database to store plugin configurations.
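For reference, Ollama's OpenAI-compatible endpoint accepts chat completions without an API key, which is why the expected configuration can leave the key blank. A minimal sketch of the request the filter would be expected to send (host name and model taken from the report; this is illustrative, not the plugin's actual code):

```python
import json
import urllib.request

# Endpoint URL from the report's expected configuration.
OLLAMA_OPENAI_URL = "http://ollama.home.lan:11434/v1/chat/completions"

def chat_request(model, content, url=OLLAMA_OPENAI_URL):
    """Build the unauthenticated chat-completions request the filter
    should be sending when routed to Ollama (no Authorization header)."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it is a network call against the local Ollama host:
# resp = urllib.request.urlopen(chat_request("mistral-small:latest", "ping"))
```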
Actual Behavior
Valve updates are ignored, and the plugin continues using OpenAI defaults, resulting in 401 errors.
Logs show "Configuration persistence check failed" repeatedly.
Database query reveals no function_configs table, even after reinitializing webui.db.
Steps to Reproduce
Install Open WebUI v0.6.26 via Docker Compose with the following docker-compose.yml:
Pull a model in Ollama: ollama pull mistral-small:latest.
Access Open WebUI at https://openwebui.myurl, navigate to Functions, and install Adaptive Memory v4.
Go to Adaptive Memory v4’s valve settings, set:
Setup Mode: Advanced
LLM Provider: ollama
LLM Model Name: mistral-small:latest
LLM Api Key: (blank)
LLM Api Endpoint Url: (blank or remove)
Save the changes.
Send a test message in a chat (e.g., "I love plum jam...") to trigger memory processing.
Check logs (via docker logs open-webui) for LLM queries and errors.
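The docker-compose.yml referenced in step 1 is not reproduced in this mirror. A minimal sketch of a comparable setup is shown below; the image tag, port mapping, and container paths are assumptions based on the commands elsewhere in the report, not the reporter's actual file:

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:v0.6.26
    container_name: open-webui
    ports:
      - "3000:8080"
    volumes:
      # Matches the host path and DB location referenced in the report.
      - ${DOCKERDIR}/appdata/open-webui:/app/backend/data
    environment:
      - LOG_LEVEL=DEBUG
    restart: unless-stopped
```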
Troubleshooting Steps Attempted:
Changed Setup Mode from "Simple" to "Advanced" to avoid auto-configuration overrides—no effect.
Updated valves multiple times with "ollama" provider and custom settings—still defaults to OpenAI.
Restarted the container (docker restart open-webui) after valve changes—issue persists.
Removed $DOCKERDIR/appdata/open-webui/webui.db, restarted the container to reinitialize the database—function_configs table still missing, OpenAI defaults remain.
Verified Ollama connectivity: docker exec open-webui curl http://ollama.home.lan:11434/api/tags returns model list successfully.
Added LOG_LEVEL=DEBUG to docker-compose.yml, restarted, but no schema creation logs for function_configs.
Attempted manual API update: curl -X POST http://openwebui.myurl/api/v1/functions/id/adaptive_memory_4/valves/update -H "Content-Type: application/json" -d '{"LLM_Provider": "ollama", "LLM_Model_Name": "mistral-small:latest", "LLM_Api_Key": ""}' (with authentication token)—returns 200 but no change in behavior.
Checked database: docker exec open-webui sqlite3 /app/backend/data/webui.db "SELECT name FROM sqlite_master WHERE type='table';" shows no function_configs table.
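A further check worth adding here rests on an assumption about Open WebUI's schema: upstream stores per-function valve values in the `function` table's `valves` column and never creates a `function_configs` table itself, which might explain why that table is missing no matter how the database is reinitialized. Under that assumption, the stored valves can be read directly from webui.db with Python's built-in sqlite3 (a diagnostic sketch, not a fix):

```python
import json
import sqlite3

def dump_valves(db_path, function_id="adaptive_memory_4"):
    """Return the stored valves JSON for one function, or None.

    Assumes Open WebUI's sqlite schema keeps valve values in the
    `function` table's `valves` column. Inside the container the DB
    is at /app/backend/data/webui.db; on the host it is under
    $DOCKERDIR/appdata/open-webui/webui.db.
    """
    con = sqlite3.connect(db_path)
    try:
        row = con.execute(
            "SELECT valves FROM function WHERE id = ?", (function_id,)
        ).fetchone()
    finally:
        con.close()
    if row is None or row[0] is None:
        return None
    return json.loads(row[0])
```

If this returns the Ollama settings but the logs still show the OpenAI URL, the problem is in the function's own config loading rather than in persistence.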
Logs & Screenshots
[2025-09-02T18:44:29.266266+00:00] [FILTER] [INFO] [user:system] [sync_to_async] Sync outlet called from async context - scheduling async execution
[2025-09-02T18:44:29.270687+00:00] [CONFIG] [WARN] [user:system] No API key provided for OpenAI-compatible provider. Filter will work but LLM features may fail.
2025-09-02 14:44:29.271 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 100.90.124.64:0 - "POST /api/chat/completed HTTP/1.1" 200
[2025-09-02T18:44:29.274714+00:00] [OUTLET] [WARN] [user:d85b1d53-bf51-4112-8239-ae093d2c8b31] [config_check] Configuration persistence check failed, continuing with current config
[2025-09-02T18:44:29.294312+00:00] [CONFIG] [WARN] [user:system] No API key provided for OpenAI-compatible provider. Filter will work but LLM features may fail.
[2025-09-02T18:44:29.308313+00:00] [SYSTEM] [INFO] [user:system] LLM Query: Provider=openai_compatible, Model=gpt-4o-mini, URL=https://api.openai.com/v1/chat/completions
[2025-09-02T18:44:29.826583+00:00] [SYSTEM] [ERROR] [user:system] AdaptiveMemory Exception: Authentication failed for openai_compatible
[2025-09-02T18:44:29.827127+00:00] [SYSTEM] [ERROR] [user:system] LLM Error during identification: Error: LLM API (openai_compatible) failed: API (openai_compatible) returned 401: {
"error": {
"message": "You didn't provide an API key...",
"type": "invalid_request_error",
"param": null,
"code": null
}
}
[2025-09-02T18:44:29.827614+00:00] [SYSTEM] [INFO] [user:system] No valid memories to process after filtering/identification.
Additional Information
Note: Earlier logs (e.g., 13:58:00) show POST /api/v1/functions/id/adaptive_memory_4/valves/update HTTP/1.1" 200 without a success confirmation, and sqlite3 query failed with "no such table: function_configs".
Additional Context:
The goal is to use Adaptive Memory v4 with an Ollama instance elsewhere on the local network (not on the Docker host) instead of OpenAI, but the plugin consistently ignores the custom settings.
Permissions on $DOCKERDIR/appdata/open-webui were set with chown -R 1000:1000 and chmod -R 755.
Request for Assistance:
Please advise on how to ensure the function_configs table is created or fix the persistence issue.
Suggest any patches or workarounds for Adaptive Memory v4 compatibility with Open WebUI v0.6.26.
Recommend whether downgrading to v3 is a viable solution.
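One way to narrow down the persistence failure is to write the valves and immediately read them back through the same REST API the report already exercises. The sketch below does that round trip; the field names come from the curl command above, and the read-back path (`/valves` without `/update`) is an assumption based on Open WebUI's function router, so both should be verified against the running instance:

```python
import json
import urllib.request

def valve_payload():
    # Field names follow the curl command in the report.
    return {
        "LLM_Provider": "ollama",
        "LLM_Model_Name": "mistral-small:latest",
        "LLM_Api_Key": "",
    }

def unsaved_keys(requested, saved):
    """Return the keys whose saved value differs from what was requested."""
    return sorted(k for k, v in requested.items() if saved.get(k) != v)

def round_trip(base_url, token, function_id="adaptive_memory_4"):
    """POST the valves, read them back, and report any keys that did not stick."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {token}",
    }
    req = urllib.request.Request(
        f"{base_url}/api/v1/functions/id/{function_id}/valves/update",
        data=json.dumps(valve_payload()).encode(),
        headers=headers,
        method="POST",
    )
    urllib.request.urlopen(req)
    req = urllib.request.Request(
        f"{base_url}/api/v1/functions/id/{function_id}/valves",
        headers=headers,
    )
    saved = json.load(urllib.request.urlopen(req))
    return unsaved_keys(valve_payload(), saved)
```

An empty list from `round_trip` would mean the API persists the values and the function ignores them at runtime; a non-empty list would point at the update endpoint itself.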
@tjbck commented on GitHub (Sep 4, 2025):
The function itself needs to be updated.
@Piste commented on GitHub (Sep 4, 2025):
Okay, thank you @tjbck. I've updated the function-side issue: https://github.com/donbcd/adaptive_memory_owui/issues/2
Is this the proper way to signal them?