[GH-ISSUE #17203] issue: Adaptive Memory v4 Configuration Persistence Fails, Valves Ignored, Missing function_configs Table #33735

Closed
opened 2026-04-25 07:37:01 -05:00 by GiteaMirror · 2 comments

Originally created by @Piste on GitHub (Sep 4, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/17203

Check Existing Issues

  • I have searched the existing issues and discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

v0.6.26

Ollama Version (if applicable)

0.11.8

Operating System

Ubuntu 24.04 (host for Docker)

Browser (if applicable)

No response

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

Bug Summary: The Adaptive Memory v4 plugin fails to persist custom valve configurations (e.g., setting LLM Provider to "ollama" and Model to "mistral-small:latest"), defaulting to OpenAI’s endpoint (https://api.openai.com/v1/chat/completions) with a 401 error. The function_configs table is missing from the database, causing "Configuration persistence check failed" warnings and preventing memory processing.
Expected Behavior:

Custom valve settings (e.g., LLM Provider: "ollama", LLM Model Name: "mistral-small:latest", LLM Api Endpoint Url: "http://ollama.home.lan:11434/v1") should be saved and applied, routing LLM queries to the local Ollama instance without authentication errors.
The function_configs table should be created in the database to store plugin configurations.

Actual Behavior

Valve updates are ignored, and the plugin continues using OpenAI defaults, resulting in 401 errors.
Logs show "Configuration persistence check failed" repeatedly.
Database query reveals no function_configs table, even after reinitializing webui.db.
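
To make the gap between expected and actual behavior concrete, the routing the valves are supposed to control can be sketched as a small function. This is illustrative only: the field names, defaults, and helper are hypothetical stand-ins, not Adaptive Memory v4's actual code.

```python
# Hypothetical sketch of the provider routing the valves should control.
# Field names and defaults are illustrative, not the plugin's real code.

def resolve_llm_target(valves: dict) -> tuple[str, dict]:
    """Return (endpoint URL, request headers) for the configured provider."""
    provider = valves.get("llm_provider", "openai_compatible")
    if provider == "ollama":
        # Ollama exposes an OpenAI-compatible API; no API key is required.
        base = valves.get("llm_api_endpoint_url") or "http://ollama.home.lan:11434/v1"
        return base + "/chat/completions", {}
    # Default path mirrors the buggy behavior observed in the logs:
    # fall back to OpenAI with whatever (possibly empty) key is set.
    base = valves.get("llm_api_endpoint_url") or "https://api.openai.com/v1"
    headers = {"Authorization": f"Bearer {valves.get('llm_api_key', '')}"}
    return base + "/chat/completions", headers

# With the valves from this report, queries should hit the local instance:
url, headers = resolve_llm_target({"llm_provider": "ollama"})
```

The bug, in these terms, is that the saved `llm_provider` value never reaches the routing step, so every request takes the OpenAI fallback branch with an empty key, producing the 401.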

Steps to Reproduce

Install Open WebUI v0.6.26 via Docker Compose with the following docker-compose.yml:

services:
  open-webui:
    image: ghcr.io/open-webui/open-webui
    container_name: open-webui
    ports:
      - "3000:8080"
    volumes:
      - $DOCKERDIR/appdata/open-webui:/app/backend/data
    environment:
      - ENABLE_OPENAI_API=false
      - HUGGINGFACE_API_TOKEN=$HUGGINGFACE_API_TOKEN
      - WEBUI_URL=https://openwebui.myurl
      - LLM_PROVIDER=ollama
      - OLLAMA_BASE_URL=http://ollama.home.lan:11434
    restart: unless-stopped

Pull a model in Ollama: ollama pull mistral-small:latest.
Access Open WebUI at https://openwebui.myurl, navigate to Functions, and install Adaptive Memory v4.
Go to Adaptive Memory v4’s valve settings, set:

Setup Mode: Advanced
LLM Provider: ollama
LLM Model Name: mistral-small:latest
LLM Api Key: (blank)
LLM Api Endpoint Url: (blank or remove)
Save the changes.

Send a test message in a chat (e.g., "I love plum jam...") to trigger memory processing.
Check logs (via docker logs open-webui) for LLM queries and errors.

Troubleshooting Steps Attempted:

Changed Setup Mode from "Simple" to "Advanced" to avoid auto-configuration overrides—no effect.
Updated valves multiple times with "ollama" provider and custom settings—still defaults to OpenAI.
Restarted the container (docker restart open-webui) after valve changes—issue persists.
Removed $DOCKERDIR/appdata/open-webui/webui.db, restarted the container to reinitialize the database—function_configs table still missing, OpenAI defaults remain.
Verified Ollama connectivity: docker exec open-webui curl http://ollama.home.lan:11434/api/tags returns model list successfully.
Added LOG_LEVEL=DEBUG to docker-compose.yml, restarted, but no schema creation logs for function_configs.
Attempted manual API update: curl -X POST http://openwebui.myurl/api/v1/functions/id/adaptive_memory_4/valves/update -H "Content-Type: application/json" -d '{"LLM_Provider": "ollama", "LLM_Model_Name": "mistral-small:latest", "LLM_Api_Key": ""}' (with authentication token)—returns 200 but no change in behavior.
Checked database: docker exec open-webui sqlite3 /app/backend/data/webui.db "SELECT name FROM sqlite_master WHERE type='table';" shows no function_configs table.
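
One note on the missing table: to the best of my understanding, recent Open WebUI versions persist function valves as a JSON blob in the function table itself rather than in a dedicated function_configs table, so the plugin's persistence check may simply be looking in the wrong place. That schema detail is an assumption on my part; the round-trip can be sketched against a mock table:

```python
import json
import sqlite3

# Mock of what I believe Open WebUI's "function" table looks like:
# valves are stored as a JSON blob per function row, so there is no
# separate function_configs table to find.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE function (id TEXT PRIMARY KEY, valves TEXT)")
db.execute(
    "INSERT INTO function (id, valves) VALUES (?, ?)",
    (
        "adaptive_memory_4",
        json.dumps({"LLM_Provider": "ollama",
                    "LLM_Model_Name": "mistral-small:latest"}),
    ),
)

# Read back what a valves/update call would have persisted.
row = db.execute(
    "SELECT valves FROM function WHERE id = ?", ("adaptive_memory_4",)
).fetchone()
valves = json.loads(row[0])
```

If that assumption holds, the equivalent check on a real install would be querying the valves column of the function table for id adaptive_memory_4 in /app/backend/data/webui.db, rather than looking for function_configs.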

Logs & Screenshots

[2025-09-02T18:44:29.266266+00:00] [FILTER] [INFO] [user:system] [sync_to_async] Sync outlet called from async context - scheduling async execution
[2025-09-02T18:44:29.270687+00:00] [CONFIG] [WARN] [user:system] No API key provided for OpenAI-compatible provider. Filter will work but LLM features may fail.
2025-09-02 14:44:29.271 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 100.90.124.64:0 - "POST /api/chat/completed HTTP/1.1" 200
[2025-09-02T18:44:29.274714+00:00] [OUTLET] [WARN] [user:d85b1d53-bf51-4112-8239-ae093d2c8b31] [config_check] Configuration persistence check failed, continuing with current config
[2025-09-02T18:44:29.294312+00:00] [CONFIG] [WARN] [user:system] No API key provided for OpenAI-compatible provider. Filter will work but LLM features may fail.
[2025-09-02T18:44:29.308313+00:00] [SYSTEM] [INFO] [user:system] LLM Query: Provider=openai_compatible, Model=gpt-4o-mini, URL=https://api.openai.com/v1/chat/completions
[2025-09-02T18:44:29.826583+00:00] [SYSTEM] [ERROR] [user:system] AdaptiveMemory Exception: Authentication failed for openai_compatible
[2025-09-02T18:44:29.827127+00:00] [SYSTEM] [ERROR] [user:system] LLM Error during identification: Error: LLM API (openai_compatible) failed: API (openai_compatible) returned 401: {
"error": {
"message": "You didn't provide an API key...",
"type": "invalid_request_error",
"param": null,
"code": null
}
}
[2025-09-02T18:44:29.827614+00:00] [SYSTEM] [INFO] [user:system] No valid memories to process after filtering/identification.

Additional Information

Note: Earlier logs (e.g., 13:58:00) show POST /api/v1/functions/id/adaptive_memory_4/valves/update HTTP/1.1" 200 but no success confirmation, and a sqlite3 query failed with "no such table: function_configs".

Additional Context:

The goal is to use Adaptive Memory v4 with an Ollama instance elsewhere on the local network (not on the same host) instead of OpenAI, but the plugin consistently ignores the custom settings.
Permissions on $DOCKERDIR/appdata/open-webui were set with chown -R 1000:1000 and chmod -R 755.

Request for Assistance:

Please advise on how to ensure the function_configs table is created or fix the persistence issue.
Suggest any patches or workarounds for Adaptive Memory v4 compatibility with Open WebUI v0.6.26.
Recommend whether downgrading to v3 is a viable solution.

GiteaMirror added the bug label 2026-04-25 07:37:01 -05:00

@tjbck commented on GitHub (Sep 4, 2025):

The function itself needs to be updated.
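
For anyone landing here: "updated" presumably means having the function rely on Open WebUI's built-in valve persistence instead of a custom function_configs table. Open WebUI functions declare their settings through a Pydantic Valves model that the server stores and restores itself. A minimal sketch of that pattern, with illustrative field names (not Adaptive Memory v4's actual ones):

```python
from pydantic import BaseModel, Field

# Minimal sketch of the standard Open WebUI filter valve pattern.
# Field names here are illustrative. Open WebUI persists self.valves
# for the function, so no custom function_configs table is needed.
class Filter:
    class Valves(BaseModel):
        llm_provider: str = Field(default="openai_compatible")
        llm_model_name: str = Field(default="gpt-4o-mini")
        llm_api_endpoint_url: str = Field(default="https://api.openai.com/v1")
        llm_api_key: str = Field(default="")

    def __init__(self):
        # The server replaces these defaults with the stored valve values.
        self.valves = self.Valves()

# What a valves/update call effectively does on the server side:
f = Filter()
f.valves = Filter.Valves(
    llm_provider="ollama",
    llm_model_name="mistral-small:latest",
    llm_api_endpoint_url="http://ollama.home.lan:11434/v1",
)
```

A function written this way reads self.valves at request time and never needs its own persistence layer or "configuration persistence check".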


@Piste commented on GitHub (Sep 4, 2025):

Okay, thank you @tjbck . I've updated the function-side issue: https://github.com/donbcd/adaptive_memory_owui/issues/2
Is this the proper way to signal them?
