[GH-ISSUE #21795] bug: Adaptive Memory v3 - extraction fails with openai_compatible provider due to response_format conflict #19572

Closed
opened 2026-04-20 02:03:51 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @garnetlyx on GitHub (Feb 23, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/21795

Bug Description

The Adaptive Memory v3 plugin has multiple issues in the `openai_compatible` extraction pipeline that cause memory extraction to silently fail or produce incorrect or duplicate entries.

Environment

  • Open WebUI: 0.8.3
  • Plugin: Adaptive Memory v3 (adaptive_memory_v2)
  • LLM Provider: openai_compatible (llama-server with Qwen3-1.7B)
  • OS: macOS

Bug 1 (High): response_format: json_object conflicts with extraction prompt

The extraction prompt requires a JSON array [{...}], but the API request includes "response_format": {"type": "json_object"} which constrains output to a JSON object {...}.

Impact:

  • LLM cannot output an array — grammar constraint forces {...} format
  • Some models/backends degrade to empty output ([], length 3) → no memories saved at all
  • When {...} is returned, it falls to the lossy fallback path (Bug 2/3)
  • Multi-memory extraction is impossible

Reproduction:

```python
# With response_format: json_object
{"name": "Sally", "occupation": "works at Anthropic"}  # wrong format

# Without response_format — correct
[{"operation": "NEW", "content": "Sally is the user's sister...", "tags": ["relationship"], "memory_bank": "Personal"}]
```

Fix: Remove "response_format": {"type": "json_object"} from the openai_compatible request body in query_llm_with_retry(). The prompt already enforces JSON formatting.
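A minimal sketch of what the change might look like. The helper name `build_payload` and the `provider` argument are illustrative, not the plugin's actual API; the point is simply that `response_format` is omitted for `openai_compatible` while Ollama keeps its array-friendly `"format": "json"`:

```python
# Hypothetical sketch of the request-body fix in query_llm_with_retry().
# build_payload and provider are illustrative names, not the plugin's real API.
def build_payload(provider: str, model: str, messages: list) -> dict:
    payload = {"model": model, "messages": messages}
    if provider == "ollama":
        # Ollama's "format": "json" still permits a top-level JSON array.
        payload["format"] = "json"
    # For openai_compatible, omit response_format entirely:
    # {"type": "json_object"} would forbid the [{...}] array the prompt requires.
    return payload
```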

Bug 2 (Medium): ignore_keys missing structural fields → junk entries

In _convert_dict_to_operations(), the fallback path's ignore_keys is:

```python
ignore_keys = {"notes", "meta", "trivia"}
```

Missing: `"operation"`, `"tags"`, `"memory_bank"`, `"id"`. When the LLM returns a dict, `memory_bank: "Personal"` gets saved as an independent memory:

```
[Tags: preference] Memory bank: Personal [Memory Bank: General]  ← junk
```

Fix:

```python
ignore_keys = {"notes", "meta", "trivia", "operation", "tags", "memory_bank", "id"}
```
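A small sketch of the fallback flattening path with the expanded set, to show why the extra keys matter. The function name `flatten_fallback` is illustrative; it stands in for the key-value loop inside `_convert_dict_to_operations()`:

```python
# Illustrative sketch of the fallback path with the expanded ignore_keys.
# flatten_fallback stands in for the loop inside _convert_dict_to_operations().
ignore_keys = {"notes", "meta", "trivia", "operation", "tags", "memory_bank", "id"}

def flatten_fallback(data: dict) -> list[str]:
    # Only genuine content fields survive; structural fields no longer
    # become standalone "memories" like "Memory bank: Personal".
    return [f"{k}: {v}" for k, v in data.items() if k not in ignore_keys]
```

With the original three-key set, the same input would also emit `"memory_bank: Personal"` and `"operation: NEW"` as junk entries.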

Bug 3 (Low): Single dict response loses tags and memory_bank

When the LLM returns a valid single operation dict (with `operation`, `content`, `tags`, and `memory_bank` fields), the code doesn't recognize it as a memory operation: it skips the primary list check and falls through to key-value flattening, where `tags` (a list) is dropped and the hardcoded default tag `"preference"` is used.

Fix: Add a check between the primary list handling and the fallback: if the dict contains `content` (a string) plus `operation`, treat it as a single memory operation directly, preserving the original tags and memory bank.
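A hedged sketch of that check. `convert_dict` is an illustrative stand-in for `_convert_dict_to_operations()`; the field names follow the issue's examples, and the fallback branch mirrors the lossy behavior described above:

```python
# Sketch of the proposed single-dict check (convert_dict is an illustrative
# stand-in for _convert_dict_to_operations(); field names follow the issue).
def convert_dict(data: dict) -> list[dict]:
    if isinstance(data.get("content"), str) and "operation" in data:
        # Already a single memory operation: keep tags and memory_bank intact.
        return [data]
    # Otherwise fall through to the lossy key-value fallback, which applies
    # the hardcoded "preference"/"General" defaults described in the issue.
    ignore_keys = {"notes", "meta", "trivia", "operation", "tags", "memory_bank", "id"}
    return [
        {"operation": "NEW", "content": f"{k}: {v}",
         "tags": ["preference"], "memory_bank": "General"}
        for k, v in data.items() if k not in ignore_keys
    ]
```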

Summary

| Bug | Impact | Location |
|-----|--------|----------|
| `response_format: json_object` | Extraction fails or wrong format | `query_llm_with_retry()` |
| Missing `ignore_keys` | Junk duplicate entries | `_convert_dict_to_operations()` |
| No single-dict handling | Tags/bank default to preference/General | `_convert_dict_to_operations()` |

Note: Ollama path is not affected by Bug 1 (uses "format": "json" which allows arrays).


@Classic298 commented on GitHub (Feb 23, 2026):

Why do you open bug reports for third party plugins in the Open WebUI repository? Report it to the plugin creator.


Reference: github-starred/open-webui#19572