[PR #22331] [CLOSED] fix: preserve edited message content when enriching with DB output items #26617

Closed
opened 2026-04-20 06:36:21 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/open-webui/open-webui/pull/22331
Author: @Classic298
Created: 3/6/2026
Status: Closed

Base: dev ← Head: fix/edited-message-content-lost


📝 Commits (2)

  • 6c256b0 fix: preserve edited message content when loading from DB
  • 0e5702a fix: preserve edited message content when enriching with DB output items

📊 Changes

1 file changed (+36 additions, -5 deletions)


📝 backend/open_webui/utils/middleware.py (+36 -5)

📄 Description

fix: preserve edited message content when enriching with DB output items

Fixes #22327

Problem

When a user edits an assistant's response (e.g., changing "Cerulean" to "Yellow") and then sends a follow-up message, the LLM receives the original unedited content instead of the edited version. This causes the AI to reference information the user explicitly corrected.

Root Cause

In process_chat_payload, the backend loads the message chain from the database and fully replaces the frontend-sent messages with the DB-loaded versions. The frontend sends the correct, edited content, but the backend discards it in favor of the (potentially stale) DB content.

This DB-loading feature was introduced to preserve structured output items (for tool call reconstruction) that the frontend strips when building messages. However, its implementation replaced all message fields — including content — rather than merging only the DB-specific fields.
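The wholesale-replacement behavior described above can be sketched as follows. This is an illustrative reconstruction, not the actual Open WebUI code; the function name and message-dict shape are assumptions.

```python
# Hypothetical sketch of the pre-fix behavior in process_chat_payload.
# Every frontend message is swapped for its DB counterpart, so a
# client-side edit to "content" is silently discarded.

def replace_with_db_messages(frontend_messages, db_messages_by_id):
    return [
        # If the DB has a message with this id, use it wholesale;
        # otherwise fall back to the frontend-sent message.
        db_messages_by_id.get(msg.get("id"), msg)
        for msg in frontend_messages
    ]
```

With this logic, an edited assistant reply ("Yellow") is replaced by the stale DB copy ("Cerulean") before the payload reaches the LLM.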

Fix

The frontend messages are now treated as authoritative for content (they reflect user edits). The DB is only used to enrich messages with:

  • output: structured tool call data needed for proper OpenAI-format message reconstruction
  • files: image file metadata for content injection

Content from the DB never overwrites the frontend-sent content.
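The merge strategy above can be sketched roughly as below. This is a hedged illustration of the approach, not the actual patch to `middleware.py`; the function name and field layout are assumptions based on the description.

```python
# Illustrative sketch of the post-fix enrichment: frontend content is
# authoritative, and only DB-specific fields are merged in.

ENRICH_FIELDS = ("output", "files")  # tool-call data and image metadata

def enrich_with_db_fields(frontend_messages, db_messages_by_id):
    enriched = []
    for msg in frontend_messages:
        merged = dict(msg)  # keep the (possibly edited) frontend content
        db_msg = db_messages_by_id.get(msg.get("id"))
        if db_msg:
            for field in ENRICH_FIELDS:
                # Copy only fields the frontend strips when building
                # messages; never touch "content".
                if field in db_msg:
                    merged[field] = db_msg[field]
        enriched.append(merged)
    return enriched
```

Under this scheme a user edit survives the round trip: the DB contributes `output` and `files` for reconstruction, while `content` always comes from the frontend payload.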

Testing DONE

  1. Start a chat and ask a question that gets a specific answer
  2. Edit the assistant's response to say something different
  3. Ask a follow-up that references the edited content
  4. Verify the AI correctly references the edited version, not the original

Contributor License Agreement

By submitting this pull request, I confirm that I have read and fully agree to the Contributor License Agreement (CLA), and I am providing my contributions under its terms.

Note

Deleting the CLA section will lead to immediate closure of your PR and it will not be merged in.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

Reference: github-starred/open-webui#26617