[GH-ISSUE #22327] issue: Wrong chat history is sent to inference server #58367

Closed
opened 2026-05-05 23:03:09 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @LostPhysx on GitHub (Mar 6, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/22327

Check Existing Issues

  • I have searched for any existing and/or related issues.
  • I have searched for any existing and/or related discussions.
  • I have also searched in the CLOSED issues AND CLOSED discussions and found no related items (your issue might already be addressed on the development branch!).
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

0.8.8

Ollama Version (if applicable)

No response

Operating System

Ubuntu 24.04

Browser (if applicable)

No response

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

Create a new chat. Ask "What is your favorite color?". Get a response, e.g. "Blue". Edit the response to say something else, e.g. "Yellow". Ask "What color did you name just now?". Correct response "Yellow".

Actual Behavior

Create a new chat. Ask "What is your favorite color?". Get a response, e.g. "Blue". Edit the response to say something else, e.g. "Yellow". Ask "What color did you name just now?". Wrong response "Blue".

Steps to Reproduce

  1. Create a new chat
  2. Ask a question
  3. Edit the answer to something different
  4. Ask something else, referring to the modified answer

Result: The response refers to the original answer, not the edited one.

Logs & Screenshots

2026-03-06 17:35:32.407 | DEBUG | open_webui.utils.chat:generate_chat_completion:165 - generate_chat_completion: {'stream': True, 'model': 'meta-llama/llama-4-scout', 'messages': [{'role': 'user', 'content': 'What color do you like best? Only name one color, without explanation.'}, {'role': 'assistant', 'content': 'Cerulean.'}, {'role': 'user', 'content': 'What color did you say?'}], 'metadata': {'user_id': 'd2577244-07ea-4dd1-b4e5-3640097f5460', 'chat_id': '20482e83-d704-4407-8e65-e98c87c3a85d', 'message_id': 'f04c7cc6-8138-424a-878f-868fb08fa20c', 'parent_message': {'id': 'c2db7624-894f-499c-8f9c-b7c3dbfa4ccf', 'parentId': 'fb715f6a-2f64-4b6a-9c59-6bda8dc947b3', 'childrenIds': ['7ae40e6b-e9f6-414c-99b6-465480d35819', 'b24fb4fc-ffa7-4f89-af8a-6ee64161eff3', 'cedaac50-9944-4242-84d7-97c2477fafae', 'f04c7cc6-8138-424a-878f-868fb08fa20c'], 'role': 'user', 'content': 'What color did you say?', 'timestamp': 1772818162, 'models': ['deepseek/deepseek-v3.2']}, 'parent_message_id': 'c2db7624-894f-499c-8f9c-b7c3dbfa4ccf', 'session_id': 'b8kZY9Dwn-rTdSImAAAV', 'filter_ids': [], 'tool_ids': None, 'tool_servers': [], 'files': None, 'features': {'voice': False, 'image_generation': False, 'code_interpreter': False, 'web_search': False}, 'variables': {'{{USER_NAME}}': 'Andre', '{{USER_EMAIL}}': 'admin@mydomain.eu', '{{USER_LOCATION}}': 'Unknown', '{{CURRENT_DATETIME}}': '2026-03-06 18:35:33', '{{CURRENT_DATE}}': '2026-03-06', '{{CURRENT_TIME}}': '18:35:33', '{{CURRENT_WEEKDAY}}': 'Friday', '{{CURRENT_TIMEZONE}}': 'Europe/Berlin', '{{USER_LANGUAGE}}': 'de-DE'}, 'model': {'id': 'meta-llama/llama-4-scout', 'name': 'meta-llama/llama-4-scout', 'owned_by': 'openai', 'openai': {'id': 'meta-llama/llama-4-scout', 'name': 'meta-llama/llama-4-scout', 'owned_by': 'openai', 'openai': {'id': 'meta-llama/llama-4-scout'}, 'urlIdx': 0, 'connection_type': 'external'}, 'urlIdx': 0, 'connection_type': 'external', 'actions': [], 'filters': [], 'tags': []}, 'direct': False, 'params': 
{'stream_delta_chunk_size': None, 'reasoning_tags': None, 'function_calling': 'default'}, 'terminal_id': None}}

Additional Information

I was upgrading directly from 6.x to 0.8.5 when I first noticed the bug. I have updated and checked every version since then, and the bug is consistently present. I even set up a completely fresh v0.8.8 instance to verify this.

From the logs, it's clear that this is not an issue with the inference provider: Open WebUI actually sends the wrong (i.e. unedited) chat history. I had edited the LLM response from "Cerulean." to "Yellow.", yet the log above still shows 'content': 'Cerulean.'.
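The debug log shows that messages are stored as a tree ('parentId', 'childrenIds'), so the history sent upstream presumably has to be rebuilt by walking parent links from the newest message. A minimal, hypothetical sketch of that walk (this is not Open WebUI's actual code; field names are borrowed from the log above) illustrates why the edited content should appear in the payload, since the edit lives in the stored tree:

```python
# Hypothetical sketch: rebuild the flat chat history sent to the
# inference server by following parentId links from the leaf message
# up to the root, then reversing. If a backend instead reuses a stale
# copy of the history, the unedited text gets sent (the reported bug).

def build_history(messages, leaf_id):
    """Walk parentId links from leaf_id to the root; return oldest-first."""
    history = []
    current = messages.get(leaf_id)
    while current is not None:
        history.append({"role": current["role"], "content": current["content"]})
        current = messages.get(current.get("parentId"))
    return list(reversed(history))

# Toy tree mirroring the report: the assistant reply was edited
# from "Cerulean." to "Yellow." in the stored messages.
messages = {
    "u1": {"role": "user", "content": "What color do you like best?", "parentId": None},
    "a1": {"role": "assistant", "content": "Yellow.", "parentId": "u1"},  # edited value
    "u2": {"role": "user", "content": "What color did you say?", "parentId": "a1"},
}

print(build_history(messages, "u2"))
```

Rebuilt this way, the second entry carries the edited "Yellow.", whereas the captured request above still contains "Cerulean.".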

GiteaMirror added the bug label 2026-05-05 23:03:09 -05:00

@LostPhysx commented on GitHub (Mar 6, 2026):

I have also set up another container, reverting to 0.6.43, and the bug is not present there.


@Classic298 commented on GitHub (Mar 6, 2026):

let me try and reproduce this


@Classic298 commented on GitHub (Mar 6, 2026):

can reproduce

@Classic298 commented on GitHub (Mar 6, 2026):

https://github.com/user-attachments/assets/674fbb47-9a00-4d47-a0da-d3deb2ede527

@Classic298 commented on GitHub (Mar 6, 2026):

https://github.com/open-webui/open-webui/pull/22331
this fixes it


@tjbck commented on GitHub (Mar 6, 2026):

Duplicate.


@LostPhysx commented on GitHub (Mar 7, 2026):

> Duplicate.

Duplicate of what? I did try to find an existing issue; if you link it, I'll know what to search for next time.


@Classic298 commented on GitHub (Mar 7, 2026):

@LostPhysx it's the same underlying issue as "continue response" not working and "edit message" not working.

Reference: github-starred/open-webui#58367