[GH-ISSUE #21564] issue: "Edit" and "Continue Response" don't work properly #35047

Open
opened 2026-04-25 09:15:13 -05:00 by GiteaMirror · 33 comments

Originally created by @TheObserver-000 on GitHub (Feb 18, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/21564

Check Existing Issues

  • I have searched for any existing and/or related issues.
  • I have searched for any existing and/or related discussions.
  • I have also searched in the CLOSED issues AND CLOSED discussions and found no related items (your issue might already be addressed on the development branch!).
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

0.8.3

Ollama Version (if applicable)

0.16.1

Operating System

Windows 11

Browser (if applicable)

Chrome 145

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
      • Start with the initial platform/version/OS and dependencies used,
      • Specify exact install/launch/configure commands,
      • List URLs visited, user input (incl. example values/emails/passwords if needed),
      • Describe all options and toggles enabled or changed,
      • Include any files or environmental changes,
      • Identify the expected and actual result at each stage,
      • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

"Continue Response" must complete the "Paris is the capital of" with "France." and further attempts with "Continue Response" should not generate anything else because the response is finished, as the prompt says.

Actual Behavior

"Edit" Doesn't actually edit the message, and "Continue Response" generates an additional response instead of continuing the current one.

Steps to Reproduce

See the attached video.

Logs & Screenshots

https://github.com/user-attachments/assets/8a16b5ee-3956-40e7-b108-f171a379211f

Additional Information

No response

GiteaMirror added the bug label 2026-04-25 09:15:13 -05:00

@Ghosthree3 commented on GitHub (Feb 18, 2026):

It has broken in two stages since 0.8.0.

In 0.8.0, if you edited a response that had already completed, pressing continue would generate output from the end of your edit, but append it to the original completed message. So in your video example, if you deleted "France." and hit continue, it would output "Paris is the capital of France.France.". If you stopped the output before it managed to complete, then continue would function as intended after editing the response.

Then in 0.8.2 it broke further: edits are now ignored entirely, and pressing continue appends output similar to what you would get from regenerate, tacked onto whatever the previous final output was (as shown in your video).

I've pinned to 0.8.1 for the time being. While it is annoying to lose the continue functionality when the output does complete, if I catch and stop the output before it finishes, then it works as it did prior to 0.8.0 (and if it does complete, I hit regenerate, quickly stop it, and paste in my edited output before continuing).
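(To make the 0.8.0-era behaviour above concrete, here is a toy illustration in plain Python; it is not Open WebUI code, just the string arithmetic implied by the description.)

```python
# Toy illustration of the 0.8.0 behaviour described above; not Open WebUI code.
original = "Paris is the capital of France."
edited = "Paris is the capital of "  # user deleted "France."
continuation = "France."             # what the model produces from the edited prefix

print(edited + continuation)    # intended: "Paris is the capital of France."
print(original + continuation)  # 0.8.0 behaviour: "Paris is the capital of France.France."
```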


@MrOutsider commented on GitHub (Feb 20, 2026):

Same issue to the T.
My Ollama version is 0.16.2


@quark6789 commented on GitHub (Feb 21, 2026):

I have encountered the same thing.

  • Installation method: uv
  • Open WebUI version: 0.8.3
  • LM Studio version: 0.4.3
  • OS: macOS Tahoe 26.3 and iOS 26.2.1
  • Browser: Safari (on both macOS and iOS)

@TheObserver-000 commented on GitHub (Feb 23, 2026):

As of v0.8.5, it's still not fixed


@andrewmm511 commented on GitHub (Feb 23, 2026):

Same. I use the continue functionality every day for pre-fill.


@Varalan commented on GitHub (Feb 26, 2026):

I'm getting the same exact thing with 0.8.5/docker/LM Studio 0.4.4


@BillyJin-NSRL-V-KVSHIN commented on GitHub (Mar 3, 2026):

0.8.8, still not fixed


@MenceyBentayga commented on GitHub (Mar 8, 2026):

The PR still hasn't gone through as of 0.8.9; it says it's waiting for a maintainer.

Isn't it kind of a basic feature to wait two weeks on?


@frenzybiscuit commented on GitHub (Mar 8, 2026):

Same issue.

I will let the LLM respond. Then I will go back and edit the LLM message.

Then, I reply.

It will recall the deleted portion of the response.


@MenceyBentayga commented on GitHub (Mar 9, 2026):

As of 0.8.10, the 20-line pull request has been waiting two weeks for a maintainer to look at it. Hopefully the fix for such an important user-facing feature can be merged soon.


@Iniquitatis commented on GitHub (Mar 9, 2026):

> important user-facing feature

I'd argue a fundamental feature.


@Classic298 commented on GitHub (Mar 9, 2026):

has anyone tested the PR? I see zero comments under the PR of people testing it and confirming it also has no downstream effects or unintended side effects.

It would definitely be easier for Tim to merge it if anyone here would actually also test the PR and help get it merged.


@MenceyBentayga commented on GitHub (Mar 9, 2026):

> has anyone tested the PR? I see zero comments under the PR of people testing it and confirming it also has no downstream effects or unintended side effects.
>
> It would definitely be easier for Tim to merge it if anyone here would actually also test the PR and help get it merged.

Will do at my earliest possible chance and report back. Is there anything in particular that should be tested other than the visible stuff the changed code could affect?


@Classic298 commented on GitHub (Mar 9, 2026):

Test the obvious first, which is what this issue is about:

  • Continue response
  • Edit message

But also check around message branching:

  • Edit your own message to create message branches, and check that the correct branch is sent with the correctly edited assistant message or the correctly continued response.
  • Do the same when you branch the assistant message by regenerating it.
  • Check what happens if you edit or continue a response for older messages in the history.

So everything around message branching should also be tested, to confirm it still behaves the same way and that the fix works correctly for these cases too. That's what I can think of right now that would be good to test.
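(One low-effort way to run through these cases is to watch exactly which messages the frontend sends. A rough sketch of a logging stub for an OpenAI-compatible endpoint follows; the port, paths, and the idea of pointing an Open WebUI "OpenAI API" connection at it are assumptions about a test setup, not project code. If the UI requests streaming, disable streaming for the test or extend the stub accordingly.)

```python
# Standard-library logging stub for an OpenAI-compatible chat endpoint.
# Testing aid only: point an Open WebUI "OpenAI API" connection at
# http://<host>:8081/v1 and watch stdout to see which messages the UI sends
# after edit / continue / regenerate.  Returns a fixed, non-streaming reply.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class EchoHandler(BaseHTTPRequestHandler):
    def _send_json(self, obj):
        data = json.dumps(obj).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def do_GET(self):
        # Connection probes typically hit /models; advertise one fake model.
        self._send_json({"object": "list", "data": [{"id": "stub-model", "object": "model"}]})

    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        payload = json.loads(body or b"{}")
        print(json.dumps(payload.get("messages", []), indent=2))  # what the UI actually sent
        self._send_json({
            "id": "stub",
            "object": "chat.completion",
            "model": payload.get("model", "stub-model"),
            "choices": [{
                "index": 0,
                "message": {"role": "assistant", "content": "stub reply"},
                "finish_reason": "stop",
            }],
        })


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8081), EchoHandler).serve_forever()
```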


@sayan1999 commented on GitHub (Mar 11, 2026):

Please fix this; this is a very important feature.


@sayan1999 commented on GitHub (Mar 15, 2026):

Why is no one looking into this fundamental bug? Can anyone test the fix and add comments to the PR?


@Classic298 commented on GitHub (Mar 15, 2026):

Hey @sayan1999 — to answer your questions directly:

"Why is no one looking into this?" — people are. Two PRs have been opened, one was closed because it was slop. The other is still open but someone already reported that the second PR is also not working at all. The issue is tracked. "No one looking into it" and "not yet merged" are very different things. Fixing message branching edge cases isn't a one-liner; there are real downstream risks which I outlined above, and a rushed merge of a broken fix is worse than the current state.

"Can anyone test the fix?" — Yes - you can. That's exactly what I asked the community to do a week ago. The PR has been sitting there with zero community-verified testing. If this feature matters enough for you to comment TWICE asking why it's not fixed, then it should matter enough for you to spend 10 minutes pulling the branch and running through the test cases I listed. That feedback directly helps Tim make the merge decision - or closing the second PR as well if it also turns out to not work.

Open source moves faster when people contribute, not just when they ask.


@Classic298 commented on GitHub (Mar 15, 2026):

@MenceyBentayga any chance you got around to testing yet?


@MenceyBentayga commented on GitHub (Mar 15, 2026):

> @MenceyBentayga any chance you got around to testing yet?

Hello! I thought I had seen a response saying that someone else had tested it and it wasn't working, so I didn't try it myself.

Now I look at this thread and that response is nowhere to be found. Maybe the user deleted it. In that case, I will test it myself tonight.


@MenceyBentayga commented on GitHub (Mar 15, 2026):

Screenshots: https://github.com/user-attachments/assets/1a543041-8d09-4619-9fb2-9021ce2c7619 and https://github.com/user-attachments/assets/a0a4fb9b-d135-4096-8506-7495352cf7c1

Tried merging that pull request on top of 0.8.5. Sadly, it doesn't seem to work.


@Classic298 commented on GitHub (Mar 15, 2026):

Thanks. Closing that PR.


@eniraa commented on GitHub (Mar 17, 2026):

Would appreciate if someone could take a look at my pull request (#22773), which only fixes message editing functionality. I'm happy to make any changes needed to get this merged.


@MenceyBentayga commented on GitHub (Mar 17, 2026):

> Would appreciate if someone could take a look at my pull request (#22773), which only fixes message editing functionality. I'm happy to make any changes needed to get this merged.

Screenshot: https://github.com/user-attachments/assets/1c3a8d4e-f109-49a3-95e0-9b71c0bcf4c4

LGTM!


@frenzybiscuit commented on GitHub (Mar 26, 2026):

Hello,

This is still an issue in 0.8.11. In addition, regenerations don't work properly either. The first time you regenerate, the LLM will mix context and get confused. The second time you regenerate, it works fine.


@syndicatedshannon commented on GitHub (Mar 30, 2026):

I briefly attempted to verify this fix, but still saw the issue I believe it is intended to resolve. The edit appears to be applied, but is then undone when continuing with any further operation.

Initial question:
https://github.com/user-attachments/assets/f6949f18-b261-4b56-b797-df0ade51a7e4

Edit, deleting "Paris.":
https://github.com/user-attachments/assets/50db67ff-8835-476f-9df0-00fcf348a069

Then press "Continue":
https://github.com/user-attachments/assets/45cbb062-5cf3-4e5a-a419-3f751a70de4a

Alternately edit, replacing "Paris" with "Rome":
https://github.com/user-attachments/assets/41d4c3c0-e3aa-4772-b5e6-9ba6a3fc8009

Then press "Continue":
https://github.com/user-attachments/assets/ecab8957-d0cb-4ac5-bb3f-b12223c02f37

I didn't trace exactly what is happening, but based on the responses, the model seems to receive an amalgam of both the original and edited content.

Regarding the portion of the reported defect above that says "Continue" shouldn't produce any further text once the response is complete, that isn't my understanding of the "Continue" feature. My understanding is that if the model generates an end token, that may be the apparent outcome, but it is not a guarantee. Continue may be used to give the assistant another opportunity to generate a token besides an end token. My understanding may be wrong.

Specifically, I tested the OCI image I understand would be built by the PR, using the following:

docker build --no-cache -t open-webui-pr22773 https://github.com/eniraa/open-webui.git#dev
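(On the end-token point above, a quick way to see what a given backend does is to send it a conversation whose last message is the already finished answer and look at what comes back. A rough sketch with a placeholder endpoint and model; whether a backend truly continues a trailing assistant turn or starts a fresh one depends on the backend and its prompt template.)

```python
# Probe of the end-token point above: ask a backend to continue an already
# finished answer.  Endpoint and model are placeholders.
import requests

resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "llama3",
        "messages": [
            {"role": "user", "content": 'Complete this sentence: "Paris is the capital of ..."'},
            {"role": "assistant", "content": "Paris is the capital of France."},
        ],
    },
)
choice = resp.json()["choices"][0]
# The model may add little or nothing, but nothing in the API guarantees it stays silent.
print(choice["finish_reason"], repr(choice["message"]["content"]))
```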


@eniraa commented on GitHub (Mar 31, 2026):

Thanks for taking a look into the continuation functionality. My current fix only addresses the edit not working properly, but I'll take a crack at the continue issue once I have more time.


@syndicatedshannon commented on GitHub (Mar 31, 2026):

Thanks @eniraa. I apologize; I misunderstood the PR (and maybe the issue), and I see that edit by itself appears to work correctly. When I edit and then submit a new user turn, e.g. "Please repeat your last response verbatim.", I get the assistant response I'd expect.


@syndicatedshannon commented on GitHub (Mar 31, 2026):

Also, I noticed that the behavior is different when streaming is enabled vs. disabled. When disabled, only the continuation is left in the assistant response (too little text). When enabled, the result above (too much text) is seen.

Afterwards (in both cases), the client sends the text displayed in the UI to "api/chat/completed", and that text then persists after refresh. Seems sus to me, but it is probably a separate issue, if not intended.

@eniraa


@YashasviMantha commented on GitHub (Apr 4, 2026):

Still broken. I am on v0.8.12


@dathbe commented on GitHub (Apr 14, 2026):

Possibly a related issue (if not, let me know and I'll open another Issue). When I use Open-WebUI to edit an image, it will work fine. But if I ask it to "regenerate", it will lose all context and give me a text response. It's not yet clear to me whether it is dropping the uploaded image or unticking the "image" integration, but either way regenerate fails.

Running 0.8.12 via Docker ghcr.io/open-webui/open-webui:ollama

Update: I get the same issue when I try to create an image (not using an input image), so the problem seems to be with the "image" integration getting unticked for the regenerate stage.


@CookSleep commented on GitHub (Apr 17, 2026):

In version 0.8.12, even after editing a model's response, sending a new message still results in the model seeing the old version. This issue is really frustrating me 😭


@MenceyBentayga commented on GitHub (Apr 21, 2026):

Given the branch has been merged, can someone confirm the behaviour for both edits and continuations is as expected in 0.9.0?


@superjamie commented on GitHub (Apr 21, 2026):

This is not fixed in v0.9.0.

Looking at the actual request in the llama-swap Activity tab, Open-WebUI is still sending the full previous response to the backend OpenAI server, not the new, edited, shorter response.

(I am still running v0.7.2 as a workaround; this is fine for me.)

Reference: github-starred/open-webui#35047