mirror of
https://github.com/open-webui/open-webui.git
synced 2026-05-06 19:08:59 -05:00
[GH-ISSUE #8165] "Fluidly stream large external response chunks": this function has disappeared #15024
Originally created by @sabibi12 on GitHub (Dec 28, 2024).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/8165
Bug Report
Installation Method
Docker
Environment
Confirmation:
Expected Behavior:
In previous versions, when using external models, large responses should be streamed fluidly to the frontend, allowing users to see the content progressively and smoothly as it is generated, rather than waiting for the entire response to be displayed at once. This smooth streaming was particularly helpful in mitigating the choppy, stuttering output sometimes experienced with models like Gemini.
Actual Behavior:
The "Fluidly stream large external response chunks" functionality has disappeared. Now, when receiving large responses from external models, users must wait for the entire response to complete before seeing it, rather than having the content displayed progressively and smoothly as it is generated. The smooth streaming experience is gone. This results in a very poor user experience, especially with models like Gemini, which are now even more prone to choppy, stuttering output without the smooth streaming.
Description
Bug Summary:
The "Fluidly stream large external response chunks" functionality has disappeared. When using external models, responses are no longer displayed in a smooth, streaming manner, with the text appearing gradually as it's generated. Instead, the entire response is delivered all at once after generation is complete. This issue is particularly noticeable with models like Gemini, causing a very choppy, stuttering output due to the lack of smooth streaming.
Reproduction Details
Steps to Reproduce:
Additional Information
The lack of "Fluidly stream large external response chunks" makes using models like Gemini very difficult due to the extremely choppy and stuttering output as the text is no longer displayed in a smooth, progressive manner.
@sabibi12 commented on GitHub (Dec 28, 2024):
@tjbck commented on GitHub (Dec 28, 2024):
It has been deprecated; however, community contributions to bring back the functionality are welcome.