Mirror of https://github.com/open-webui/open-webui.git (synced 2026-03-12 18:14:16 -05:00)
Page Refresh is Required to View Model Response #3981
Originally created by @ksketch on GitHub (Feb 19, 2025).
Bug Report
Installation Method
Using the v0.5.14 (latest) Docker image
Environment
Open WebUI Version: v0.5.14
Ollama (if applicable): N/A
Operating System: Linux x86_64 AWS ECS Fargate (unsure of exact OS - presumably Amazon Linux)
Browser (if applicable): Firefox 135.0 and Chrome 133.0.6943.127
High-level architecture: OpenWebUI runs in a container on an AWS ECS cluster with 2 vCPU and 8 GB RAM (originally tested with 1 vCPU and 3 GB RAM).
In the same cluster, a LiteLLM container proxies to Azure OpenAI, and an AWS Bedrock Access Gateway proxies to Llama 3.3 70B, Claude 3.5 Sonnet v2, and the Cohere Multilingual v3 embedding model. Each proxy has 2 vCPU and 4 GB RAM.
Testing and metrics confirm that hardware resources do not appear to be the issue.
OpenWebUI is connected to the LiteLLM and Bedrock Access Gateway proxies via OpenAI-compatible connections. The Integration with Amazon Bedrock guide was referenced for the Bedrock Access Gateway implementation.
The environment is only accessible via VPN.
Confirmation:
Expected Behavior:
When sending a prompt/message to an LLM, I expect the conversation to be displayed in the UI once the LLM response is received. Specifically, to my understanding, the prompt and response data should be sent to the frontend over the websocket and then rendered in the UI.
Actual Behavior:
Sometimes when sending a prompt/message to an LLM, the UI appears to "hang" and just displays the placeholder "lines" as though it is still waiting for a response. Once the page is refreshed, the response appears.
There does not appear to be much rhyme or reason to when this behavior occurs. Sometimes things work normally right away; other times the UI hangs. The most reliable way to reproduce it seems to be starting a new conversation with a different model, or changing the model within the same conversation. Using a "custom" model (a base model with a custom system prompt) appears to reproduce it most reliably; however, the behavior has also been observed with base models.
Description
Bug Summary:
As described above, the UI appears to "hang" and not display the model response in some situations. When this occurs, no data appears to be sent or received over the websocket connection that would otherwise deliver the response to the UI. I do not experience this issue running OpenWebUI without the above proxies. Coupled with the inconsistent reproducibility, this makes me suspect a latency-related issue. Possibly related to issue #1461?
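Since a page refresh reveals the response, the completion evidently reaches the backend and is persisted; only the websocket delivery to the client seems to be lost. One way to confirm this is to poll the chat over the REST API and check whether the assistant message already exists server-side while the UI still shows the placeholder. This is a minimal sketch resting on several assumptions: that `GET /api/v1/chats/{chat_id}` returns the stored chat (the container logs below only show a POST to that path), that a Bearer API token is accepted, and that the returned JSON carries a `chat.messages` list with `role`/`content` keys.

```python
import json
import urllib.request


def latest_assistant_message(chat):
    """Return the newest assistant message in a chat payload, or None.

    The `chat.messages` shape with `role` and `content` keys is an
    assumption about the response JSON, not a documented contract.
    """
    messages = chat.get("chat", {}).get("messages", [])
    for msg in reversed(messages):
        if msg.get("role") == "assistant":
            return msg.get("content")
    return None


def fetch_chat(base_url, token, chat_id):
    """GET the chat by id.

    The /api/v1/chats/{chat_id} path is inferred from the container
    logs; the Bearer auth header format is likewise an assumption.
    """
    req = urllib.request.Request(
        f"{base_url}/api/v1/chats/{chat_id}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())
```

If `latest_assistant_message(fetch_chat(...))` returns the expected content while the UI is still "hanging", that would isolate the problem to websocket event delivery rather than the completion itself.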
Reproduction Details
Steps to Reproduce:
Logs and Screenshots
Browser Console Logs:
Originally, there was a Svelte warning about a duplicate element (['CodeBlock']) that "may cause issues." That warning no longer appears. The logs below appear in the console when the issue arises.
```
Chat.svelte:1221 submitPrompt Write me a system promtp for a coding expert 6247f95f-debb-45aa-9b28-5a19330d1dc3
Chat.svelte:183 saveSessionSelectedModels ['system-prompt-assistant'] ["system-prompt-assistant"]
MessageInput.svelte:344 destroy
Chat.svelte:1388 modelId system-prompt-assistant
+layout.svelte:100 usage {models: ['system-prompt-assistant']}
+layout.svelte:100 usage {models: ['system-prompt-assistant']}
Chat.svelte:1621 {status: true, task_id: 'b28d593c-9985-4113-aebf-144d8f00235a'}
+layout.svelte:100 usage {models: []}
```
Docker Container Logs:
The logs below are seen on the openwebui container when the issue occurs:
```
2025-02-19T17:58:24.229Z INFO: [IP]:0 - "GET /api/v1/folders/ HTTP/1.1" 200 OK
2025-02-19T17:58:24.353Z INFO: [IP]:0 - "GET /api/v1/chats/?page=2 HTTP/1.1" 200 OK
2025-02-19T17:58:32.515Z INFO: [IP]:0 - "GET /api/v1/chats/?page=2 HTTP/1.1" 200 OK
2025-02-19T17:58:33.545Z INFO: [IP]:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
2025-02-19T17:58:32.280Z INFO: [IP]:0 - "POST /api/v1/chats/new HTTP/1.1" 200 OK
2025-02-19T17:58:32.898Z INFO: [IP]:0 - "POST /api/v1/chats/4f5de36e-a12a-4e05-87d5-c9024bba0964 HTTP/1.1" 200 OK
2025-02-19T17:58:34.166Z INFO [open_webui.routers.openai] get_all_models()
2025-02-19T17:58:34.511Z INFO: [IP]:0 - "POST /api/chat/completions HTTP/1.1" 200 OK
2025-02-19T17:58:34.807Z INFO: [IP]:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
```
Screenshots/Screen Recordings (if applicable):
UI View when response is not received