Page Refresh is Required to View Model Response #3981

Closed
opened 2025-11-11 15:43:46 -06:00 by GiteaMirror · 0 comments

Originally created by @ksketch on GitHub (Feb 19, 2025).

Bug Report

Installation Method

Using the v0.5.14 (latest) Docker image

Environment

  • Open WebUI Version: v0.5.14

  • Ollama (if applicable): N/A

  • Operating System: Linux x86_64 AWS ECS Fargate (unsure of exact OS - presumably Amazon Linux)

  • Browser (if applicable): Firefox 135.0 and Chrome 133.0.6943.127

  • High level architecture: OpenWebUI is running on a container in an AWS ECS cluster with 2 vCPU and 8GB RAM. Originally tested with 1vCPU and 3GB RAM.

  • In the same cluster is a LiteLLM container proxying to Azure OpenAI and an [AWS Bedrock Access Gateway](https://github.com/aws-samples/bedrock-access-gateway/tree/main) proxy to Llama3.3 70B, Claude 3.5 Sonnet V2 and Cohere multilingual V3 embedding models. Each proxy has 2 vCPU and 4GB RAM.

  • Testing and metrics confirm that hardware resources do not appear to be the issue.

  • OpenWebUI is connected to the LiteLLM and Bedrock Access Gateway proxies via the OpenAI connections. The [Integration with Amazon Bedrock](https://docs.openwebui.com/tutorials/integrations/amazon-bedrock) guide was referenced for the Bedrock Access Gateway implementation.

  • Environment is only accessible via VPN

Confirmation:

  • [x] I have read and followed all the instructions provided in the README.md.
  • [x] I am on the latest version of both Open WebUI and Ollama.
  • [x] I have included the browser console logs.
  • [x] I have included the Docker container logs.
  • [x] I have provided the exact steps to reproduce the bug in the "Steps to Reproduce" section below.

Expected Behavior:

When sending a prompt/message to an LLM, I expect the conversation to be displayed in the UI once the LLM response is received. Specifically, to my understanding, the prompt and response data should be sent to and received by the frontend over the websocket, and then displayed in the UI.
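To make that expectation concrete, here is a simplified, illustrative model of the event flow (not Open WebUI's actual frontend code): streamed chat events append text deltas to the displayed message and a completion event marks it done; if no events arrive over the socket, the message stays blank until a refresh re-fetches the finished chat over REST.

```python
# Illustrative model (NOT Open WebUI's actual code) of how streamed chat
# events would update the message shown in the UI.  The event shapes here
# are assumptions for the sake of the sketch.

def apply_chat_events(events):
    """Fold a stream of chat events into the displayed message state."""
    content, done = "", False
    for event in events:
        if event.get("type") == "delta":
            content += event.get("text", "")   # append streamed text
        elif event.get("type") == "done":
            done = True                        # response is complete
    return {"content": content, "done": done}

# A healthy stream: deltas arrive, then completion.
msg = apply_chat_events([
    {"type": "delta", "text": "Hello"},
    {"type": "delta", "text": ", world"},
    {"type": "done"},
])

# The bug's symptom corresponds to the empty-stream case: nothing arrives,
# so the UI shows a blank/waiting message until a refresh.
hung = apply_chat_events([])
```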

Actual Behavior:

Sometimes when sending a prompt/message to an LLM, the UI appears to "hang" and just displays the placeholder "lines" as though it is still waiting for a response. Once the page is refreshed, the response appears.
There does not appear to be much rhyme or reason to when this behavior occurs. Sometimes things work normally right away; other times the hanging behavior occurs. The most reliable way to reproduce it seems to be starting a new conversation with a different model, or changing the model within the same conversation. Using a "custom" model (a base model with a custom system prompt) seems to be the most reliable trigger; however, the behavior was also observed when using a base model.

Description

Bug Summary:
As described above, the UI appears to "hang" and not display the model response in some situations. When this occurs, no data appears to be sent or received over the websocket connection that would normally deliver the prompt and response to the UI. I do not experience this issue when running OpenWebUI without the above proxies. Coupled with the inconsistent reproducibility, this makes me think it may be a latency-related issue. Possibly related to issue #1461?
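If the websocket path is the culprit, one thing worth checking is Open WebUI's websocket-related configuration. In a load-balanced or multi-replica setup (such as ECS behind an ALB), Socket.IO connections generally need sticky sessions or a shared broker, and an ALB's idle timeout can silently drop long-lived connections. A hedged config sketch for the ECS task environment, using variable names from the Open WebUI environment-variable documentation (the Redis URL is a placeholder for your own service):

```shell
# Illustrative ECS task environment for Open WebUI's websocket layer.
# Variable names come from the Open WebUI docs; values are placeholders.
ENABLE_WEBSOCKET_SUPPORT=true
WEBSOCKET_MANAGER=redis
WEBSOCKET_REDIS_URL=redis://redis:6379/0
```

This is a diagnostic avenue, not a confirmed fix for this issue.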

Reproduction Details

Steps to Reproduce:

  • Open a web browser and optionally proxy it through a web proxy such as Burp, Caido or other suitable proxy
  • Login to OpenWebUI and start a conversation with a model. As mentioned above, reproducing this may be more reliable by using a "custom model" as opposed to a base model
  • Send a prompt to the model
  • Observe that the UI appears to hang, or otherwise does not show the LLM response. If using a proxy, observe that the web socket history does not show any sending or receiving of the prompt or response
  • Refresh the page and observe that the LLM response is now in the UI like it should be
  • NOTE: If you receive a response in the UI before refreshing the page (i.e. it works normally), start a new conversation with a different model and repeat steps 2 and 3. I have also noticed this behavior seems to occur when changing the model in the same conversation
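To help isolate whether the backend completes the request while only the UI delivery fails, one could call the documented `POST /api/chat/completions` endpoint directly and compare against the UI. A minimal sketch; the base URL, API key, and model id are placeholders for this deployment's values:

```python
import json
import urllib.request

# Placeholders -- substitute your own deployment's values.
BASE_URL = "http://localhost:3000"       # your Open WebUI host
API_KEY = "sk-placeholder"               # an Open WebUI API key
MODEL_ID = "system-prompt-assistant"     # the custom model from this report

payload = {
    "model": MODEL_ID,
    "messages": [
        {"role": "user", "content": "Write me a system prompt for a coding expert"}
    ],
}

req = urllib.request.Request(
    f"{BASE_URL}/api/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# urllib.request.urlopen(req) would send it.  If this returns a completion
# while the UI still hangs, the problem is in websocket delivery to the
# frontend, not in the proxied model call itself.
```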

Logs and Screenshots

Browser Console Logs:
Originally, there was a Svelte warning log about a duplicate element (['CodeBlock']) that "may cause issues." This warning no longer seems to appear. The logs below appear in the console when the issue arises.

```
Chat.svelte:1221 submitPrompt Write me a system promtp for a coding expert 6247f95f-debb-45aa-9b28-5a19330d1dc3
Chat.svelte:183 saveSessionSelectedModels ['system-prompt-assistant'] ["system-prompt-assistant"]
MessageInput.svelte:344 destroy
Chat.svelte:1388 modelId system-prompt-assistant
+layout.svelte:100 usage {models: Array(1)} models: ['system-prompt-assistant'] [[Prototype]]: Object
+layout.svelte:100 usage {models: Array(1)} models: ['system-prompt-assistant'] [[Prototype]]: Object
Chat.svelte:1621 {status: true, task_id: 'b28d593c-9985-4113-aebf-144d8f00235a'} status: true task_id: "b28d593c-9985-4113-aebf-144d8f00235a" [[Prototype]]: Object
+layout.svelte:100 usage {models: Array(0)} models: [] [[Prototype]]: Object
```

Docker Container Logs:
The below logs are seen on the openwebui container when the issue occurs.

```
2025-02-19T17:58:24.229Z INFO: [IP]:0 - "GET /api/v1/folders/ HTTP/1.1" 200 OK
2025-02-19T17:58:24.353Z INFO: [IP]:0 - "GET /api/v1/chats/?page=2 HTTP/1.1" 200 OK
2025-02-19T17:58:32.515Z INFO: [IP]:0 - "GET /api/v1/chats/?page=2 HTTP/1.1" 200 OK
2025-02-19T17:58:33.545Z INFO: [IP]:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
2025-02-19T17:58:32.280Z INFO: [IP]:0 - "POST /api/v1/chats/new HTTP/1.1" 200 OK
2025-02-19T17:58:32.898Z INFO: [IP]:0 - "POST /api/v1/chats/4f5de36e-a12a-4e05-87d5-c9024bba0964 HTTP/1.1" 200 OK
2025-02-19T17:58:34.166Z INFO [open_webui.routers.openai] get_all_models()
2025-02-19T17:58:34.511Z INFO: [IP]:0 - "POST /api/chat/completions HTTP/1.1" 200 OK
2025-02-19T17:58:34.807Z INFO: [IP]:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
```

Screenshots/Screen Recordings (if applicable):

UI View when response is not received

![Image](https://github.com/user-attachments/assets/25c1065e-974a-407a-b1fb-8bab8011da78)

Reference: github-starred/open-webui#3981