Bad performance when sending more than 100 lines #3056

Closed
opened 2025-11-11 15:20:41 -06:00 by GiteaMirror · 0 comments
Owner

Originally created by @mkroman on GitHub (Dec 21, 2024).

Bug Report

When I try to start a new chat containing more than a couple of hundred lines, the browser freezes for several seconds before actually sending the request to the backend.

Installation Method

I'm running Open WebUI in a Docker container with host network access; Ollama is running on the host.

Environment

  • Open WebUI Version: v0.4.8
  • Ollama (if applicable): v0.5.2
  • Operating System: Arch Linux
  • Browser (if applicable): Firefox 133.0.3

Confirmation:

  • [x] I have read and followed all the instructions provided in the README.md.
  • [x] I am on the latest version of both Open WebUI and Ollama.
  • [ ] I have included the browser console logs.
  • [ ] I have included the Docker container logs.
  • [x] I have provided the exact steps to reproduce the bug in the "Steps to Reproduce" section below.

Expected Behavior:

The experience should be smooth and the request should be sent immediately.

Actual Behavior:

The browser hangs for several seconds before sending the request.

Description

Bug Summary:

When I try to start a new chat containing more than a couple of hundred lines, the browser freezes for several seconds before actually sending the request to the backend.

Reproduction Details

Steps to Reproduce:

  1. Set up Open WebUI.
  2. Add a large-context model such as Gemini 1.5 Pro.
  3. Fill the text field with the contents of `prompt.txt`.
  4. Observe that the browser hangs for several seconds before the request is sent.
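If the original attachment is unavailable, a large input can be generated locally; a minimal sketch (the filename `prompt.txt` matches the report, but any multi-hundred-line text should suffice, since the report only requires "a couple of hundred lines"):

```shell
# Build a 500-line filler prompt; the report says a few hundred
# lines are enough to freeze the browser for several seconds.
seq 1 500 | sed 's/^/This is filler line /' > prompt.txt
wc -l prompt.txt
```

Pasting the contents of the generated file into the chat input should reproduce the delay described above.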

Additional Information

I profiled the process and the result can be viewed [here](https://share.firefox.dev/4gwwltl).

[prompt.txt](https://github.com/user-attachments/files/18216775/prompt.txt)

Reference: github-starred/open-webui#3056