[GH-ISSUE #12977] issue: Abnormal Error when using Web Search #32304

Closed
opened 2026-04-25 06:11:41 -05:00 by GiteaMirror · 10 comments
Owner

Originally created by @ReiSuzunami on GitHub (Apr 17, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/12977

Check Existing Issues

  • I have searched the existing issues and discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

v0.6.5

Ollama Version (if applicable)

No response

Operating System

server: Debian 12; client: macOS Sequoia

Browser (if applicable)

MS Edge 135.0.3179.73

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have listed steps to reproduce the bug in detail.

Expected Behavior

  1. The web interface should wait for the web search engine (SearXNG in my case) to return all results, and show an error only after a timeout or similar failure.

  2. The references should be distributed across the different sources, not all point to the first one.

Actual Behavior

  1. The web interface shows a TypeError: Failed to fetch in the middle of searching. This did not actually interrupt the process, but a persistent error message is displayed beneath the response area.

  2. All the references point to the first search result.

Steps to Reproduce

  1. Type in your question and enable the Web Search option.
  2. Wait for the response.
  3. See the error message.
  4. Wait for the answer and check the references.

Logs & Screenshots

Browser Logs:

Image: https://github.com/user-attachments/assets/8a3813b7-0ed5-4336-84e2-6786ba0ba007

Docker Logs:

Everything looked fine, but I did not see any log corresponding to /api/chat/completions. I assume that was caused by a regular keepalive mechanism, because the completion process was blocked by the content-fetching process.

open-webui  | 2025-04-17 15:12:03.462 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 113.132.22.149:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}
open-webui  | 2025-04-17 15:12:03.577 | INFO     | open_webui.routers.openai:get_all_models:389 - get_all_models() - {}
open-webui  | 2025-04-17 15:12:07.603 | INFO     | open_webui.routers.retrieval:process_web_search:1477 - trying to web search with ('searxng', 'April 17 2025 news highlights') - {}
open-webui  | 2025-04-17 15:12:19.358 | INFO     | open_webui.routers.retrieval:save_docs_to_vector_db:821 - save_docs_to_vector_db: document https://www.democracynow.org/2025/4/17/headlines web-search-c894abe488417426c00e225f51a462f6720fc0ab58a28266275f - {}
open-webui  | 2025-04-17 15:12:19.373 | INFO     | open_webui.routers.retrieval:save_docs_to_vector_db:904 - adding to collection web-search-c894abe488417426c00e225f51a462f6720fc0ab58a28266275f - {}

/* Other Search Logs */

open-webui  | 2025-04-17 15:12:33.602 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 113.132.22.149:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}

/* Other Search Logs */

open-webui  | 2025-04-17 15:12:53.306 | INFO     | open_webui.routers.openai:get_all_models:389 - get_all_models() - {}
open-webui  | 2025-04-17 15:12:54.577 | INFO     | open_webui.retrieval.utils:query_doc:88 - query_doc:result [['d4462247-0408-4aa9-a3af-518c3ea05918', 'df339e7a-01ec-4a70-83e8-7a8d6f68ae13', '9719c620-8aac-49b6-b275-7abee426d1f9']] [[{'embedding_config': '{"engine": "openai", "model": "Doubao-Embedding"}', 'source': 'https://www.democracynow.org/2025/4/17/headlines', 'start_index': 1820}, {'embedding_config': '{"engine": "openai", "model": "Doubao-Embedding"}', 'source': 'https://www.democracynow.org/2025/4/17/headlines', 'start_index': 929}, {'embedding_config': '{"engine": "openai", "model": "Doubao-Embedding"}', 'source': 'https://www.democracynow.org/2025/4/17/headlines', 'start_index': 12525}]] - {}

/* Other Embedding Logs*/

open-webui  | 2025-04-17 15:12:56.085 | WARNING  | chromadb.segment.impl.vector.local_persistent_hnsw:query_vectors:423 - Number of requested results 3 is greater than number of elements in index 2, updating n_results = 2 - {}

/* Other Embedding Logs*/

open-webui  | 2025-04-17 15:12:57.716 | INFO     | open_webui.routers.openai:get_all_models:389 - get_all_models() - {}
open-webui  | 2025-04-17 15:13:02.808 | INFO     | open_webui.routers.openai:get_all_models:389 - get_all_models() - {}
open-webui  | 2025-04-17 15:13:02.929 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 113.132.22.149:0 - "POST /api/chat/completed HTTP/1.1" 200 - {}
open-webui  | 2025-04-17 15:13:03.113 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 113.132.22.149:0 - "POST /api/v1/chats/8874fbcf-5c2b-4664-9842-bfa76dd0658d HTTP/1.1" 200 - {}
open-webui  | 2025-04-17 15:13:03.285 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 113.132.22.149:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}
open-webui  | 2025-04-17 15:13:04.536 | INFO     | open_webui.routers.openai:get_all_models:389 - get_all_models() - {}
open-webui  | 2025-04-17 15:13:04.615 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 113.132.22.149:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}
open-webui  | 2025-04-17 15:13:04.689 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 113.132.22.149:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}
open-webui  | 2025-04-17 15:13:06.145 | INFO     | open_webui.models.chats:count_chats_by_tag_name_and_user_id:817 - Count of chats for tag 'international_affairs': 0 - {}
open-webui  | 2025-04-17 15:13:06.178 | INFO     | open_webui.models.chats:count_chats_by_tag_name_and_user_id:817 - Count of chats for tag 'news': 0 - {}
open-webui  | 2025-04-17 15:13:06.203 | INFO     | open_webui.models.chats:count_chats_by_tag_name_and_user_id:817 - Count of chats for tag 'economy': 0 - {}
open-webui  | 2025-04-17 15:13:06.323 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 113.132.22.149:0 - "GET /api/v1/chats/8874fbcf-5c2b-4664-9842-bfa76dd0658d HTTP/1.1" 200 - {}
open-webui  | 2025-04-17 15:13:06.339 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 113.132.22.149:0 - "GET /api/v1/chats/all/tags HTTP/1.1" 200 - {}
open-webui  | 2025-04-17 15:13:06.459 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - 113.132.22.149:0 - "GET /api/v1/chats/all/tags HTTP/1.1" 200 - {}

Additional Information

I'm using a self-deployed SearXNG and a Playwright server to get search results.
I chose an OpenAI-compatible embedding model to handle CJK languages.

GiteaMirror added the bug label 2026-04-25 06:11:41 -05:00

@tth37 commented on GitHub (Apr 17, 2025):

Could you share the console log from the browser? Are there any error messages related to TypeError: Failed to fetch? (It seems to be a frontend bug.)

The second bug is already identified in #12811


@ReiSuzunami commented on GitHub (Apr 17, 2025):

Could you share the console log from the browser? Are there any error messages related to TypeError: Failed to fetch?

The second bug is already identified in #12811

There is one related console log which I missed before:

Image: https://github.com/user-attachments/assets/0a07c45e-16ce-4672-9eed-33609730d55d

I visit my instance by following network structure:

Client --HTTP/2--> CDN --HTTP--> Nginx --HTTP--> Docker Compose

Could the HTTP/2 conversion be a potential problem?


@tth37 commented on GitHub (Apr 17, 2025):

Maybe check the Nginx logs?


@ReiSuzunami commented on GitHub (Apr 17, 2025):

Maybe check the Nginx logs?

There was indeed a 499 error in the Nginx log, just after a 200 for a similar request. This instance is only used by me, so there is no interference from other users.

It seems that an HTTPS URI was sent to the HTTP service?

<IP ADDRESS> - - [18/Apr/2025:00:09:33 +0800] "GET /models HTTP/1.1" 200 44 "-" "Python/3.11 aiohttp/3.11.11"
<IP ADDRESS> - - [18/Apr/2025:00:09:37 +0800] "POST /v1/chat/completions HTTP/1.1" 200 679 "-" "Python/3.11 aiohttp/3.11.11"

<IP ADDRESS> - - [18/Apr/2025:00:10:03 +0800] "POST /api/chat/completions HTTP/1.1" 499 0 "https://<HOSTNAME>/c/8874fbcf-5c2b-4664-9842-bfa76dd0658d" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/135.0.0.0 Safari/537.36 Edg/135.0.0.0"

<IP ADDRESS> - - [18/Apr/2025:00:10:03 +0800] "GET /api/v1/chats/?page=1 HTTP/1.1" 200 4558 "https://<HOSTNAME>/c/8874fbcf-5c2b-4664-9842-bfa76dd0658d" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/135.0.0.0 Safari/537.36 Edg/135.0.0.0"
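For context: Nginx logs status 499 when the client closes the connection before the upstream response is ready. The pattern can be reproduced with a toy asyncio simulation (everything below is illustrative, not Open WebUI code): an upstream that stalls before replying, and an impatient intermediary that gives up first.

```python
import asyncio

async def slow_upstream(reader, writer):
    # Stand-in for /api/chat/completions doing web search before replying.
    await asyncio.sleep(0.5)
    writer.write(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    await writer.drain()
    writer.close()

async def impatient_client(port, timeout):
    # Stand-in for a CDN with a short idle timeout: it closes the
    # connection early, which Nginx would record as status 499.
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"POST /api/chat/completions HTTP/1.1\r\n\r\n")
    await writer.drain()
    try:
        return await asyncio.wait_for(reader.read(100), timeout)
    except asyncio.TimeoutError:
        writer.close()
        return None

async def main():
    server = await asyncio.start_server(slow_upstream, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    result = await impatient_client(port, timeout=0.1)
    server.close()
    await server.wait_closed()
    return result

result = asyncio.run(main())
print(result)  # None: the client gave up before the slow response arrived
```

The client's 0.1 s patience versus the upstream's 0.5 s stall mirrors a CDN idle timeout shorter than the pre-streaming search phase.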

@tth37 commented on GitHub (Apr 17, 2025):

If you turn off web search, does this TypeError: Failed to fetch still exist? Also, maybe consider sharing your Nginx configuration and your CDN configuration?


@ReiSuzunami commented on GitHub (Apr 17, 2025):

If you turn off web search, does this TypeError: Failed to fetch still exist? Also, maybe consider sharing your Nginx configuration and your CDN configuration?

No, this only happens when Open WebUI executes a web search.

My Nginx server block:

server {
    server_name <HOST>;
    listen [::]:<PORT>;
    location / {
        proxy_pass http://127.0.0.1:30000; # Docker Port
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;
        proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
        proxy_set_header Sec-WebSocket-Extensions $http_sec_websocket_extensions;

        proxy_buffering off;
        proxy_cache off;
    }
}

I use Tencent EdgeOne as the CDN, configured with HTTP/2 for visitors and HTTP/1.1 for fetching from the origin.


@tth37 commented on GitHub (Apr 17, 2025):

What if you bypass the Tencent proxy and access the Nginx service directly?


@ReiSuzunami commented on GitHub (Apr 17, 2025):

What if you bypass the Tencent proxy and access the Nginx service directly?

Well, the problem seems to disappear when I visit the site directly over HTTP without the CDN. There's nothing wrong with Open WebUI itself. I will try to configure the CDN correctly.


@ReiSuzunami commented on GitHub (Apr 18, 2025):

What if you bypass the Tencent proxy and access the Nginx service directly?

I found the real problem. /api/chat/completions is a SUPER LONG request when web search is enabled: it covers the full process in which the LLM generates the search query and waits for SearXNG to return results, all before the answer even starts to generate.

This usually takes 40-60 seconds when I'm using SearXNG, and my CDN regarded that as a timeout, so the error showed up.

When I use HTTP directly, there is no such timeout setting, so the error does not happen. Also, in regular conversations (without web search), the waiting period is nearly instant because the answer is generated directly.

So is there a possibility to optimize this feature? In my case I can change the timeout threshold of my CDN, but there must be setups that are sensitive to the timeout limit. It would be nice if this process could be optimized!
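For comparison, when Nginx itself (rather than a CDN) is the layer that drops long-idle requests, the relevant knobs are its proxy timeouts. A sketch of such a workaround, extending the server block shared earlier (the 300s value is an arbitrary example, and this does not address a CDN-side timeout):

```nginx
location / {
    proxy_pass http://127.0.0.1:30000; # Docker Port
    # With web search enabled, /api/chat/completions may send no data
    # for 40-60 s before streaming starts; raise the idle timeouts so
    # the proxy does not drop the request in the meantime.
    proxy_read_timeout 300s;
    proxy_send_timeout 300s;
}
```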


@tth37 commented on GitHub (Apr 18, 2025):

The response time for /api/chat/completions is too long, which in my opinion is indeed a problem. When there is a background WebSocket connection, /api/chat/completions should ideally be a function that returns quickly, allowing the process_chat_payload and process_chat_response steps to run in the background.

Currently, only process_chat_response is fully run in the background, while process_chat_payload remains a synchronous function. I suspect the design did not consider that process_chat_payload could also take a long time.

A possible fix is to also make process_chat_payload an async function that runs in the background, returns the chat completion immediately, and updates the response status via WebSocket. 🤔 (This seems theoretically possible, but it might require substantial refactoring of the chat_completion handler.)
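A minimal sketch of that idea, with an asyncio.Queue standing in for the WebSocket status channel (all names here are illustrative, not Open WebUI's actual API): the handler schedules the slow payload processing as a background task and returns immediately, while progress is reported asynchronously.

```python
import asyncio

async def process_chat_payload(status_queue: asyncio.Queue) -> None:
    # Slow pre-processing (query generation, web search, embedding)
    # now runs in the background instead of blocking the HTTP handler.
    await status_queue.put("searching")
    await asyncio.sleep(0.1)  # stand-in for the 40-60 s search phase
    await status_queue.put("generating")

async def chat_completion():
    # Returns right away; the caller can send the HTTP response
    # immediately while progress flows over the status channel.
    status_queue: asyncio.Queue = asyncio.Queue()
    task = asyncio.create_task(process_chat_payload(status_queue))
    return task, status_queue

async def main():
    task, queue = await chat_completion()
    statuses = []
    while True:
        try:
            statuses.append(await asyncio.wait_for(queue.get(), timeout=1))
        except asyncio.TimeoutError:
            break
        if task.done() and queue.empty():
            break
    await task
    return statuses

statuses = asyncio.run(main())
print(statuses)  # prints ['searching', 'generating']
```

In a real server the queue would be replaced by Socket.IO/WebSocket events pushed to the client, which keeps the intermediary's connection active instead of leaving it idle during the search phase.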
