Mirror of https://github.com/open-webui/open-webui.git (synced 2026-05-07 19:38:46 -05:00)
Make Web Search Use Multiple Queries #3023
Originally created by @reecelikesramen on GitHub (Dec 16, 2024).
Feature Request
Is your feature request related to a problem? Please describe.
The web search feature only searches a single query, but the prompt (even the default one) generates multiple queries.
Describe the solution you'd like
I would like the web search feature to use all the search queries generated.
Describe alternatives you've considered
Additional context
Right now the web search feature takes in a list of queries, but it is hard-coded to select only the first one in the list. This could lead to poorer search quality, since the default prompt encourages the model to generate several queries without necessarily putting the best one first. Here's the relevant code:
open-webui/src/lib/components/chat/Chat.svelte
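To make the described behavior concrete, here is a minimal sketch of fanning a search out over every generated query instead of only the first. The names (searchWeb, SearchResult, searchAllQueries) are illustrative assumptions, not Open WebUI's actual API.

```typescript
// Hypothetical sketch: instead of searching only queries[0], run every
// generated query and merge the results. searchWeb stands in for the
// real web-search backend call.

interface SearchResult {
  url: string;
  snippet: string;
}

// Stand-in for the real search call (e.g. a SearXNG request).
async function searchWeb(query: string): Promise<SearchResult[]> {
  return [
    {
      url: `https://example.com/?q=${encodeURIComponent(query)}`,
      snippet: query,
    },
  ];
}

// Search all generated queries in parallel and deduplicate by URL,
// so overlapping queries don't feed the model the same page twice.
async function searchAllQueries(queries: string[]): Promise<SearchResult[]> {
  const resultLists = await Promise.all(queries.map(searchWeb));
  const seen = new Set<string>();
  const merged: SearchResult[] = [];
  for (const result of resultLists.flat()) {
    if (!seen.has(result.url)) {
      seen.add(result.url);
      merged.push(result);
    }
  }
  return merged;
}
```

Deduplicating by URL matters here because related queries frequently surface the same top results.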
@aaltulea commented on GitHub (Dec 17, 2024):
I was trying to achieve this. The code below searches 3 unique queries; you can change the number of queries by changing the 3 in .slice(0, 3). Overall, I didn't find it very useful.

@reecelikesramen commented on GitHub (Dec 17, 2024):
May I ask what search engine and query generator model you're using?
I noticed because I was trying to improve the web search prompt. I realized that if I asked a multipart question it would split up the queries like [q1, q2, q3] if I had 3 different topics, but it would only search q1, and the model would either not answer q2 or q3, or use no context to answer those questions.
I think it's worth having this issue here at least to come to a conclusion on what should be default: just 1 query, or multiple.
@aaltulea commented on GitHub (Dec 18, 2024):
I am using qwen2.5 (7B and 32B) with SearXNG. In the settings, I specified 20 search results to read. Using the code I posted, the model was able to read 40-60 pages, but in the end it would only pick around 5 to render the answer, which is the same as if it had run one query and read 20 pages. I think if we were able to refine the prompt for search query generation, we would get better results.
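One way to combine the two ideas in this thread (capping the query count with .slice(0, 3) and keeping the configured page budget from multiplying) is to divide the result budget across the selected queries. This is a hypothetical sketch; planSearches and its parameters are illustrative, not Open WebUI code.

```typescript
// Hypothetical sketch: cap the number of queries, then split the user's
// configured result budget across them, so that 3 queries with a
// 20-result setting still read about 20 pages total rather than 60.

interface SearchPlan {
  query: string;
  count: number; // how many results to fetch for this query
}

function planSearches(
  queries: string[],
  maxQueries: number,
  totalResults: number,
): SearchPlan[] {
  const selected = queries.slice(0, maxQueries);
  if (selected.length === 0) return [];
  // Each query gets an equal share of the budget, at least 1 result.
  const perQuery = Math.max(1, Math.floor(totalResults / selected.length));
  return selected.map((query) => ({ query, count: perQuery }));
}
```

With this shape, multiple queries trade breadth for depth within the same budget, which addresses the "reads 40-60 pages but only uses ~5" observation above.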