Mirror of https://github.com/open-webui/open-webui.git (synced 2026-05-07 11:28:35 -05:00)
[PR #12889] [CLOSED] fix: Limit RAG-knowledge results to <= k when using multiple sources (and knowledge bases) #23044
📋 Pull Request Information

Original PR: https://github.com/open-webui/open-webui/pull/12889
Author: @almajo
Created: 4/15/2025
Status: ❌ Closed
Base: dev ← Head: fix/respect_topk_on_multiple_knowledge_bases

📝 Commits (2)
- 18fbf82 Limit rag results to <= k
- 0129e38 Exclude empty documents

📊 Changes
1 file changed: backend/open_webui/retrieval/utils.py (+150 -105)

📄 Description
Pull Request Checklist

Note to first-time contributors: Please open a discussion post in Discussions and describe your changes before submitting a pull request.

Before submitting, make sure you've checked the following:
- Target the dev branch.
- Changelog entry.
Description
I already described the problem in https://github.com/open-webui/open-webui/discussions/12749.

When using multiple knowledge bases (say n of them) with a setting of top-k = k, the LLM was provided with n*k chunks instead of k. For a model with 5 knowledge bases and top-k = 10, this means 50 chunks, which has many disadvantages.

In this PR, we correct this by leveraging the list parameter collection_names of query_collection and query_hybrid_collection, actually passing multiple collection names at once instead of only one at a time. Hence, the merge_results call at the end of these methods does the job correctly.

However, as the merge_and_sort_results method removed the actual collection heritage, I needed to add some context code that stores the heritage information, which we might need later in the frontend or elsewhere.

Changed get_sources_from_files to respect the k parameter.

Additional Information
The result of the get_sources_from_files method remains the same: a list of collections, each with some chunks which in total are <= k. When 2 knowledge bases are used (or 1 knowledge base and 1 in-chat file), it's a list of length 2, as before.

As the results are grouped together by knowledge base, note that this means the order of ranked documents is mixed across sources. Here is an example:
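A hypothetical illustration of the grouping (not open-webui code; the source names and distances are invented to match the numbers below): two knowledge bases, top-k = 4, and the globally best 4 chunks are regrouped by their source knowledge base before being handed to the model.

```python
# Hypothetical data: the merged top-k (k = 4) chunks across two knowledge
# bases, globally sorted by distance (here: higher = more relevant).
merged_top_k = [
    {"source": "kb1", "distance": 1.0},
    {"source": "kb2", "distance": 0.9},
    {"source": "kb1", "distance": 0.5},
    {"source": "kb2", "distance": 0.4},
]

# Regroup by source knowledge base, preserving per-source ranking.
grouped = {}
for chunk in merged_top_k:
    grouped.setdefault(chunk["source"], []).append(chunk["distance"])

# Enumerating sources in order (source 1, then source 2) yields a
# distance sequence that is no longer globally sorted.
enumeration = [d for distances in grouped.values() for d in distances]
print(enumeration)  # [1.0, 0.5, 0.9, 0.4]
```

The flattened enumeration interleaves the rankings of the two sources, which is exactly the effect described next.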
Hence, when giving the model the chunks for citation, the enumeration of distances will be [1, 0.5, 0.9, 0.4] (as sources are given by number, from source 1 to n), i.e. it is not an ordered list from highest to lowest. However, this is rather an additional feature of this implementation, as it is an advanced approach to shuffling results for better RAG performance.
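The core idea of the fix can be sketched as follows. This is a minimal standalone sketch with hypothetical helper names (the actual implementation lives in backend/open_webui/retrieval/utils.py), assuming distance means similarity, so higher is better: query every collection, tag each chunk with its collection of origin (the "heritage" information), then merge and truncate to k overall instead of keeping k per collection.

```python
def query_collections(collection_names, query_embedding, k, query_one):
    """Query each collection, then keep only the k best chunks overall.

    query_one(name, embedding, k) -> list of {"text": str, "distance": float}
    is a hypothetical per-collection query function (stands in for the real
    vector-DB call); higher distance is assumed to mean more relevant.
    """
    results = []
    for name in collection_names:
        for chunk in query_one(name, query_embedding, k):
            chunk["collection"] = name  # preserve the heritage information
            results.append(chunk)
    # Merge across collections: sort globally, then truncate to k.
    results.sort(key=lambda c: c["distance"], reverse=True)
    return results[:k]


# Toy usage: two knowledge bases with two chunks each, top-k = 3.
def fake_query(name, embedding, k):
    scores = {"kb1": [1.0, 0.5], "kb2": [0.9, 0.4]}
    return [{"text": f"{name}-chunk-{i}", "distance": d}
            for i, d in enumerate(scores[name])]


top = query_collections(["kb1", "kb2"], None, 3, fake_query)
print([c["distance"] for c in top])  # [1.0, 0.9, 0.5]
```

With the old behavior, the model would have received all 4 chunks (k per collection); with the merge-then-truncate approach, only the 3 globally best survive.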
🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.