[PR #22681] [CLOSED] fix: bypass RAG and directly inject full context files to eliminate message latency #49860
📋 Pull Request Information
Original PR: https://github.com/open-webui/open-webui/pull/22681
Author: @a86582751
Created: 3/14/2026
Status: ❌ Closed
Base: dev ← Head: fix-full-context-bypass-rag

📝 Commits (1)
149f186 — fix: bypass RAG and directly inject full context files to eliminate message latency

📊 Changes
1 file changed (+97 additions, -34 deletions)
backend/open_webui/utils/middleware.py (+97 -34)

📄 Description
Pull Request Checklist
Before submitting, make sure you've checked the following:
- Target the dev branch. PRs targeting main will be immediately closed.
- Rebase onto dev to ensure no unrelated commits (e.g. from main) are included.

Changelog Entry
Description

Fixed
- `get_sources_from_items` was still being called even when `all_full_context` or `bypass_embedding` was true.

Additional Information
- `backend/open_webui/utils/middleware.py`: added checks for `all_full_context` and `bypass_embedding` to directly load file content and append it as `[Attached Files Context]` before returning, skipping the generation and embedding stages entirely.
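To make the change concrete, here is a minimal sketch of the short-circuit. The names are hypothetical: `inject_full_context`, the per-file `context == "full"` flag, and the `file["file"]["data"]["content"]` layout are illustrative stand-ins, not the exact structures in `middleware.py`. The essential point is that the guard runs before any retrieval call:

```python
from typing import Any


def inject_full_context(
    messages: list[dict[str, Any]],
    files: list[dict[str, Any]],
    bypass_embedding: bool,
) -> bool:
    """Return True if RAG was skipped and raw file text was injected.

    Hypothetical sketch: the function name and the file-record schema
    (file["file"]["data"]["content"]) are assumptions, not the exact
    structures used in middleware.py.
    """
    # "Full context" may be requested per file or forced globally.
    all_full_context = bool(files) and all(
        f.get("context") == "full" for f in files
    )
    if not (all_full_context or bypass_embedding):
        return False  # fall through to the normal retrieval pipeline

    # Concatenate the already-extracted text of every attachment and
    # append it to the latest user message, tagged so the model can
    # distinguish it from the user's own words.
    contents = "\n\n".join(
        f.get("file", {}).get("data", {}).get("content", "") for f in files
    )
    messages[-1]["content"] += "\n\n[Attached Files Context]\n" + contents
    return True  # get_sources_from_items and embedding never run
```

The design point is purely about ordering: the full-context check must sit in front of `get_sources_from_items`, not after it, which is exactly the gap this PR describes.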
Screenshots or Videos

Before (screenshots attached in the original PR): using Focused Retrieval works, but using Full Context makes the response slow.

After: the response speed for chatting with large files has significantly improved, and the RAG functionality remains unaffected.
Thank you for the welcome! I completely understand the community's concerns regarding untested PRs. I can explicitly confirm that I have personally, manually, and exhaustively tested this fix.
Here is the detailed context of my troubleshooting journey and exactly how I verified the solution:
🔍 The Background & Troubleshooting Process
I was trying to migrate my role-play chat history from another AI client to Open WebUI. I uploaded a character card file (校园恋爱角色卡定制.md, a "campus romance character card" of roughly 91,480 characters), and the model's responses became extremely slow, even after reviewing the retrieval settings under Admin Panel -> Settings -> Documents. To isolate the issue, I ran side-by-side comparisons, and they definitively proved that the bottleneck was not my backend LLM API, but rather Open WebUI's file retrieval (RAG) pipeline.
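If you want to reproduce that isolation step, timing the same prompt with and without the attachment is enough. A minimal sketch, assuming Open WebUI's OpenAI-compatible `/api/chat/completions` endpoint and its documented `files` field for attaching an uploaded file; the base URL, API key, model name, and file ID are placeholders:

```python
import time

import requests

BASE_URL = "http://localhost:3000"  # your Open WebUI instance
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}


def timed_completion(payload: dict) -> float:
    """Wall-clock seconds for one non-streamed chat completion."""
    start = time.monotonic()
    resp = requests.post(
        f"{BASE_URL}/api/chat/completions",
        json=payload, headers=HEADERS, timeout=600,
    )
    resp.raise_for_status()
    return time.monotonic() - start


payload = {
    "model": "your-model",
    "messages": [{"role": "user", "content": "Summarize the character card."}],
    "stream": False,
}

bare = timed_completion(payload)  # no attachment: measures the LLM API alone
payload["files"] = [{"type": "file", "id": "YOUR_FILE_ID"}]
with_file = timed_completion(payload)  # adds the file-retrieval pipeline

print(f"without file: {bare:.1f}s  with file: {with_file:.1f}s")
```

A large gap between the two timings points at the file pipeline rather than the model backend, which is the conclusion reached above.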
According to the official docs, enabling "File Context" in the model settings triggers RAG. If disabled, the model can't read files at all. The "Full Context" toggle seemed to just run the entire RAG process anyway before dumping all the content, failing to actually bypass the heavy RAG workload. This severely impacted conversation performance.
🛠️ The Investigation & AI Transparency
My intuition told me there was a flaw in the file processing logic. To be fully transparent, I used AI to help me navigate the codebase and investigate my hypothesis. Sure enough, we found the logical gap: the RAG pipeline was never truly bypassed even when full context was requested.
✅ How I Tested It (Verification)
While AI helped find the logic flaw, the testing was 100% manual and rigorous.
Visual evidence (Before/After comparison screenshots) is already attached in my original post at the top of this PR.
This PR fixes a genuine performance bottleneck for RP and large-document users. It is fully tested on a real server and ready for your review! 🙏
Contributor License Agreement
🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.