Mirror of https://github.com/open-webui/open-webui.git (synced 2026-05-06 10:58:17 -05:00)
[GH-ISSUE #23720] issue: No Way for Custom Tools to Access RAG Documents Retrieved During Conversation #20053
Originally created by @ilias486 on GitHub (Apr 14, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/23720
Check Existing Issues
Installation Method
Pip Install
Open WebUI Version
v0.8.12
Ollama Version (if applicable)
No response
Operating System
Ubuntu 24.04
Browser (if applicable)
No response
Confirmation
Expected Behavior
Custom Python tools in Open WebUI should have access to RAG-retrieved documents when executing within a chat that has knowledge bases (MCP servers) enabled. Tools should be able to read the same document context that the LLM receives, enabling them to generate responses based on retrieved knowledge.
Actual Behavior
Custom Python tools execute in isolation from the RAG pipeline. When a tool is called alongside an MCP knowledge base, the `__metadata__` object shows that both the tool and the MCP server were active (`tool_ids`), but it contains no document references or content. This prevents custom tools from generating context-aware responses based on retrieved documents.
Steps to Reproduce
Logs & Screenshots
The external content that should land in the model's context window alongside the user's instructions is not there.
Additional Information
I read and followed this related issue, but have found no solution so far: https://github.com/open-webui/open-webui/issues/22526
@Classic298 commented on GitHub (Apr 14, 2026):
How does the MCP tool give back results?
And if the AI can see it, then certainly the tool can too. The tool has access to the FULL chat object, including ALL files and tool outputs (tool results), so if the AI can see it, so can the tool. Even if it is in a format you weren't looking for, the information has to be somewhere.
And for my understanding, because it's not fully clear from the issue:
1. You send a message.
2. The AI calls the MCP tool, which gives back a result.
3. THEN the AI calls your custom tool, and this custom tool has no way to access the result previously returned by the MCP tool?
It certainly can: if the MCP tool succeeded and the AI has the context, then the Python tool in the workspace has all the access it needs to reach any tool outputs inside the chat.
I need further reproduction steps in that case.
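To make the flow above concrete, this is the shape such a chat history typically takes, using the common OpenAI-style message convention. The field names and values here are an assumption for illustration, not taken from Open WebUI internals:

```python
# Illustrative chat history after the three steps above: user message,
# assistant tool call, MCP tool result, final assistant answer.
# Field names follow the OpenAI-style convention and are assumptions,
# not confirmed against Open WebUI source.
chat_history = [
    {"role": "user", "content": "Summarize the course documents."},
    {
        "role": "assistant",
        "content": "",
        "tool_calls": [{"id": "call_1", "function": {"name": "mcp_kb_search"}}],
    },
    {"role": "tool", "tool_call_id": "call_1", "content": "Course 101 covers ..."},
    {"role": "assistant", "content": "The course covers ..."},
]

# A custom tool with access to this list can locate the MCP result
# by filtering on the "tool" role:
mcp_results = [m["content"] for m in chat_history if m["role"] == "tool"]
```

If the MCP result is present in the history in this form, the question reduces to whether the custom tool is handed this list at call time.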
@ilias486 commented on GitHub (Apr 14, 2026):
The MCP knowledge base server returns document content directly to the LLM's context window. The LLM then references these documents in its response (we can see it summarizing the course content). However, the tool results are not stored in a retrievable field in the chat object.
This is what happens: the LLM clearly receives the documents (we can see it summarizing the course content in its response), but the custom Python tool cannot find them anywhere in the chat object passed to it.
What is the correct API or mechanism for a custom Python tool to access tool results from previous turns in the same chat?
I tried:
@Classic298 commented on GitHub (Apr 14, 2026):
No API.
No mechanism.
Read the `tool` role messages in the chat; they contain the tool outputs.