[GH-ISSUE #23720] issue: No Way for Custom Tools to Access RAG Documents Retrieved During Conversation #35582

Closed
opened 2026-04-25 09:46:02 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @ilias486 on GitHub (Apr 14, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/23720

Check Existing Issues

  • I have searched for any existing and/or related issues.
  • I have searched for any existing and/or related discussions.
  • I have also searched in the CLOSED issues AND CLOSED discussions and found no related items (your issue might already be addressed on the development branch!).
  • I am using the latest version of Open WebUI.

Installation Method

Pip Install

Open WebUI Version

v0.8.12

Ollama Version (if applicable)

No response

Operating System

Ubuntu 24.04

Browser (if applicable)

No response

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

Custom Python tools in Open WebUI should have access to RAG-retrieved documents when executing within a chat that has knowledge bases (MCP servers) enabled. Tools should be able to read the same document context that the LLM receives, enabling them to generate responses based on retrieved knowledge.

Actual Behavior

Custom Python tools execute in isolation from the RAG pipeline. When a tool is called alongside an MCP knowledge base:

  1. RAG documents are retrieved and passed to the LLM
  2. The LLM can reference these documents in its response
  3. The custom tool receives no document content whatsoever
  4. The __metadata__ object shows both the tool and MCP server were active (tool_ids), but contains no document references or content

This prevents custom tools from generating context-aware responses based on retrieved documents.
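The isolation described above can be probed with a minimal check inside a custom tool. This is an illustrative sketch only: the `files` key and the OpenAI-style message shape (`role`/`content` dicts) are assumptions based on the fields reported later in this issue, not guaranteed Open WebUI behavior.

```python
def has_document_content(messages, metadata):
    """Heuristic check: did any RAG/tool content reach the tool's inputs?"""
    msgs = messages or []
    meta = metadata or {}
    # Document content would normally surface either as tool-role messages
    # or as populated file entries in the metadata.
    tool_msgs = [m for m in msgs if m.get("role") == "tool"]
    files = meta.get("files") or []
    return bool(tool_msgs or files)
```

In the reported scenario this check returns `False` even though the LLM demonstrably saw the retrieved documents.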

Steps to Reproduce

  1. Start Open WebUI v0.8.10+ with Docker
  2. Configure an MCP knowledge base server with documents (e.g., course materials)
  3. Create a custom Python tool:
     from fastapi.responses import HTMLResponse  # FastAPI ships with Open WebUI

     class Tools:
         async def my_tool(
             self,
             topic: str = "Default",
             __messages__=None,
             __metadata__=None,
             __event_emitter__=None,
         ) -> HTMLResponse:
             # Log the context the tool actually receives
             print("=== MESSAGES ===", __messages__)
             print("=== METADATA ===", __metadata__)
             return HTMLResponse(content=f"<p>topic: {topic}</p>")
  4. Enable the custom tool in Open WebUI
  5. Start a new chat session
  6. Enable both the custom tool and the MCP knowledge base
  7. Send a message that triggers RAG retrieval
  8. Observe the tool logs

Logs & Screenshots

The RAG-retrieved content that should land in the model's context window alongside the user's instructions does not appear in the tool's logs.

Additional Information

I read and followed this issue, but found no solution so far: https://github.com/open-webui/open-webui/issues/22526

GiteaMirror added the bug label 2026-04-25 09:46:02 -05:00
Author
Owner

@Classic298 commented on GitHub (Apr 14, 2026):

How does the MCP tool give back its results?

If the AI can see it, then certainly the tool can too. The tool has access to the FULL chat object, including ALL files and tool outputs (tool results), so if the AI can see it, so can the tool. Even if it is in a format you weren't looking for, the information has to be somewhere.

And, for my understanding, because it's not fully clear from the issue:

  1. You send a message.
  2. The AI calls the "MCP tool", which gives back a result.
  3. THEN the AI calls your custom tool, and this custom tool has no way to access the result previously returned by the MCP tool?

It certainly can: if the MCP tool succeeded and the AI has the context, then the Python tool in the workspace has all the access it needs to reach any tool outputs inside the chat.

Further reproduction steps are needed in that case.

Author
Owner

@ilias486 commented on GitHub (Apr 14, 2026):

The MCP knowledge base server returns document content directly to the LLM's context window. The LLM then references these documents in its response (we can see it summarizing the course content). However, the tool results are not stored in a retrievable field in the chat object.

This is what happens:

  1. User sends message
  2. AI calls MCP knowledge base tool → returns document content
  3. AI calls the custom Python tool → tool has no way to access the MCP tool's returned result

The core issue is that the LLM clearly receives the documents (it summarizes them in its response), but the custom Python tool cannot find them anywhere in the chat object passed to it.

What is the correct API or mechanism for a custom Python tool to access tool results from previous turns in the same chat? I tried:

  1. __messages__: contains only the system prompt + user message. No RAG document content.
  2. __metadata__: contains keys like chat_id, tool_ids, tool_servers, and files, but no document content.
  3. __metadata__['tool_servers']: an empty array [] even when the MCP server was called.
  4. __metadata__['tool_ids']: shows ['tool_name', 'server:mcp:mcp_name'], proving both tools were active, but contains no content.
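The four observations above can be reproduced with a small diagnostic that summarizes exactly what arrives in a tool's `__messages__` and `__metadata__` arguments. A sketch: the key names mirror the ones listed above and are not otherwise guaranteed by Open WebUI.

```python
import json


def summarize_tool_context(messages=None, metadata=None):
    """Summarize what a custom tool actually receives, to locate RAG content."""
    meta = metadata or {}
    summary = {
        "message_roles": [m.get("role") for m in (messages or [])],
        "metadata_keys": sorted(meta.keys()),
        "tool_servers": meta.get("tool_servers"),
        "tool_ids": meta.get("tool_ids"),
    }
    print(json.dumps(summary, indent=2))
    return summary
```

Calling this from inside the tool makes it easy to confirm that no message carries a `tool` role and that `tool_servers` stays empty.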

Author
Owner

@Classic298 commented on GitHub (Apr 14, 2026):

No API, no mechanism.

Read the tool-role messages in the chat; they contain the tool outputs.
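The advice above can be sketched as a helper inside the custom tool: scan the chat messages for tool-role entries and collect their content. The `role`/`content` field names follow the OpenAI-style message shape; whether Open WebUI actually populates tool-role messages in `__messages__` is precisely what this issue disputes, so treat this as a probe, not a guaranteed API.

```python
def extract_tool_outputs(messages):
    """Collect the content of every tool-role message in the chat history.

    Returns an empty list when no tool outputs are present (or when
    `messages` itself is None), which is the symptom reported here.
    """
    return [
        m.get("content", "")
        for m in (messages or [])
        if m.get("role") == "tool"
    ]
```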

Reference: github-starred/open-webui#35582