[GH-ISSUE #22202] feat: implement MCP Annotations for MCP results to bypass LLM and context #35187

Closed
opened 2026-04-25 09:25:36 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @bluepuma77 on GitHub (Mar 3, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/22202

Check Existing Issues

  • I have searched for all existing open AND closed issues and discussions for similar requests. I have found none that is comparable to my request.

Verify Feature Scope

  • I have read through and understood the scope definition for feature requests in the Issues section. I believe my feature request meets the definition and belongs in the Issues section instead of the Discussions.

Problem Description

I would like to be able to implement an MCP server that can return data which is not processed by the LLM and does not consume context, but is only displayed to the user. It should also be excluded from future requests to the LLM.

Use case: an MCP server that interacts with a financial database. Query in, data table and chart out. The chart is HTML/JS with an interactive Chart.js component, which is totally irrelevant to the LLM and would just waste context.

So far I have found no way to have an MCP server return data that bypasses the LLM and its context.

Desired Solution you'd like

MCP "Annotations" (https://modelcontextprotocol.io/specification/2025-06-18/server/resources#annotations) can already indicate that content is intended only for the user, not the assistant:

  "annotations": {
    "audience": ["user"],
    "priority": 0.8,
    "lastModified": "2025-01-12T15:00:58Z"
  }

It would be great if an MCP message with the user-only annotation were simply stored as an embed.
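For illustration, a content item in an MCP tool result carrying this annotation could look like the sketch below (a plain dict for clarity; field names follow the MCP spec, the chart markup is a made-up placeholder):

```python
import json

# Sketch of an MCP tool-result content item whose annotations mark it as
# intended only for the user, not the model (field names per the MCP spec).
content_item = {
    "type": "text",
    "text": "<canvas id='chart'></canvas><script>/* Chart.js setup */</script>",
    "annotations": {
        "audience": ["user"],  # display-only: keep out of LLM context
        "priority": 0.8,
        "lastModified": "2025-01-12T15:00:58Z",
    },
}

print(json.dumps(content_item, indent=2))
```

Open WebUI would store the `text` payload as an embed and forward nothing from this item to the model.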

Alternatives Considered

The other option I found for an MCP server to bypass LLM context is to use the open-webui API and create a chat message with an embed directly. But the LLM calling the MCP server, which in turn calls the open-webui API, seems like a sub-optimal workaround.

I also checked the files component. I am not sure about the changes made after the XSS vulnerability fix (https://github.com/open-webui/open-webui/security/advisories/GHSA-8gh5-qqh8-hq3x), or how to upload a file with the admin role that a regular user can download. In any case, an MCP response containing <file type="html" id="FILE_ID" /> would probably be rendered as a plain-text tag rather than displaying the file in the frontend.

Additional Context

This would also solve https://github.com/open-webui/open-webui/issues/14708

Test implementation:

diff --git a/backend/open_webui/utils/middleware.py b/backend/open_webui/utils/middleware.py
index 55f55ab03..c260d07e8 100644
--- a/backend/open_webui/utils/middleware.py
+++ b/backend/open_webui/utils/middleware.py
@@ -983,6 +983,24 @@ def process_tool_result(
                 if isinstance(item, dict):
                     if item.get("type") == "text":
                         text = item.get("text", "")
+
+                        # create display-only embed when audience is exactly ["user"]
+                        annotations = item.get("annotations")
+                        audience = (
+                            annotations.get("audience")
+                            if isinstance(annotations, dict)
+                            else None
+                        )
+                        is_user_audience = (
+                            isinstance(audience, list)
+                            and len(audience) == 1
+                            and audience[0] == "user"
+                        )
+                        if is_user_audience:
+                            if isinstance(text, str) and text:
+                                tool_result_embeds.append(text)
+                            continue
+
                         if isinstance(text, str):
                             try:
                                 text = json.loads(text)
@@ -1009,6 +1027,12 @@ def process_tool_result(
                             }
                         )
             tool_result = tool_response[0] if len(tool_response) == 1 else tool_response
+            if not tool_response and tool_result_embeds:
+                tool_result = {
+                    "status": "success",
+                    "code": "ui_component",
+                    "message": f"{tool_function_name}: Embedded UI result is active and visible to the user.",
+                }
         else:  # OpenAPI
             for item in tool_result:
                 if isinstance(item, str) and item.startswith("data:"):

I tested it locally with a custom MCP server and can provide a pull request.
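The audience check in the patch can be pulled out into a standalone helper for testing; a minimal sketch (the function name is mine, the logic mirrors the diff above):

```python
def is_user_only_audience(item: dict) -> bool:
    """Return True when a content item's annotations mark it as intended
    exclusively for the user, i.e. audience is exactly ["user"]."""
    annotations = item.get("annotations")
    audience = (
        annotations.get("audience") if isinstance(annotations, dict) else None
    )
    return (
        isinstance(audience, list)
        and len(audience) == 1
        and audience[0] == "user"
    )


# User-only items become display embeds; everything else reaches the LLM.
assert is_user_only_audience(
    {"type": "text", "text": "chart", "annotations": {"audience": ["user"]}}
)
assert not is_user_only_audience({"type": "text", "text": "plain result"})
assert not is_user_only_audience(
    {"type": "text", "annotations": {"audience": ["user", "assistant"]}}
)
```

Note that a mixed audience such as `["user", "assistant"]` deliberately does not match, so such content still flows into the model context.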


Reference: github-starred/open-webui#35187