mirror of
https://github.com/open-webui/open-webui.git
synced 2026-05-06 02:48:13 -05:00
[PR #24105] fix(mcp): fix response discarded when MCP cleanup crashes in process_chat finally block #43137
📋 Pull Request Information
Original PR: https://github.com/open-webui/open-webui/pull/24105
Author: @looselyhuman
Created: 4/24/2026
Status: 🔄 Open
Base: dev ← Head: gaia-patch-2

📝 Commits (10+)
- fe6783c Merge pull request #19030 from open-webui/dev
- fc05e0a Merge pull request #19405 from open-webui/dev
- e3faec6 Merge pull request #19416 from open-webui/dev
- 9899293 Merge pull request #19448 from open-webui/dev
- 140605e Merge pull request #19462 from open-webui/dev
- 6f1486f Merge pull request #19466 from open-webui/dev
- d95f533 Merge pull request #19729 from open-webui/dev
- a727153 0.6.43 (#20093)
- 6adde20 Merge pull request #20394 from open-webui/dev
- f9b0534 Merge pull request #20522 from open-webui/dev

📊 Changes
1 file changed (+15 additions, -12 deletions)
📝 backend/open_webui/main.py (+15 -12)

📄 Description
Pull Request Checklist
`dev` branch targeted. `fix` prefix used.

Problem
When native MCP function calling completes successfully, the chat endpoint sometimes returns `500 Internal Server Error` with `"No response returned."`, even though the LLM has produced a valid response. The completed response is silently discarded.

Root Cause
The `finally` block in `chat_completion` cleaned up MCP clients using `asyncio.wait_for()` and `asyncio.shield()`, both of which wrap their awaitable in a new asyncio Task. The `MCPClient`'s exit stack contains anyio resources (the `streamable_http` transport) that use anyio cancel scopes. anyio cancel scopes are owned by the task that entered them; exiting them from a different task raises a cancel-scope violation.
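The task-spawning behavior can be demonstrated with plain asyncio, independent of MCP or anyio. A minimal sketch (note that `asyncio.shield()` always routes its argument through `ensure_future()`, producing a child task; `asyncio.wait_for()` did the same on the Python versions in question, though its implementation changed in 3.12):

```python
import asyncio

async def executing_task():
    # Report which asyncio Task is actually running this coroutine body.
    return asyncio.current_task()

async def main():
    outer = asyncio.current_task()

    # Awaited directly, the coroutine runs inside the current task.
    direct = await executing_task()
    print("direct await runs in same task:", direct is outer)  # True

    # asyncio.shield() passes the awaitable through ensure_future(),
    # so the body executes in a *different* task. An anyio cancel scope
    # entered by the outer task but exited inside this child task trips
    # anyio's ownership check.
    shielded = await asyncio.shield(executing_task())
    print("shield runs in same task:", shielded is outer)      # False

asyncio.run(main())
```
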
That error is a `BaseException`, not an `Exception`. It propagates through the `finally` block, overwrites the return value of the already-completed `process_chat` coroutine, and surfaces as a 500 with an empty body.

Fix
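The cleanup shape adopted here can be sketched end to end. `FakeMCPClient` below is a hypothetical stand-in for `MCPClient` (the real cleanup lives in `backend/open_webui/main.py`); the sketch shows that with the plain-loop pattern, even a `BaseException` escaping `disconnect()` cannot clobber the already-computed return value:

```python
import asyncio
import logging

log = logging.getLogger("mcp.cleanup")

class FakeMCPClient:
    """Hypothetical stand-in: the real MCPClient.disconnect() closes an anyio exit stack."""
    def __init__(self, name: str, fail: bool = False):
        self.name, self.fail = name, fail

    async def disconnect(self):
        if self.fail:
            # Simulate a cancel-scope error escaping cleanup as a BaseException.
            raise BaseException(f"{self.name}: cancel scope exited in a different task")

async def process_chat(clients):
    try:
        return {"content": "valid LLM response"}
    finally:
        # Plain loop in the current task: no wait_for/shield child tasks.
        for client in clients:
            try:
                await client.disconnect()
            except BaseException:
                # Swallow here so cleanup can never overwrite the return value.
                log.warning("MCP cleanup failed for %s", client.name)

clients = [FakeMCPClient("a"), FakeMCPClient("b", fail=True)]
print(asyncio.run(process_chat(clients)))  # {'content': 'valid LLM response'}
```
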
Replace the `asyncio.wait_for(asyncio.shield(...))` wrapper with a plain loop that calls `client.disconnect()` directly in the current task. `MCPClient.disconnect()` already catches `BaseException` internally (see companion PR #24104, which fixes `client.py`), so no wrapper is needed here. An outer `except BaseException` guards against any unexpected escapes.

Changelog Entry
Description
Bug fix for MCP client cleanup in `process_chat` discarding a valid LLM response and returning a 500 due to anyio cancel scope violations in the `finally` block.

Fixed
The `process_chat` `finally` block no longer uses `asyncio.wait_for`/`asyncio.shield`, which spawned child tasks that violated anyio cancel scope ownership and caused a `BaseException` to propagate through the `finally` block, overwriting a valid completed response with a 500.

Testing
I tested this on my self-hosted OpenWebUI instance running behind a Cloudflare reverse proxy tunnel, with an MCP server (FastMCP, stateless HTTP transport, Bearer token auth) connected via the Tool Servers UI. With the model set to Gemma4-26b via local Ollama and `function_calling: native`, I sent chat messages with a tool selected. Before this fix (but after applying PR #24104), successful tool calls would occasionally result in a 500 response being returned to the client even though the LLM had produced a valid answer. After applying both PR #24104 and this fix, the chat endpoint returns the correct response consistently. Tested specifically on the non-streaming path (`stream: false` via direct API call).

Contributor License Agreement
🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.