[PR #22725] [CLOSED] fix: preserve filter http errors #49880

Closed
opened 2026-04-30 02:17:13 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/open-webui/open-webui/pull/22725
Author: @sebnowak
Created: 3/16/2026
Status: Closed

Base: dev ← Head: fix/preserve-filter-http-errors


📝 Commits (10+)

  • fe6783c Merge pull request #19030 from open-webui/dev
  • fc05e0a Merge pull request #19405 from open-webui/dev
  • e3faec6 Merge pull request #19416 from open-webui/dev
  • 9899293 Merge pull request #19448 from open-webui/dev
  • 140605e Merge pull request #19462 from open-webui/dev
  • 6f1486f Merge pull request #19466 from open-webui/dev
  • d95f533 Merge pull request #19729 from open-webui/dev
  • a727153 0.6.43 (#20093)
  • 6adde20 Merge pull request #20394 from open-webui/dev
  • f9b0534 Merge pull request #20522 from open-webui/dev

📊 Changes

2 files changed (+3 additions, -1 deletions)


📝 backend/open_webui/main.py (+2 -0)
📝 backend/open_webui/routers/pipelines.py (+1 -1)

📄 Description

Pull Request Checklist

Note to first-time contributors: Please open a discussion post in Discussions to discuss your idea/fix with the community before creating a pull request, and describe your changes before submitting a pull request.

This is a minimal bug fix rather than a larger feature or design change, and I have already created a corresponding PR in pipelines (https://github.com/open-webui/pipelines/pull/597). Without this PR here, the other would not make sense, and I wouldn't know what to discuss.

Before submitting, make sure you've checked the following:

  • Target branch: This pull request targets the dev branch.
  • Description: I have provided a concise description of the changes below.
  • Changelog: A changelog entry is included below.
  • Documentation: No user-facing docs update seemed necessary for this backend error propagation fix.
  • Dependencies: No new dependencies were added or upgraded.
  • Testing: I manually tested the change and included reproducible steps below.
  • Agentic AI Code: I manually reviewed the 3 lines of code edits in this PR and tested the final patch myself with the script below before opening this PR.
  • Code review: I manually reviewed the 3 lines of code edits. The change is scoped and limited to the specific bug.
  • Design & Architecture: This is a narrow bug fix and does not introduce new settings or architecture changes.
  • Git Hygiene: This PR is atomic and limited to one logical fix.
  • Title Prefix: This PR uses the fix prefix, since it addresses a bug.

Changelog Entry

Description

  • Preserve HTTP error responses returned by pipeline inlet filters instead of flattening them into a generic exception, which left users with an inconclusive null response.

Added

  • None.

Changed

  • backend/open_webui/routers/pipelines.py now re-raises pipeline inlet HTTP error responses as HTTPException.
  • backend/open_webui/main.py now re-raises HTTPException in process_chat() before the generic exception handler.

Deprecated

  • None.

Removed

  • None.

Fixed

  • Fixed a case where an inlet filter could reject a request, but the original HTTP error was not reliably propagated back through Open WebUI.

Security

  • None.

Breaking Changes

  • None.

Additional Information

  • This code path no longer handles a raw Python exception from a pipeline module; it handles an already-materialized HTTP error response from the Pipelines service, so re-raising it as HTTPException preserves the HTTP semantics instead of flattening them into a generic exception (see the sketch after this list).
  • This PR is intentionally narrow and does not change the success path.
  • Companion Pipelines PR: open-webui/pipelines#<PIPELINES_PR_NUMBER>
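
For reviewers who want the shape of the change without opening the diff, a minimal self-contained sketch of both patterns follows. The helper names, URL, and payload handling here are illustrative assumptions, not the actual Open WebUI code.

# Minimal sketch only: call_inlet_filter, process_chat_sketch, and the URL
# below are illustrative stand-ins, not the actual Open WebUI functions.
import aiohttp
from fastapi import HTTPException


async def call_inlet_filter(url: str, payload: dict) -> dict:
    # pipelines.py pattern: surface a pipeline HTTP error response as an
    # HTTPException instead of collapsing it into a generic Exception.
    async with aiohttp.ClientSession(trust_env=True) as session:
        async with session.post(url, json=payload) as res:
            if res.status >= 400:
                detail = None
                if res.content_type == "application/json":
                    detail = (await res.json()).get("detail")
                raise HTTPException(status_code=res.status, detail=detail)
            return await res.json()


async def process_chat_sketch(form_data: dict) -> dict:
    # main.py pattern: re-raise an existing HTTPException before the generic
    # handler so the original status and detail reach the client.
    try:
        return await call_inlet_filter("http://pipelines.local/v1/chat", form_data)
    except HTTPException:
        raise  # the added re-raise: preserve the original HTTP status/detail
    except Exception as e:
        raise HTTPException(status_code=400, detail=str(e))

The ordering in process_chat_sketch is the whole fix on the main.py side: except HTTPException must precede the generic except Exception, otherwise the original status code and detail are flattened away.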

Screenshots or Videos

  • Not really applicable for this backend/API fix.
  • I can attach a screenshot of the passing terminal test output if that is helpful.
    [screenshot: passing test CLI output]

Testing

I personally tested all changes in this PR.

Steps:

  1. I checked out this branch against dev.
  2. I ran:
    python3 test_open_webui_pipeline_http_errors.py (script was created with the help of Codex)
  3. That script executes the current source from this branch and verifies:
    • process_pipeline_inlet_filter() preserves the HTTP status and detail from a pipeline inlet error response
    • process_chat() does not swallow an existing HTTPException

test_open_webui_pipeline_http_errors.py

import ast
import asyncio
import textwrap
import types
import unittest
import sys
from pathlib import Path


# Path to the local Open WebUI checkout; Path(...).expanduser() is required
# because the value is joined with "/" below and "~" must be resolved.
OPEN_WEBUI_ROOT = Path("~/open-webui").expanduser()


class HTTPException(Exception):
    """Stand-in for fastapi.HTTPException so the extracted code runs without FastAPI."""

    def __init__(self, status_code: int, detail=None):
        super().__init__(detail)
        self.status_code = status_code
        self.detail = detail


class DummyLog:
    """No-op logger standing in for the module-level `log` used by the extracted code."""

    def debug(self, *_args, **_kwargs):
        pass

    def info(self, *_args, **_kwargs):
        pass

    def exception(self, *_args, **_kwargs):
        pass


def _extract_async_function(path: Path, name: str, parent_name: str | None = None) -> str:
    """Return the source of async def `name`, optionally scoped to parent `parent_name`."""
    source = path.read_text(encoding="utf-8")
    tree = ast.parse(source)

    # Attach parent links so module-level and nested definitions can be distinguished.
    for node in ast.walk(tree):
        for child in ast.iter_child_nodes(node):
            child.parent = node

    for node in ast.walk(tree):
        if not isinstance(node, ast.AsyncFunctionDef) or node.name != name:
            continue

        parent = getattr(node, "parent", None)
        if parent_name is None and isinstance(parent, ast.Module):
            return textwrap.dedent(ast.get_source_segment(source, node))
        if (
            parent_name is not None
            and isinstance(parent, ast.AsyncFunctionDef)
            and parent.name == parent_name
        ):
            return textwrap.dedent(ast.get_source_segment(source, node))

    raise ValueError(f"Could not find async function {name!r} in {path}")


class OpenWebUIPipelineHttpErrorTests(unittest.IsolatedAsyncioTestCase):
    async def test_process_pipeline_inlet_filter_preserves_http_error(self):
        """pipeline inlet errors keep their HTTP status and detail."""
        router_path = OPEN_WEBUI_ROOT / "backend/open_webui/routers/pipelines.py"
        helper_source = _extract_async_function(router_path, "process_pipeline_inlet_filter")
        router_source = router_path.read_text(encoding="utf-8")
        sort_node = next(
            node
            for node in ast.walk(ast.parse(router_source))
            if isinstance(node, ast.FunctionDef) and node.name == "get_sorted_filters"
        )
        sort_source = textwrap.dedent(ast.get_source_segment(router_source, sort_node))

        class ClientResponseError(Exception):
            def __init__(self, status: int):
                super().__init__(status)
                self.status = status

        class FakeResponse:
            status = 429
            content_type = "application/json"

            async def json(self):
                return {"detail": "Rate limit exceeded. Please try again later."}

            def raise_for_status(self):
                raise ClientResponseError(self.status)

        class FakeRequestContext:
            def __init__(self, response):
                self.response = response

            async def __aenter__(self):
                return self.response

            async def __aexit__(self, *_args):
                return False

        class FakeSession:
            def __init__(self, response):
                self.response = response

            async def __aenter__(self):
                return self

            async def __aexit__(self, *_args):
                return False

            def post(self, *_args, **_kwargs):
                return FakeRequestContext(self.response)

        fake_aiohttp = types.SimpleNamespace(
            ClientResponseError=ClientResponseError,
            ClientSession=lambda trust_env=True: FakeSession(FakeResponse()),
        )

        namespace = {
            "aiohttp": fake_aiohttp,
            "HTTPException": HTTPException,
            "AIOHTTP_CLIENT_SESSION_SSL": None,
            "log": DummyLog(),
        }
        exec(sort_source, namespace)
        exec(helper_source, namespace)

        request = types.SimpleNamespace(
            app=types.SimpleNamespace(
                state=types.SimpleNamespace(
                    config=types.SimpleNamespace(
                        OPENAI_API_BASE_URLS=["http://pipelines.local/v1"],
                        OPENAI_API_KEYS=["test-key"],
                    )
                )
            )
        )
        user = types.SimpleNamespace(
            id="user-1",
            email="user@example.com",
            name="Test User",
            role="user",
        )
        models = {
            "chat-model": {"id": "chat-model"},
            "rate-limit-filter": {
                "id": "rate-limit-filter",
                "urlIdx": "0",
                "pipeline": {"type": "filter", "pipelines": ["*"], "priority": 0},
            },
        }

        with self.assertRaises(HTTPException) as ctx:
            await namespace["process_pipeline_inlet_filter"](
                request, {"model": "chat-model"}, user, models
            )

        self.assertEqual(ctx.exception.status_code, 429)
        self.assertEqual(
            ctx.exception.detail,
            "Rate limit exceeded. Please try again later.",
        )

    async def test_process_chat_re_raises_http_exception(self):
        """chat processing does not swallow an existing HTTPException."""
        main_path = OPEN_WEBUI_ROOT / "backend/open_webui/main.py"
        process_chat_source = _extract_async_function(main_path, "process_chat", "chat_completion")

        async def process_chat_payload(*_args, **_kwargs):
            raise HTTPException(429, "Rate limit exceeded. Please try again later.")

        async def chat_completion_handler(*_args, **_kwargs):
            raise AssertionError("chat_completion_handler should not run")

        async def process_chat_response(*_args, **_kwargs):
            raise AssertionError("process_chat_response should not run")

        namespace = {
            "asyncio": asyncio,
            "HTTPException": HTTPException,
            "log": DummyLog(),
            "process_chat_payload": process_chat_payload,
            "chat_completion_handler": chat_completion_handler,
            "build_chat_response_context": lambda *_args, **_kwargs: None,
            "process_chat_response": process_chat_response,
            "get_event_emitter": lambda *_args, **_kwargs: None,
            "Chats": types.SimpleNamespace(
                upsert_message_to_chat_by_id_and_message_id=lambda *_args, **_kwargs: None
            ),
            "model_id": "chat-model",
            "tasks": None,
        }
        exec(process_chat_source, namespace)

        with self.assertRaises(HTTPException) as ctx:
            await namespace["process_chat"](None, {}, None, {}, {})

        self.assertEqual(ctx.exception.status_code, 429)
        self.assertEqual(
            ctx.exception.detail,
            "Rate limit exceeded. Please try again later.",
        )


if __name__ == "__main__":
    print("Running Open WebUI pipeline HTTP error checks...\n", file=sys.stderr, flush=True)
    unittest.main(verbosity=2, buffer=True)

Result:
CLI output:

Running Open WebUI pipeline HTTP error checks...

test_process_chat_re_raises_http_exception (__main__.OpenWebUIPipelineHttpErrorTests.test_process_chat_re_raises_http_exception)
chat processing does not swallow an existing HTTPException. ... ok
test_process_pipeline_inlet_filter_preserves_http_error (__main__.OpenWebUIPipelineHttpErrorTests.test_process_pipeline_inlet_filter_preserves_http_error)
pipeline inlet errors keep their HTTP status and detail. ... ok

----------------------------------------------------------------------
Ran 2 tests in 0.053s

OK
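
A note on the testing technique: rather than importing open_webui (and its full dependency graph), the script extracts just the functions under test with ast and execs them into a controlled namespace. A standalone illustration of that pattern, using a hypothetical sample.py written on the fly:

import ast
import asyncio
import textwrap
from pathlib import Path

# Hypothetical module, created only for this illustration.
Path("sample.py").write_text("async def greet():\n    return 'hi'\n", encoding="utf-8")

source = Path("sample.py").read_text(encoding="utf-8")
# Locate the async function node and recover its exact source segment.
node = next(
    n
    for n in ast.walk(ast.parse(source))
    if isinstance(n, ast.AsyncFunctionDef) and n.name == "greet"
)
namespace = {}
exec(textwrap.dedent(ast.get_source_segment(source, node)), namespace)

print(asyncio.run(namespace["greet"]()))  # prints: hi

This is what keeps the two tests above fast and dependency-free; the trade-off is that the extracted functions run against hand-built stand-ins (HTTPException, DummyLog, the fake aiohttp) rather than the real modules.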

Contributor License Agreement

  • By submitting this pull request, I confirm that I have read and fully agree to the Contributor License Agreement (CLA) (https://github.com/open-webui/open-webui/blob/main/CONTRIBUTOR_LICENSE_AGREEMENT), and I am providing my contributions under its terms.

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.


Reference: github-starred/open-webui#49880