[GH-ISSUE #17472] issue: Every request is received three times; even `pipe` is called 3 times per invocation #56965

Closed
opened 2026-05-05 20:19:40 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @cvasani on GitHub (Sep 16, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/17472

Check Existing Issues

  • I have searched for any existing and/or related issues.
  • I have searched for any existing and/or related discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

latest

Ollama Version (if applicable)

NA

Operating System

Unix

Browser (if applicable)

NA

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

{"text": "2025-09-16T04:28:24.944757+0000 | WARNING | No OAuth session found for user 280ca759-42f1-4854-b81f-42c371ec2013, session None\n", "record": {"elapsed": {"repr": "0:01:32.510668", "seconds": 92.510668}, "exception": null, "extra": {}, "file": {"name": "oauth.py", "path": "/app/backend/open_webui/utils/oauth.py"}, "function": "get_oauth_token", "level": {"icon": "⚠️", "name": "WARNING", "no": 30}, "line": 178, "message": "No OAuth session found for user 280ca759-42f1-4854-b81f-42c371ec2013, session None", "module": "oauth", "name": "open_webui.utils.oauth", "process": {"id": 1, "name": "MainProcess"}, "thread": {"id": 281472872222752, "name": "MainThread"}, "time": {"repr": "2025-09-16 04:28:24.944757+00:00", "timestamp": 1757996904.944757}}}
{"text": "2025-09-16T04:28:24.944757+0000 | WARNING | No OAuth session found for user 280ca759-42f1-4854-b81f-42c371ec2013, session None\n", "record": {"elapsed": {"repr": "0:01:32.510668", "seconds": 92.510668}, "exception": null, "extra": {}, "file": {"name": "oauth.py", "path": "/app/backend/open_webui/utils/oauth.py"}, "function": "get_oauth_token", "level": {"icon": "⚠️", "name": "WARNING", "no": 30}, "line": 178, "message": "No OAuth session found for user 280ca759-42f1-4854-b81f-42c371ec2013, session None", "module": "oauth", "name": "open_webui.utils.oauth", "process": {"id": 1, "name": "MainProcess"}, "thread": {"id": 281472872222752, "name": "MainThread"}, "time": {"repr": "2025-09-16 04:28:24.944757+00:00", "timestamp": 1757996904.944757}}}
{"text": "2025-09-16T04:28:24.944757+0000 | WARNING | No OAuth session found for user 280ca759-42f1-4854-b81f-42c371ec2013, session None\n", "record": {"elapsed": {"repr": "0:01:32.510668", "seconds": 92.510668}, "exception": null, "extra": {}, "file": {"name": "oauth.py", "path": "/app/backend/open_webui/utils/oauth.py"}, "function": "get_oauth_token", "level": {"icon": "⚠️", "name": "WARNING", "no": 30}, "line": 178, "message": "No OAuth session found for user 280ca759-42f1-4854-b81f-42c371ec2013, session None", "module": "oauth", "name": "open_webui.utils.oauth", "process": {"id": 1, "name": "MainProcess"}, "thread": {"id": 281472872222752, "name": "MainThread"}, "time": {"repr": "2025-09-16 04:28:24.944757+00:00", "timestamp": 1757996904.944757}}}

Actual Behavior

It should not emit multiple status updates.
The call should happen only once.
"Hello, World! 0" should have been printed, since this is a single call with a start value of 0.

Steps to Reproduce

Run Open WebUI in Docker, create the pipe below, and observe the logging.

from pydantic import BaseModel, Field
from typing import Optional, Callable, Dict, Any, Awaitable
import asyncio
import sys
from loguru import logger

logger.remove(0)

logger.add(sys.stderr, format="{time} | {level} | {message}")

logger.add(
    "/tmp/logs/test.log",
    format="{time} | {level} | {message}",
    level="WARNING",
    serialize=True,
)
logger.warning("Happy logging with Loguru!")


class Pipe:
    class Valves(BaseModel):
        MODEL_ID: str = Field(default="")

    def __init__(self):
        self.valves = self.Valves()
        self.emitter = None
        self.id = "TEST_P"
        self.type = "manifold"
        self.name = "TEST_P"
        self.increment = 1

    def pipes(self):
        return [{"id": "model_id_1", "name": "model_1"}]

    async def pipe(
        self,
        body: dict,
        __event_emitter__: Optional[Callable[[Dict[str, Any]], Awaitable[None]]] = None,
    ):
        # Log initial configuration and input body
        logger.log("DEBUG", "{}", body)
        self.emitter = __event_emitter__
        print(f"Starting process with config: {self.valves}, input: {body}")
        await self._emit_status("Sending request to Beatoven.ai API...")
        # Three steps with a 1-second delay between each
        for i in range(1, 4):
            await self._emit_status(f"Step {i}: Processing data...")
            await asyncio.sleep(1)  # Wait for 1 second

        # Final logic to return the result
        model = body.get("model", "")
        self.increment += 1
        result = f"{model}: Hello, World! {self.increment}"
        print(result)  # the original print was missing the f-string prefix
        await self._emit_status("Done", True)
        return result

    async def _emit_status(self, description: str, done: bool = False) -> None:
        """Send a status update via the event emitter, if one was provided."""
        if self.emitter:
            await self.emitter(
                {
                    "type": "status",
                    "data": {
                        "description": description,
                        "done": done,
                    },
                }
            )
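To see what a single invocation of this pipe produces outside Open WebUI, the class above can be driven with a stub event emitter. This is a minimal harness sketch (the `sleep` delays are dropped for speed, and `stub_emitter` is an illustrative stand-in for the `__event_emitter__` that Open WebUI injects):

```python
import asyncio

# Minimal stand-in for the Pipe above (logging and sleeps omitted).
class MiniPipe:
    def __init__(self):
        self.emitter = None
        self.increment = 1  # same start value as the reported pipe

    async def pipe(self, body, __event_emitter__=None):
        self.emitter = __event_emitter__
        await self._emit_status("Sending request to Beatoven.ai API...")
        for i in range(1, 4):
            await self._emit_status(f"Step {i}: Processing data...")
        self.increment += 1
        result = f"{body.get('model', '')}: Hello, World! {self.increment}"
        await self._emit_status("Done", True)
        return result

    async def _emit_status(self, description, done=False):
        if self.emitter:
            await self.emitter(
                {"type": "status", "data": {"description": description, "done": done}}
            )


events = []


async def stub_emitter(event):
    events.append(event)  # record every status update the pipe emits


result = asyncio.run(MiniPipe().pipe({"model": "model_id_1"}, stub_emitter))
print(result)       # model_id_1: Hello, World! 2
print(len(events))  # 5 status events: start, 3 steps, done
```

Run standalone, a single call yields exactly one result and one set of status events, which is what makes the triplicated output inside Open WebUI stand out.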

Logs & Screenshots

Screenshot: https://github.com/user-attachments/assets/5e0fdb76-dfdc-4126-b555-1c41e4a26dc1

Additional Information

Screenshot: https://github.com/user-attachments/assets/c8ea37b1-c853-48ce-a78e-4cc50a21698e
GiteaMirror added the bug label 2026-05-05 20:19:40 -05:00
Author
Owner

@cvasani commented on GitHub (Sep 16, 2025):

This is probably due to DEFAULT_TAGS_GENERATION_PROMPT_TEMPLATE. Can we have some advice on how to avoid it being called multiple times?

Logs are also duplicated multiple times, pushing log sizes beyond gigabytes.

Author
Owner

@Classic298 commented on GitHub (Sep 16, 2025):

You are aware that multiple model calls are being made to generate tags, titles, follow-up prompts, query generation, and more, yes?
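Those background task calls (title, tags, follow-up generation) go through the same pipe, which would explain extra invocations. A hedged sketch of a guard that skips heavy work for such calls follows; the exact request shape is an assumption (some Open WebUI versions carry a `task` field in the request metadata), so verify against your deployment before relying on it:

```python
# Hedged sketch: skip heavy work when a request looks like a background task
# (title/tag/follow-up generation). The "metadata"/"task" keys are assumptions
# about the request shape, not a confirmed Open WebUI contract.
def is_background_task(body: dict) -> bool:
    metadata = body.get("metadata") or {}
    return metadata.get("task") is not None


def handle(body: dict) -> str:
    if is_background_task(body):
        # Return a cheap response so task calls don't trigger the full pipe
        return ""
    return "full pipe response"


print(handle({"metadata": {"task": "title_generation"}}))  # cheap path
print(handle({"model": "model_id_1"}))                     # full path
```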

Author
Owner

@rgaricano commented on GitHub (Sep 16, 2025):

Your pipe has self.type = "manifold" and implements a pipes() method that returns a list of sub-pipes. When Open WebUI processes manifold pipes, it creates a separate model entry for each sub-pipe returned by pipes().

However, your pipe() method is still executed for each request, causing the status updates to repeat. The issue is that your pipe function is being invoked multiple times: once for each sub-pipe, or potentially several times due to how the request is processed.

3 Solutions:

  • Remove Manifold Type
    # Remove this line: self.type = "manifold"
  • Add Request Deduplication (this requires initializing self.processed_requests = set() in __init__, which the snippet elides)
...
async def pipe(self, body: dict, __event_emitter__: Optional[Callable[[Dict[str, Any]], Awaitable[None]]] = None):
    # Create a unique identifier for this request
    request_id = f"{body.get('model', '')}-{id(body)}"

    if request_id in self.processed_requests:
        return f"Already processed: {request_id}"

    self.processed_requests.add(request_id)
...
  • Check Model ID in Manifold
...
async def pipe(self, body: dict, __event_emitter__: Optional[Callable[[Dict[str, Any]], Awaitable[None]]] = None):
    model = body.get("model", "")

    # Only process if this is the specific sub-pipe we want
    if not model.endswith("model_id_1"):
        return f"Skipping processing for {model}" 
...
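The deduplication idea above can be sketched end to end as follows. Note that `id(body)` is only unique while that object is alive, so this sketch keys on a field from the body instead; the `chat_id` key is illustrative, not a confirmed part of the Open WebUI request body:

```python
import asyncio


# Runnable sketch of the deduplication suggestion. processed_requests is
# initialized in __init__ (elided in the snippet above), and the request key
# is built from body fields rather than id(body), which is not stable.
class DedupPipe:
    def __init__(self):
        self.processed_requests = set()

    async def pipe(self, body: dict) -> str:
        # "chat_id" is an assumed field used only to illustrate a stable key
        request_id = f"{body.get('model', '')}-{body.get('chat_id', '')}"
        if request_id in self.processed_requests:
            return f"Already processed: {request_id}"
        self.processed_requests.add(request_id)
        return f"Processed: {request_id}"


pipe = DedupPipe()
body = {"model": "model_id_1", "chat_id": "abc"}
first = asyncio.run(pipe.pipe(body))   # first call does the work
second = asyncio.run(pipe.pipe(body))  # repeat call is short-circuited
print(first)
print(second)
```

Be aware that set-based deduplication in a long-lived pipe instance also suppresses legitimate repeat requests for the same chat, so entries would need to be expired in a real deployment.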
Reference: github-starred/open-webui#56965