[PR #20479] [CLOSED] fix: prevent system prompt duplication in native function calling #48687

Closed
opened 2026-04-30 00:43:14 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/open-webui/open-webui/pull/20479
Author: @jvadura
Created: 1/8/2026
Status: Closed

Base: main ← Head: fix/system-prompt-duplication


📝 Commits (1)

  • 42b731c fix: prevent system prompt duplication in native function calling

📊 Changes

1 file changed (+18 additions, -0 deletions)


📝 backend/open_webui/utils/misc.py (+18 -0)

📄 Description

Summary

Prevents system prompt content from being duplicated during native function calling with MCP tools, which was causing quadratic token growth and excessive API costs.

Problem

When using native function calling mode with MCP tools, each tool call iteration triggers update_message_content() which prepends the system prompt to the existing system message. This causes the same prompt to be duplicated multiple times:

Tool call 1: "System prompt" (~20k tokens)
Tool call 2: "System prompt\nSystem prompt" (~40k tokens)
Tool call 3: "System prompt\nSystem prompt\nSystem prompt" (~60k tokens)

Impact: A 20k token conversation can balloon to 3M+ tokens over multiple tool call iterations, causing massive unnecessary API costs.
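The growth pattern above can be simulated in a few lines. This is a hypothetical illustration (the function and constant names are invented for the sketch, not taken from the codebase): because the full, ever-growing system message is re-sent on every iteration, the total tokens billed across N tool calls grows quadratically in N.

```python
# Hypothetical simulation of the duplication bug: each iteration prepends the
# same system prompt again, so the system message grows linearly per call and
# the *total* tokens sent across all calls grows quadratically.

PROMPT_TOKENS = 20_000  # ~20k-token system prompt, as in the report


def total_tokens_sent(iterations: int) -> int:
    system_message_tokens = 0
    total = 0
    for _ in range(iterations):
        system_message_tokens += PROMPT_TOKENS  # prompt prepended once more
        total += system_message_tokens          # full message re-sent to the API
    return total


print(total_tokens_sent(3))  # 20k + 40k + 60k = 120000
```

Three tool calls with a 20k prompt already cost 120k tokens instead of 60k; with dozens of iterations in an agentic loop, the total quickly reaches the millions reported above.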

Root Cause

The bug occurs in the agentic tool call loop:

  1. Initial request applies system prompt via apply_system_prompt_to_body() with replace=True
  2. Each tool call iteration calls generate_chat_completion() again
  3. The router applies model system prompt via apply_system_prompt_to_body() with replace=False (default)
  4. This calls update_message_content() with append=False, which prepends the content
  5. The same system prompt gets prepended on every iteration
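A simplified sketch of the pre-fix behavior described in steps 4-5 (hypothetical; the names mirror the PR, but the real open-webui implementation operates on message dicts and differs in detail):

```python
# Hypothetical, simplified form of the buggy helper: with append=False it
# unconditionally prepends, so each tool-call iteration duplicates the
# system prompt at the front of the existing system message.

def update_message_content_buggy(existing: str, content: str, append: bool = False) -> str:
    if not existing:
        return content
    if append:
        return f"{existing}\n{content}"
    return f"{content}\n{existing}"  # prepends every time, even if already present


msg = "System prompt"                                        # after the initial request
msg = update_message_content_buggy(msg, "System prompt")     # tool call 2
print(msg)  # "System prompt\nSystem prompt" — duplicated
```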

Fix

Add a check in update_message_content() to skip the update if the content is already present at the start of the existing message. This prevents duplicate prepending while preserving the ability to append genuinely new content.
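The guard described above can be sketched as follows (a hypothetical simplification, not the verbatim +18-line patch to misc.py): prepending is skipped when the existing message already starts with the same content, while appending genuinely new content still works.

```python
# Hypothetical sketch of the fix: skip the update when the content is already
# present at the start of the existing message, so repeated prepends across
# tool-call iterations become no-ops.

def update_message_content(existing: str, content: str, append: bool = False) -> str:
    if not existing:
        return content
    if append:
        return f"{existing}\n{content}"
    if existing.startswith(content):
        return existing  # already applied on a previous iteration; skip
    return f"{content}\n{existing}"


msg = "System prompt"
for _ in range(3):  # three tool-call iterations
    msg = update_message_content(msg, "System prompt")
assert msg == "System prompt"  # no duplication, token count stays stable
```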

Test Plan

  • Enable native function calling mode
  • Connect MCP tool server
  • Trigger multiple tool calls
  • Verify token count stays stable (not growing by ~system_prompt_size each iteration)

Verified: Token count remains stable at ~12k tokens after 3 tool calls (previously would have grown to ~36k+).

Related: #19656


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-30 00:43:14 -05:00

Reference: github-starred/open-webui#48687