[GH-ISSUE #19738] issue: Thinking models render responses inside thinking UI when using native tools #34505

Closed
opened 2026-04-25 08:31:21 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @FujinoXiao on GitHub (Dec 4, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/19738

Check Existing Issues

  • I have searched for any existing and/or related issues.
  • I have searched for any existing and/or related discussions.
  • I have also searched in the CLOSED issues AND CLOSED discussions and found no related items (your issue might already be addressed on the development branch!).
  • I am using the latest version of Open WebUI.

Installation Method

Git Clone

Open WebUI Version

0.6.41

Ollama Version (if applicable)

No response

Operating System

Ubuntu 22.04

Browser (if applicable)

No response

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

When thinking models (e.g., gemini-2.5-pro) use native tools, the response after tool execution should appear in the main chat area.

Actual Behavior

When thinking models use native tools, the response content is incorrectly rendered inside the thinking UI block instead of the main response area.

Note: This issue ONLY occurs with native tools. Function calling tools work correctly.
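For illustration only, here is a hypothetical sketch of the kind of misrouting being reported. This is NOT Open WebUI's actual code: the delta keys, the `in_thinking` flag, and the `route_deltas` function are all invented for the example. It shows how a streaming handler that never clears its "currently thinking" state after a native tool call would append post-tool content to the reasoning buffer instead of the visible response.

```python
# Hypothetical sketch -- NOT Open WebUI's implementation.
# Each delta is a dict with one of the keys: 'reasoning', 'content', 'tool_call'.

def route_deltas(deltas):
    """Route stream deltas into (thinking, response) buffers."""
    thinking, response = [], []
    in_thinking = False
    for d in deltas:
        if "reasoning" in d:
            in_thinking = True
            thinking.append(d["reasoning"])
        elif "tool_call" in d:
            # Bug sketch: the flag is NOT reset here, so content that
            # follows the tool call is still treated as reasoning.
            pass
        elif "content" in d:
            if in_thinking:  # buggy branch: post-tool content lands here
                thinking.append(d["content"])
            else:
                response.append(d["content"])
    return "".join(thinking), "".join(response)


stream = [
    {"reasoning": "plan the tool call... "},
    {"tool_call": "random_words"},
    {"content": "Here are the words: ..."},  # should be visible response
]
thinking, response = route_deltas(stream)
# With this flag handling, the post-tool content ends up in `thinking`
# and `response` stays empty -- matching the symptom described above.
```

Whether the real bug lives in the stream parser, the tag matching, or the UI component is exactly what would need investigating.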

Steps to Reproduce

First, create this tool:

```python
import random
from typing import Annotated
from pydantic import Field


class Tools:
    def __init__(self):
        self.words = [
            # Nouns
            "星辰", "山川", "流云", "明月", "清风",
            "落叶", "烟雨", "霜雪", "晨曦", "暮色",
            # Verbs
            "追寻", "眺望", "徘徊", "沉醉", "飘零",
            "凝望", "守候", "漫步", "沉思", "回眸",
            # Adjectives
            "璀璨", "静谧", "悠远", "朦胧", "苍茫",
            "辽阔", "空灵", "淡雅", "绚烂", "幽深",
            # Idioms
            "风花雪月", "沧海桑田", "浮光掠影",
            "云淡风轻", "柳暗花明", "春暖花开",
        ]

    def random_words(
        self,
        count: Annotated[
            int, Field(description="Number of words to generate (1-20)", ge=1, le=20)
        ] = 5,
    ) -> str:
        """
        Randomly generate the specified number of Chinese words.
        """
        result = random.sample(self.words, min(count, len(self.words)))
        return "、".join(result)
```
  2. Connect gemini-2.5-pro via OpenRouter (or the Google AI API)

  3. Start a new chat and select gemini-2.5-pro as the model

  4. Enable the native tool created in step 1

  5. Send this prompt:

Call my random_words tool 3 times consecutively. Rules: 1. After EACH tool call, write ~50 tokens reflecting on the words and brainstorming a sentence idea 2. After all 3 calls are complete, think deeply (at least 500 tokens) about how to combine all collected words 3. Finally, generate a short story incorporating the words from all 3 calls

  6. Observe: the model's responses after tool calls are rendered inside the thinking UI block instead of the main response area

Note: This issue also occurs occasionally during normal chat with thinking models using native tools. The complex prompt above is designed to reliably reproduce the issue, but simpler prompts can trigger it intermittently.
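As a sanity check, the tool logic above can be exercised directly in Python, outside Open WebUI, to confirm the tool itself behaves correctly (the word list is copied from the `Tools` class; the standalone `random_words` function here is a simplified stand-in for the method):

```python
import random

# Word list copied from the Tools class in the reproduction code.
WORDS = [
    "星辰", "山川", "流云", "明月", "清风", "落叶", "烟雨", "霜雪", "晨曦", "暮色",
    "追寻", "眺望", "徘徊", "沉醉", "飘零", "凝望", "守候", "漫步", "沉思", "回眸",
    "璀璨", "静谧", "悠远", "朦胧", "苍茫", "辽阔", "空灵", "淡雅", "绚烂", "幽深",
    "风花雪月", "沧海桑田", "浮光掠影", "云淡风轻", "柳暗花明", "春暖花开",
]


def random_words(count: int = 5) -> str:
    """Same logic as Tools.random_words: sample `count` distinct words."""
    return "、".join(random.sample(WORDS, min(count, len(WORDS))))


out = random_words(3)
# Three distinct words, joined by the Chinese enumeration comma "、".
assert len(out.split("、")) == 3
```

The tool output itself is correct in both cases; only where the UI renders the model's subsequent text differs between native and function-calling modes.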

Logs & Screenshots

Two screenshots are attached, showing the model's post-tool responses rendered inside the thinking UI block.

Additional Information

  • Affects: Native tools only
  • Does NOT affect: Function calling tools
  • Tested model: gemini-2.5-pro (via OpenRouter)
  • This bug also occurs sporadically during normal conversations with thinking models + native tools. The detailed reproduction steps above use a complex multi-tool-call prompt to trigger the issue consistently.
GiteaMirror added the bug label 2026-04-25 08:31:21 -05:00

@owui-terminator[bot] commented on GitHub (Dec 4, 2025):

🔍 Similar Issues Found

I found some existing issues that might be related to this one. Please check if any of these are duplicates or contain helpful solutions:

  1. #19711 issue: Editing function for models broken
    by skleffmann • Dec 03, 2025 • bug

  2. #16788 issue: Rendering bug when the response from the model contains <think>
    by alanxmay • Aug 21, 2025 • bug

  3. #19702 issue: Image generation tool causes inconsistency between model response and actual generated image
    by manwallet • Dec 03, 2025 • bug

  4. #14282 issue: thoughts may not be rendered correctly when using with tools
    by funnycups • May 24, 2025 • bug

  5. #19103 issue: no response from the model when ask in "channels"
    by silenceroom • Nov 11, 2025 • bug

  6. #13322 issue: Think tags not playing well with Native Tools enabled.
    by ivanwong1989 • Apr 29, 2025 • bug

  7. #14561 issue: thinking not showing up for Openrouter models anymore after recent update
    by amanat361 • May 31, 2025 • bug

  8. #16730 issue: Models not generating answers when web search is active
    by IMJONEZZ • Aug 19, 2025 • bug

  9. #19439 issue: models do not load in workspace - SyntaxError: Unexpected token "Internal S"... is not valid JSON
    by arslancloud • Nov 24, 2025 • bug

  10. #16973 issue: Post-tool “thinking” text leaks outside reasoning tags
    by FabioPolito24 • Aug 27, 2025 • bug


💡 Tips:

  • If this is a duplicate, please consider closing this issue and adding any additional details to the existing one
  • If you found a solution in any of these issues, please share it here to help others

This comment was generated automatically by a bot. Please react with a 👍 if this comment was helpful, or a 👎 if it was not.


@FujinoXiao commented on GitHub (Dec 4, 2025):

Reviewed all related issues; none are duplicates. My issue provides a consistent way to reproduce this bug, which may help debug those older stale issues as well.


@tjbck commented on GitHub (Dec 4, 2025):

Open to reviewing PRs.

Reference: github-starred/open-webui#34505