[GH-ISSUE #21348] issue: v0.8.0 reasoning trace is visually split to many parts, causing the browser to slow down to a halt #58116

Closed
opened 2026-05-05 22:22:26 -05:00 by GiteaMirror · 50 comments
Owner

Originally created by @rotemdan on GitHub (Feb 13, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/21348

Check Existing Issues

  • I have searched for any existing and/or related issues.
  • I have searched for any existing and/or related discussions.
  • I have also searched in the CLOSED issues AND CLOSED discussions and found no related items (your issue might already be addressed on the development branch!).
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

v0.8.0

Ollama Version (if applicable)

No response

Operating System

Windows 11

Browser (if applicable)

Brave (latest)

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

Regular reasoning trace rendering.

Actual Behavior

There seems to be a rendering issue where the reasoning trace is split at each token of output. I think the screen capture is the best way to show this.

Steps to Reproduce

Prompt a model that outputs a reasoning trace.

Logs & Screenshots

Image

Additional Information

I never saw this issue before v0.8.0.

GiteaMirror added the bug label 2026-05-05 22:22:26 -05:00

@rotemdan commented on GitHub (Feb 13, 2026):

To clarify I'm seeing this with my own local backend (called NodeLM).

It happens only on reasoning traces, not on non-reasoning responses.

I tried to see if I get this with OpenRouter and it doesn't seem so (at the moment, based on limited testing).

Something about the stream response my backend is generating is causing v0.8.0 to produce this behavior, while previous versions of open-webui didn't. It's possible other backends also cause that, but I'm not able to test all of them at the moment.

Edit: I believe my backend produces standard streaming outputs. Speculation: it may actually be related to connection latency or the fact the server is local.

I'll need to investigate what exactly is causing it.
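
For context, an OpenAI-compatible backend streams server-sent events whose delta chunks the frontend concatenates. A minimal sketch of what such a stream looks like and how a client reassembles it (the field names follow the OpenAI chat-completions streaming format; the content is hypothetical):

```python
import json

# Hypothetical SSE lines as an OpenAI-compatible backend might emit them.
# Each "data:" line carries one JSON chunk with a partial "delta".
sse_lines = [
    'data: {"choices": [{"delta": {"content": "<think>\\n"}}]}',
    'data: {"choices": [{"delta": {"content": "Let me reason"}}]}',
    'data: {"choices": [{"delta": {"content": " about this."}}]}',
    'data: {"choices": [{"delta": {"content": "\\n</think>\\n\\nHello!"}}]}',
    "data: [DONE]",
]

def reassemble(lines):
    """Concatenate the delta contents of a chat-completions stream."""
    text = ""
    for line in lines:
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        text += chunk["choices"][0]["delta"].get("content", "")
    return text

print(reassemble(sse_lines))
# → <think>\nLet me reason about this.\n</think>\n\nHello!
```

A frontend that splits the reasoning view per chunk instead of per `<think>...</think>` block would produce exactly the symptom shown in the screen capture.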


@LostVector commented on GitHub (Feb 13, 2026):

getting this too ... so it wasn't always like this? lol


@Yundin commented on GitHub (Feb 13, 2026):

Having the same issue after the 0.8.0 update.
I'm using a pipe function to OpenRouter. I've checked that its output hasn't changed: it yields a <think>\n block and reasoning tokens after that, followed by \n</think>\n\n and the rest of the content. It is the visual representation that changed.
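
A minimal sketch of a pipe emitting that tag pattern (a hypothetical generator, not the linked OpenRouter function):

```python
def pipe_stream():
    """Hypothetical pipe output: reasoning wrapped in <think> tags,
    followed by the visible answer, yielded token by token."""
    yield "<think>\n"
    for token in ["First", " consider", " the", " question."]:
        yield token
    yield "\n</think>\n\n"
    for token in ["The", " answer", " is", " 42."]:
        yield token

full = "".join(pipe_stream())
print(full)
```

Since the concatenated output is unchanged between versions, the regression has to be in how the frontend segments these chunks while rendering.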


@spectre-pro commented on GitHub (Feb 13, 2026):

I use Gemini 3 Pro and also have this problem.


@sailstudio commented on GitHub (Feb 13, 2026):

I use minimax-m2.1 (locally hosted vLLM provider) and also have this problem, and I cannot revert to the last version (the database changed).


@ByzantineProcess commented on GitHub (Feb 13, 2026):

Also getting this issue with an Ollama backend using deepseek-r1 and qwen3. gpt-oss and glm-4.7-flash don't seem to be affected?


@RedBlizard commented on GitHub (Feb 13, 2026):

I have the same problem. For now, rolling back to release v0.7.2 fixed it, and I removed Watchtower from my docker-compose for now. I hope this will be fixed soon.


@spectre-pro commented on GitHub (Feb 13, 2026):

> I have the same problem. For now, rolling back to release v0.7.2 fixed it, and I removed Watchtower from my docker-compose for now. I hope this will be fixed soon.

If the database has changed, can I still roll back?


@kleymenus commented on GitHub (Feb 13, 2026):

I have this issue too, with the OpenAI API (Mistral, Google) and Ollama.


@Classic298 commented on GitHub (Feb 13, 2026):

During extensive testing we DID see this behavior, but only for the MiniMax API.

We never saw this behavior on the OpenAI API, OpenRouter, LiteLLM, or Vertex AI.

I assumed the MiniMax API had issues due to the very new model release, and I couldn't reproduce it on OpenRouter at all.

What API endpoints are you using?
There has to be a difference in how these providers stream their responses.


@kleymenus commented on GitHub (Feb 13, 2026):

@Classic298 for instance:

https://api.mistral.ai/v1
https://generativelanguage.googleapis.com/v1beta/openai

and Ollama

(in previous versions, everything worked with the same initial settings)


@RedBlizard commented on GitHub (Feb 13, 2026):

I don't use external APIs, just Ollama with http://host.docker.internal:11434


@Classic298 commented on GitHub (Feb 13, 2026):

Ok will try with generative language openai endpoint by Google. Give me a minute.


@kleymenus commented on GitHub (Feb 13, 2026):

@Classic298 just a note: reasoning tags could be inside the stream of the main assistant's response


@Omaha2002 commented on GitHub (Feb 13, 2026):

Same here; it doesn't happen on the Mistral API, OpenAI API, Grok API, or Anthropic API, they all act "normal".

Image

Qwen3 running on vLLM also shows "Thought for less than one second..." every second.

Image

In Advanced Parameters I added custom reasoning tags for Qwen that worked up until 0.8.0.

Image

For now I disabled them and can see the reasoning.


@Classic298 commented on GitHub (Feb 13, 2026):

I cannot reproduce it via the generativelanguage.googleapis.com/v1beta/openai endpoint, but maybe that's because they don't send the reasoning traces at all.


@Classic298 commented on GitHub (Feb 13, 2026):

Any other endpoints I can try that don't work? Unfortunately I don't have Mistral.


@kleymenus commented on GitHub (Feb 13, 2026):

@Omaha2002 @Classic298

Google hides reasoning tags by default on this API type.
For the experiment, add this to the very end of your prompt for the Mistral API or Google API:

# SYSTEM OVERRIDE & OUTPUT RULES
IMPORTANT: You are STRICTLY FORBIDDEN from replying without the <think> tags.
Even for simple "Hello", you must first generate a <think> block where you briefly analyze the interaction as Assistant.
NO EXCEPTIONS.
Format: <think>...internal monologue...</think> [Final Answer]

@Classic298 commented on GitHub (Feb 13, 2026):

Are you sure a prompt is going to override the API's behavior of not sending internal reasoning?


@spectre-pro commented on GitHub (Feb 13, 2026):

Google hides reasoning tags by default on this API type.
You need to add this under Advanced Params:

extra_body

{"google": {"thinking_config": {"include_thoughts": true}}}
Image
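
With the official openai Python client, a vendor extension like this can be forwarded verbatim into the request body via its extra_body parameter. A sketch of what the body addition looks like (the nested keys are exactly what spectre-pro quotes; the surrounding usage is an assumption):

```python
import json

# Google's vendor extension for exposing reasoning on the
# OpenAI-compatible endpoint, as described in the comment above.
extra_body = {"google": {"thinking_config": {"include_thoughts": True}}}

# What the addition looks like on the wire inside the JSON request body:
print(json.dumps(extra_body))
```

With the openai client you would pass this as `extra_body=extra_body` to `client.chat.completions.create(...)`; the client merges unknown fields into the outgoing request.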

@Classic298 commented on GitHub (Feb 13, 2026):

thanks, i will try that


@Classic298 commented on GitHub (Feb 13, 2026):

Thanks, it's reproducible with that. Will investigate.


@kleymenus commented on GitHub (Feb 13, 2026):

@Classic298 this is to emulate the general response and problem: the model begins to incorporate thoughts within the main block itself, yes. So even if you don't have Mistral or any other API access, you can reproduce the issue in a generic way.


@spectre-pro commented on GitHub (Feb 13, 2026):

So can I roll back? I'm worried that doing so will corrupt my database.


@Classic298 commented on GitHub (Feb 13, 2026):

You can roll back if you also restore your database from a backup.

The migrations have moved your access control, chat messages, and prompts into new tables.
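
For SQLite-based setups, a consistent snapshot can be taken before an upgrade with Python's standard-library backup API. A sketch, where the default Open WebUI Docker data path is an assumption (adjust to your data directory):

```python
import sqlite3

def backup_db(src_path: str, dest_path: str) -> None:
    """Snapshot a live SQLite database using the online backup API."""
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    try:
        # Produces a consistent copy even while the app is writing.
        src.backup(dest)
    finally:
        dest.close()
        src.close()

# Example; the container path is assumed, not confirmed by this thread:
# backup_db("/app/backend/data/webui.db", "/app/backend/data/webui.db.bak")
```

Keeping such a snapshot alongside each upgrade is what makes a post-migration rollback like this one possible.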


@spectre-pro commented on GitHub (Feb 13, 2026):

Unfortunately, I didn't back up the database before the update.


@Classic298 commented on GitHub (Feb 13, 2026):

If this is a backend issue (it might well be), then everyone here can easily apply the changes without waiting for a new release or even for the PR to get merged. All installation methods let you change the backend Python files, restart, and be done.

PS: Generally speaking, we encourage everyone to have a separate dev environment, either for actually using it on a day-to-day basis (for single-user setups) or as a testing environment to test upcoming releases for your production use. If more people can help test the dev branch, things like this could be caught earlier. We do a lot of testing already, but as is evident, despite testing many different providers, we can't try them all.


@Classic298 commented on GitHub (Feb 13, 2026):

Might have a fix ready. Give me a little bit to verify.


@spectre-pro commented on GitHub (Feb 13, 2026):

ok thx


@Classic298 commented on GitHub (Feb 13, 2026):

Hell yeah
The fix works with the generativelanguage endpoint and the MiniMax endpoint, and doesn't break OpenRouter (which previously worked).

So it looks good.


@spectre-pro commented on GitHub (Feb 13, 2026):

Where or when can we get this fix?


@Classic298 commented on GitHub (Feb 13, 2026):

@spectre-pro give me a minute 👀 patience


@spectre-pro commented on GitHub (Feb 13, 2026):

ok...


@Classic298 commented on GitHub (Feb 13, 2026):

Anyone with this issue: I encourage you to manually apply these changes to your Open WebUI (before anyone asks, yes, you can also modify the Docker image; just docker exec and bash into it), then restart after modifying your backend and try it out.

https://github.com/open-webui/open-webui/pull/21355
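
A sketch of hot-patching a running container (the container name open-webui and the /app/backend path are assumptions; adjust to your setup):

```shell
# Open a shell inside the running container
docker exec -it open-webui bash

# Inside the container: edit the backend files touched by the PR,
# e.g. by pasting the changed file contents under /app/backend.

# Back on the host: restart the container so the patched code is loaded
docker restart open-webui
```

Note that edits made this way live only in the container's writable layer and are lost if the container is recreated, so treat this as a stopgap until the fix ships in a release.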


@Classic298 commented on GitHub (Feb 13, 2026):

Please come back and confirm whether it's working (or not) when done.

Additionally, I ran 8 different tests on this change and they all passed, so in addition to my own testing, I am highly confident this should work for everyone.


@kleymenus commented on GitHub (Feb 13, 2026):

fixed for me 👍


@spectre-pro commented on GitHub (Feb 13, 2026):

it works for me


@Classic298 commented on GitHub (Feb 13, 2026):

Perfect, thanks for confirming, guys.

Reminder: Generally speaking, we encourage everyone to have a separate dev environment, either for actually using it on a day-to-day basis (for single-user setups) or as a testing environment to test upcoming releases for your production use. If more people can help test the dev branch, things like this could be caught earlier. We do a lot of testing already, but as is evident, despite testing many different providers, we can't try them all.

Will leave this open until the PR is merged.


@RedBlizard commented on GitHub (Feb 13, 2026):

@Classic298 thanks for your support, but I'll have to wait until it is merged into the next release.


@Classic298 commented on GitHub (Feb 13, 2026):

@RedBlizard technically you don't have to. You can just apply these changes manually to the backend files.


@sailstudio commented on GitHub (Feb 13, 2026):

@Classic298 I applied this patch and it fixes the think tag issue, nice work!

But the display may be incorrect when the "think" process contains a <code_interpreter> block; please check this:

Image

@Classic298 commented on GitHub (Feb 13, 2026):

For the code interpreter, why should it render as HTML? It is always JSON.

What model is this? I don't get the non-detected think tags anymore. Also check your advanced model settings: did you define any custom think tags there?


@sailstudio commented on GitHub (Feb 13, 2026):

Image

@Classic298 commented on GitHub (Feb 13, 2026):

@sailstudio could you provide the raw response text for me, please? Just use the copy button below the message.

And please also say whether or not you configured custom think tags.


@Classic298 commented on GitHub (Feb 13, 2026):

Never mind, I could reproduce it and I have a fix for it. One minute.


@spectre-pro commented on GitHub (Feb 13, 2026):

I think the reason is that the model invokes the tool while thinking, rather than after thinking ends.
That causes the tool call to break the thinking area.


@Classic298 commented on GitHub (Feb 13, 2026):

That's not the issue.


@Classic298 commented on GitHub (Feb 13, 2026):

@sailstudio pushed a new fix to the PR

https://github.com/open-webui/open-webui/pull/21355

apply all changes again


@tjbck commented on GitHub (Feb 13, 2026):

Could anyone confirm if this issue has been resolved in dev?


@Classic298 commented on GitHub (Feb 13, 2026):

yes resolved in dev


Reference: github-starred/open-webui#58116