[GH-ISSUE #11064] Magistral ignores nothink (v0.9.0) #33059

Closed
opened 2026-04-22 15:15:40 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @chhu on GitHub (Jun 13, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11064

Originally assigned to: @jmorganca on GitHub.

What is the issue?

Probably a template issue.

ollama run magistral
>>> /set nothink
Set 'nothink' mode.
>>> What is the current date?
<think>
Okay, I need to figure out what the current date is. But wait, I'm a text-based AI model, and my knowledge cutoff
is 2023. That means I don't have real-time information or access to the internet to 
...

Relevant log output


OS

Windows

GPU

No response

CPU

Intel

Ollama version

0.9.0

GiteaMirror added the bug label 2026-04-22 15:15:40 -05:00
@mmorys commented on GitHub (Jun 13, 2025):

With ollama 0.9.0, I have a related issue. Thinking occurs both with `/set think` and `/set nothink`, but with different fences around the thinking.

/set think

Thinking fenced with `Thinking...` and `...done thinking.`.

>>> What is the current date?
Thinking...
Okay, the user wants to know the current date. But how do I know [...]
...done thinking.

**Summary:**
Since I don't have [...]

**Final Answer:**
I don't have real-time information [...]

/set nothink

Thinking fenced with `<think>` and `</think>`.

>>> What is the current date?
<think>
Alright, I need to figure out the current date. But wait, [...]
</think>

**Summary:**
As an AI without real-time capabilities, [...]

Final answer in Markdown format:

The current date is not available [...]

Template and System Prompt (`ollama show --modelfile magistral`)

TEMPLATE """{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1}}
{{- if eq .Role "system" }}[SYSTEM_PROMPT]{{ .Content }}[/SYSTEM_PROMPT]
{{- else if eq .Role "user" }}[INST]{{ .Content }}[/INST]
{{- else if eq .Role "assistant" }}
{{- if and $.IsThinkSet (and $last .Thinking) -}}
<think>
{{ .Thinking }}
</think>
{{- end }}
{{- if .Content }}{{ .Content }}
{{- end }}
{{- if not (eq (len (slice $.Messages $i)) 1) }}</s>
{{- end }}
{{- end }}
{{- end }}"""
SYSTEM "A user will ask you to solve a task. You should first draft your thinking process (inner monologue) until you have derived the final answer. Afterwards, write a self-contained summary of your thoughts (i.e. your summary should be succinct but contain all the critical steps you needed to reach the conclusion). You should use Markdown and Latex to format your response. Write both your thoughts and summary in the same language as the task posed by the user.

Your thinking process must follow the template below:
<think>
Your thoughts or/and draft, like working through an exercise on scratch paper. Be as casual and as long as you want until you are confident to generate a correct answer.
</think>

Here, provide a concise summary that reflects your reasoning and presents a clear final answer to the user.

Problem:"
@rick-github commented on GitHub (Jun 13, 2025):

It's the same issue. Magistral is not a hybrid model in the same way that qwen3 is - there's no switch to enable or disable thinking. deepseek is the same, but in that template an empty think block is added to the prompt to fool deepseek into thinking it's already done its thinking. It currently doesn't work very well (#11010).

The Magistral template doesn't even attempt the fake think block. To disable thinking, the system message has to be modified to remove the explicit instructions to do thinking.
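For comparison, the deepseek-style workaround described above looks roughly like this in template form: when thinking is switched off, the template pre-fills an empty think block after the last user turn so the model believes the reasoning phase is already over. This is a hedged sketch only — it assumes a `$.Think` boolean is exposed to the template alongside the `$.IsThinkSet` seen in the Magistral template above, and it is not the actual deepseek template.

```
{{- else if eq .Role "user" }}[INST]{{ .Content }}[/INST]
{{- if and $last (and $.IsThinkSet (not $.Think)) }}<think>

</think>
{{- end }}
```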

The reason for the different fencing is that when think is true, the client knows it needs to capture the thinking and render it differently. When think is false the client doesn't do this, and so the tokens, including the thinking, are just sent to the output.
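The client-side behavior described above can be sketched roughly as follows: when `think` is true, the client separates the fenced reasoning from the answer before rendering; when it is false, the raw tokens (thinking included) pass straight through. This is an illustrative sketch only — Ollama's real client works on streamed tokens, not complete strings.

```python
import re

def split_thinking(raw: str) -> tuple[str, str]:
    """Separate a <think>...</think> block from the rest of the output.

    Returns (thinking, answer). If no fences are present, the whole
    output is treated as the answer -- which is why, with nothink, the
    raw <think> tags end up visible in the terminal.
    """
    match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if not match:
        return "", raw
    thinking = match.group(1).strip()
    answer = (raw[:match.start()] + raw[match.end():]).strip()
    return thinking, answer

raw = "<think>\nOkay, I need the date...\n</think>\n\nI don't have real-time information."
thinking, answer = split_thinking(raw)
```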

@mmorys commented on GitHub (Jun 13, 2025):

That makes sense. I asked Gemini 2.5 Pro to try to understand the issue, and got a solution along the same lines.


Of course. Here is a corrected and robust Ollama Modelfile that properly implements the think and nothink functionality.

The fundamental issue in your examples was a conflict between the System Prompt and the Template. Your prompt was instructing the model to manually write <think> tags as part of its main content, while the template was simultaneously (and incorrectly) trying to use Ollama's built-in, but separate, thinking feature.

This revised Modelfile harmonizes the template, system prompt, and parameters to use Ollama's native thinking functionality, which is controllable by the /set think and /set nothink commands.

# Description: A model that supports controlled thinking output.
# To use: ollama create my-thinking-model -f ./Modelfile

# Set the base model
FROM magistral:latest

# Define the template
TEMPLATE """{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "system" -}}
[SYSTEM_PROMPT]{{ .Content }}[/SYSTEM_PROMPT]
{{- else if eq .Role "user" -}}
[INST]{{ .Content }}[/INST]
{{- else if eq .Role "assistant" -}}
{{- if and $.IsThinkSet (and $last .Thinking) -}}
<think>
{{ .Thinking }}
</think>
{{- end -}}
{{ .Content -}}
</s>
{{- end -}}
{{- end }}"""

# Define the system prompt
SYSTEM """You are a helpful AI assistant.
First, think step-by-step about the user's request to build a clear plan and generate a correct answer. Your thinking process will not be shown to the user unless requested.
After you have done your thinking, provide a concise, self-contained summary of your reasoning and the final answer. Use Markdown and LaTeX for formatting when appropriate.
"""

# Set model parameters
PARAMETER stop </s>
PARAMETER stop <think>
PARAMETER stop </think>
# This parameter is crucial for enabling the .Thinking variable in the template
PARAMETER think true
@mmorys commented on GitHub (Jun 13, 2025):

As one final sidenote, I am getting the `<think>`/`</think>` output correctly by default with the Unsloth model from Hugging Face (https://huggingface.co/unsloth/Magistral-Small-2506-GGUF). Will switch to that implementation for now.

ollama run hf.co/unsloth/Magistral-Small-2506-GGUF:UD-Q4_K_XL

@Igorgro commented on GitHub (Jun 13, 2025):

I have two issues when trying to use Magistral:

  1. When using it without any specific settings, it "overthinks": the thinking block grows so large that it exhausts the available context (or the message size allowed by ollama), and the model never produces an answer. For example, I asked "Which animal is red?" (not in English, but anyway) and it wrote a huge thinking block full of strange thoughts of the form "it may be a fox, or maybe not a fox", which ran out of the allowed message size without ever providing an answer.
  2. When I set a system message (via `/set system`), it stops thinking but also doesn't give any meaningful answers. For example, it answered "A fire" to the same question from the previous point.

I'm not sure if this is an issue with ollama or with the model itself.
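If the thinking block is exhausting the context window, raising `num_ctx` from the interactive session may at least let the model reach an answer. This is a standard Ollama REPL command, but note it only works around the symptom (truncation), not the overthinking itself:

```
>>> /set parameter num_ctx 16384
```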

@jmorganca commented on GitHub (Jun 17, 2025):

Hi all, this should be fixed now. Note: it's also recommended to set the system prompt to something else. To re-download Magistral (should be fast):

ollama pull magistral
Reference: github-starred/ollama#33059