[GH-ISSUE #11251] prompt_eval_count appears to omit <think> blocks from token accounting #7412

Closed
opened 2026-04-12 19:29:50 -05:00 by GiteaMirror · 8 comments

Originally created by @dandydan888 on GitHub (Jul 1, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11251

What is the issue?

While debugging unusual token reporting from prompt_eval_count, I observed a behavior that may be worth clarifying:

  • When I include assistant reasoning inside a <think>...</think> block within the content of a previous message, the prompt_eval_count in subsequent requests remains unchanged—even if the <think> section contains a large amount of text.
  • In contrast, if the same content is placed outside the <think> tags (or the tag is renamed), prompt_eval_count increases as expected.

This suggests that either:

  • The client/server is explicitly filtering out <think> blocks before prompt construction and tokenization, or
  • The model has been trained to ignore them internally (which seems less likely, since prompt_eval_count is measured at tokenization, before attention takes effect)

For developers interested in full-context fidelity—particularly across multi-turn conversations—this raises a few questions:

  • Is Ollama currently stripping <think> content prior to token accounting and model inference?
  • If so, is there a way to opt in to preserving that content in the serialized prompt for continuity?
  • Would it be possible to expose this behavior explicitly in the docs or support an override?

I understand that <think> is often used to improve user experience by keeping responses concise or structuring intermediate thoughts more clearly. That said, it might be helpful to allow developers a bit more control. For example, if <think> appears explicitly in content when feeding prior messages into the messages array, perhaps Ollama could preserve those blocks—treating them as intentional and bypassing the usual filtering logic. This would allow the majority of users to benefit from clean UI behavior, while enabling devs to maintain continuity of reasoning when needed. That kind of opt-in transparency would offer the best of both worlds: abstraction for convenience, fidelity when requested.


Reproducible behavior:

// Case 1: Using thinking field 
// prompt_eval_count = 24
{
  "model": "deepseek-r1:8b",
  "messages": [
    { "role": "user", "content": "what is 2 + 2?" },
    { "role": "assistant", "content": "2 + 2 equals 4.", "thinking": "..." },
    { "role": "user", "content": "thank you!" }
  ],
  "think": true,
  "stream": false
}
// Case 2: Injecting <think> block into content 
// prompt_eval_count = 24
{
  "model": "deepseek-r1:8b",
  "messages": [
    { "role": "user", "content": "what is 2 + 2?" },
    { "role": "assistant", "content": "<think>...</think>2 + 2 equals 4." },
    { "role": "user", "content": "thank you!" }
  ],
  "think": true,
  "stream": false
}
// Case 3: Using different tag (<thought>) 
// prompt_eval_count = 482
{
  "model": "deepseek-r1:8b",
  "messages": [
    { "role": "user", "content": "what is 2 + 2?" },
    { "role": "assistant", "content": "<thought>...</thought>2 + 2 equals 4." },
    { "role": "user", "content": "thank you!" }
  ],
  "think": true,
  "stream": false
}
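
For anyone who wants to reproduce the comparison end to end, here is a minimal sketch against a local Ollama server (assumes the default port; the request fields follow the bodies above, and FILLER stands in for the elided "..." reasoning text):

import requests  # assumes a local Ollama server on the default port 11434

URL = "http://localhost:11434/api/chat"
FILLER = "step one... step two... " * 50  # stand-in for a long reasoning block

def prompt_tokens(assistant_msg):
    # Send the same three-message history, varying only the assistant turn,
    # and read prompt_eval_count back from the non-streaming response.
    body = {
        "model": "deepseek-r1:8b",
        "messages": [
            {"role": "user", "content": "what is 2 + 2?"},
            assistant_msg,
            {"role": "user", "content": "thank you!"},
        ],
        "think": True,
        "stream": False,
    }
    return requests.post(URL, json=body).json()["prompt_eval_count"]

# Case 1: thinking field, Case 2: inline <think>, Case 3: renamed tag
print(prompt_tokens({"role": "assistant", "content": "2 + 2 equals 4.", "thinking": FILLER}))
print(prompt_tokens({"role": "assistant", "content": f"<think>{FILLER}</think>2 + 2 equals 4."}))
print(prompt_tokens({"role": "assistant", "content": f"<thought>{FILLER}</thought>2 + 2 equals 4."}))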

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-12 19:29:50 -05:00

@rick-github commented on GitHub (Jul 1, 2025):

Removing the think block is recommended (https://api-docs.deepseek.com/guides/reasoning_model#multi-round-conversation) for DeepSeek models.

From the chat template (https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B/blob/main/tokenizer_config.json#L34) of the base model:

{%- if message['role'] == 'assistant' %}
  {% if '</think>' in content %}
    {% set content = content.split('</think>')[-1] %}
  {% endif %}
{% endif %}
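
In plain terms, the template keeps only what follows the final </think>, which is why the block never reaches the prompt. A sketch of the equivalent string operation (plain Python, not Ollama code):

# Python equivalent of the template's content.split('</think>')[-1]
content = "<think>lots of hidden reasoning...</think>2 + 2 equals 4."
stripped = content.split("</think>")[-1]
print(stripped)  # "2 + 2 equals 4." -- the reasoning is discarded before
                 # tokenization, so it never contributes to prompt_eval_count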

@dandydan888 commented on GitHub (Jul 1, 2025):

Ah I see now! Thanks for pointing that out and linking the template. Really appreciate the clarity!


@dandydan888 commented on GitHub (Jul 1, 2025):

I tested another model, qwen3:30b-a3b, and this is what I noticed: manually adding a <think> block into content does count toward prompt_eval_count, which suggests the model processes it just fine. But with the introduction of the thinking property in the recent version of Ollama, where the <think> block gets extracted from the assistant's output and stored separately, there seems to be a gap when it comes to piecing it back together later on.
Once that <think> text is moved into the thinking property, it never seems to make it back into the prompt in the chat history. Hugging Face’s chat_template for the same model, for instance, re-inserts the <think> block when flattening history, so the model maintains access to its own prior reasoning. Ollama’s runtime doesn’t appear to preserve that unless I manually reconstruct it in content (a sketch of that workaround follows the template below).
Could this be an overlooked detail? If it's intentional, what is the rationale? It'd be great if there were a way to opt into reintegrating thinking content into the serialized prompt trail, especially for workflows where introspective continuity matters.

From the chat_template at Hugging Face (https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/tokenizer_config.json):

{%- elif message.role == "assistant" %}
  {%- set reasoning_content = '' %}
  {%- if message.reasoning_content is string %}
    {%- set reasoning_content = message.reasoning_content %}
  {%- else %}
    {%- if '</think>' in content %}
      {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
      {%- set content = content.split('</think>')[-1].lstrip('\n') %}
    {%- endif %}
  {%- endif %}
  {%- if loop.index0 > ns.last_query_index %}
    {%- if loop.last or (not loop.last and reasoning_content) %}
      {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
    {%- else %}
      {{- '<|im_start|>' + message.role + '\n' + content }}
    {%- endif %}
  {%- else %}
    {{- '<|im_start|>' + message.role + '\n' + content }}
  {%- endif %}
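
As a client-side workaround in the meantime, the history can be flattened manually before each request, mirroring what the Hugging Face template does. A minimal sketch (plain Python; reinsert_thinking is a hypothetical helper, and the tag format assumes Qwen3-style <think> blocks):

def reinsert_thinking(messages):
    # Fold each assistant message's `thinking` field back into its content,
    # mirroring how the HF template re-wraps prior reasoning in <think> tags.
    out = []
    for m in messages:
        if m.get("role") == "assistant" and m.get("thinking"):
            m = dict(m)  # copy so the caller's history is left untouched
            m["content"] = "<think>\n" + m.pop("thinking") + "\n</think>\n\n" + m.get("content", "")
        out.append(m)
    return out

Passing the result as the messages array moves the reasoning back into content, where it is tokenized like any other text. Note this only helps with models whose templates pass inline <think> content through (as observed with qwen3:30b-a3b); the DeepSeek template quoted earlier strips everything before the final </think> regardless.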

@rick-github commented on GitHub (Jul 1, 2025):

My reading of the ollama template is that it tries to do this.

{{ else if eq .Role "assistant" }}<|im_start|>assistant
{{ if (and $.IsThinkSet (and .Thinking (or $last (gt $i $lastUserIdx)))) -}}
<think>{{ .Thinking }}</think>
{{ end -}}

If the message is the assistant role, thinking is enabled and there is thinking content, and it's the last message or the index of the message is greater than the last user message, then add the thinking content. However, the last term ((or $last (gt $i $lastUserIdx))) results in the addition of thinking only to the last message, and only if it's the assistant role. I had a quick look at the Jinja template and it's not clear to me what's happening with last_query_index. It seems to be an inverted index, maybe to account for non-user and non-assistant roles? In any case, it seems like your requirement could be satisfied with a change to the ollama template.


@dandydan888 commented on GitHub (Jul 1, 2025):

Thanks again for the quick reply, really appreciate it. I think your read on the template logic is accurate. That said, I’m not sure there’s a typical case where an assistant message would be the last one in the array when sending history back to the LLM. Usually, the last message is from the user, or to a lesser extent from a tool, right?

It would make a lot more sense if the logic keyed off the last assistant message, so the model could carry forward its most recent introspection as context. In any case, updating the Ollama template looks like the right move here. Thanks for pointing that out.


@rick-github commented on GitHub (Jul 1, 2025):

> Usually, the last message is from the user, or to a lesser extent from a tool, right?

Yes. I haven't looked at it in depth yet, but from the brief analysis above I believe the logic is actually incorrect. If I get some time I'll try to understand what the intention is; in the meantime, modifying the logic to set lastUserIdx to the highest-index assistant message and changing gt to eq should get the template to fill in the thinking of the most recent assistant message.
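
To make the difference concrete, here is a small simulation of the index condition (plain Python modeling the template logic, not Ollama code; surviving_thinking is a hypothetical helper):

def surviving_thinking(roles, use_assistant_anchor):
    # Return the indices of assistant messages whose `thinking` the
    # template condition would keep.
    if use_assistant_anchor:
        # Proposed: anchor on the highest-index assistant message, compare with eq.
        anchor = max((i for i, r in enumerate(roles) if r == "assistant"), default=-1)
        keep = lambda i: i == anchor
    else:
        # Current: last message overall, or index greater than the last user index.
        anchor = max((i for i, r in enumerate(roles) if r == "user"), default=-1)
        keep = lambda i: i == len(roles) - 1 or i > anchor
    return [i for i, r in enumerate(roles) if r == "assistant" and keep(i)]

roles = ["user", "assistant", "user"]  # a typical history ending on a user turn
print(surviving_thinking(roles, use_assistant_anchor=False))  # [] -- thinking is always dropped
print(surviving_thinking(roles, use_assistant_anchor=True))   # [1] -- latest assistant turn kept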


@Thhe1oldlady commented on GitHub (Jul 1, 2025):

Dear [Recipient Name],

Could you please provide assistance with my initial project? Any guidance
or support you can offer would be greatly appreciated.

Thank you,
John c

On Tue, Jul 1, 2025 at 10:37 AM dandydan888 @.***>
wrote:

> Closed #11251 (https://github.com/ollama/ollama/issues/11251) as completed.




@Thhe1oldlady commented on GitHub (Jul 1, 2025):

(This comment quoted the original issue description in full, with no additional commentary.)
Reference: github-starred/ollama#7412