[GH-ISSUE #10891] Qwen3 Template error after first message #53672

Closed
opened 2026-04-29 04:27:10 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @Ctrl-Alt-Calvin on GitHub (May 29, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10891

What is the issue?

When running any of the new Qwen3 models, I get a template error on the second message of a conversation (the first message works fine). The same setup works with the llama3.1 model. Here is an example of the error:

>>> hey!
<think>
Okay, the user said, "Hey!" so I need to respond in a friendly and helpful way. Let me start by acknowledging their greeting. Maybe say something like, "Hey! How can I help you today?" That's a good opening. I should keep it simple and positive. I don't want to make it too formal, so just a friendly reply. Also, I should make sure to offer assistance in case they have any questions or need support. Let me check if there's any specific context I should consider, but since the user didn't mention anything, it's best to keep it general. Alright, time to put it all together into a natural response.
</think>

Hey! How can I help you today? 😊 What can I do for you?

>>> howdy!
Error: template: :42:11: executing "" at <.Thinking>: can't evaluate field Thinking in type *api.Message

Relevant log output


OS

Windows, Linux

GPU

Nvidia

CPU

No response

Ollama version

0.8.0

GiteaMirror added the bug label 2026-04-29 04:27:10 -05:00

@rick-github commented on GitHub (May 29, 2025):

It looks like some of the models have had their template updated in preparation for https://github.com/ollama/ollama/pull/10584. But that hasn't been integrated yet, so the template parsing fails. @drifkin


@drifkin commented on GitHub (May 29, 2025):

I've reverted the template changes temporarily; they were intended to be backwards compatible. @Ctrl-Alt-Calvin could you help me repro? Did you run this using the CLI? If so, interactive or not? Or did you use the API?


@drifkin commented on GitHub (May 29, 2025):

oops sorry, was able to repro with a simple chat on the CLI, I was testing it wrong. Fix incoming! Thanks for reporting!


@jmorganca commented on GitHub (May 29, 2025):

This should be fixed now. Thanks for the issue!

Reference: github-starred/ollama#53672