[GH-ISSUE #10921] Fine-tune Model does not output <think> #53698

Open
opened 2026-04-29 04:33:29 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @EntropyYue on GitHub (May 31, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10921

I use a qwen3 model fine-tuned with unsloth or llamafactory in ollama, and it does not output the `<think>` tag; it only generates the thinking content, the `</think>` tag, and the answer.
Oddly, when I use llama.cpp, it works normally.
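For context, Qwen3-style chat templates often pre-fill the opening `<think>` tag in the prompt so the model itself only emits the closing `</think>`; if the serving layer does not echo the pre-filled tag back, the client sees exactly the unbalanced output described above. A minimal client-side workaround sketch (the function name and behavior are assumptions for illustration, not part of Ollama's API):

```python
def normalize_think_tags(output: str) -> str:
    """Re-insert a missing opening <think> tag.

    If the output contains a closing </think> with no matching
    opening <think> (the symptom described in this issue), prepend
    the opening tag so downstream parsers see a balanced pair.
    """
    if "</think>" in output and "<think>" not in output:
        return "<think>" + output
    return output


# Example: an output shaped like the one reported in the issue
raw = "The user asks about X...</think>The answer is Y."
print(normalize_think_tags(raw))
```

This only patches the symptom on the client; the underlying difference is presumably in how ollama's chat template handles the pre-filled tag compared to llama.cpp.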


Reference: github-starred/ollama#53698