[GH-ISSUE #10914] Add hide thinking to API #69238

Closed
opened 2026-05-04 17:32:46 -05:00 by GiteaMirror · 20 comments
Owner

Originally created by @noinformationavailable on GitHub (May 30, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10914

  • A `--hidethinking` option has also been added to the CLI. This makes it easy to use thinking in scripting scenarios like `ollama run qwen3 --think --hidethinking "my question here"`, where you just want to see the answer but still want the benefits of thinking models.

Would this also be possible to add to the API?

GiteaMirror added the feature request label 2026-05-04 17:32:46 -05:00

@rick-github commented on GitHub (May 30, 2025):

Just ignore the thinking field.
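A minimal client-side sketch of that suggestion (the helper name `answer_only` is made up; the JSON shape matches the 0.9.0+ `/api/generate` replies shown in this thread):

```python
def answer_only(reply: dict) -> str:
    """Return only the model's answer from an Ollama /api/generate reply.

    With "think": true on 0.9.0+, the reasoning arrives in a separate
    "thinking" key; hiding it client-side is just a matter of never
    reading that key.
    """
    return reply.get("response", "")


# Trimmed example payload in the shape shown in this thread:
reply = {
    "model": "qwen3",
    "response": "Hello! How can I assist you today?",
    "thinking": "Okay, the user said hello...",
    "done": True,
}
print(answer_only(reply))  # the "thinking" text never appears
```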


@kekePower commented on GitHub (May 31, 2025):

Does Ollama strip away the empty <think> ... </think> that Qwen3 produces when /no_think is set?


@rick-github commented on GitHub (May 31, 2025):

With 0.9.0+ and `think` enabled, yes.

```console
$ curl -s localhost:11434/api/generate -d '{"model":"qwen3","prompt":"hello","think":true,"stream":false}' | jq 'del(.context)'
{
  "model": "qwen3",
  "created_at": "2025-05-31T21:54:06.908882389Z",
  "response": "Hello! 😊 How can I assist you today? If you have any questions or need help with something, feel free to ask!",
  "thinking": "Okay, the user said \"hello /think\". First, I need to respond appropriately. Since they used a slash, maybe they're testing if I can handle that. I should acknowledge their greeting and ask how I can assist them. Keep it friendly and open-ended. Let me make sure the response is welcoming and invites them to ask questions. Also, check for any possible typos or misunderstandings. Alright, that should cover it.\n",
  "done": true,
  "done_reason": "stop",
  "total_duration": 1850634822,
  "load_duration": 317618286,
  "prompt_eval_count": 11,
  "prompt_eval_duration": 4961120,
  "eval_count": 120,
  "eval_duration": 1527532641
}
$ curl -s localhost:11434/api/generate -d '{"model":"qwen3","prompt":"hello","think":false,"stream":false}' | jq 'del(.context)'
{
  "model": "qwen3",
  "created_at": "2025-05-31T21:54:14.818087675Z",
  "response": "Hello! How can I assist you today? 😊",
  "done": true,
  "done_reason": "stop",
  "total_duration": 503930904,
  "load_duration": 339161940,
  "prompt_eval_count": 17,
  "prompt_eval_duration": 10673381,
  "eval_count": 12,
  "eval_duration": 153417495
}
```

@kekePower commented on GitHub (May 31, 2025):

Perfect. Thanks for the clarification. This makes my code a bit cleaner and easier to maintain. Great work!


@kekePower commented on GitHub (May 31, 2025):

It seems like Ollama may be looking for something very close to qwen3, so if I use hf.co/unsloth/Qwen3-8B-GGUF:Q6_K, for example, it doesn't strip away the <think> tags.

Can you verify?


@rick-github commented on GitHub (Jun 1, 2025):

The ollama thinking API requires support in the template. The template that comes with the unsloth model does not have the appropriate changes. Compare the output of ollama show --template qwen3 and ollama show --template hf.co/unsloth/Qwen3-8B-GGUF:Q6_K. You can merge them to get an unsloth model that supports thinking:

```console
$ ollama show --modelfile hf.co/unsloth/Qwen3-8B-GGUF:Q6_K | grep "^FROM" > Modelfile
$ ollama show --modelfile qwen3 | grep -v "^FROM" >> Modelfile
$ ollama create Qwen3-8B-GGUF:Q6_K
```

```console
$ ollama run Qwen3-8B-GGUF:Q6_K hello --think=true
Thinking...
Okay, the user said "hello /think". I need to respond appropriately. Let me check the guidelines. I should be friendly and offer help. Maybe start with a greeting and ask how I can assist. Keep it simple
and welcoming. Let me make sure there's no markdown and the response is natural.
...done thinking.

Hello! How can I assist you today? 😊

$ ollama run Qwen3-8B-GGUF:Q6_K hello --think=false
Hello! How can I assist you today? 😊
```

```console
$ for t in true false ; do curl -s localhost:11434/api/generate -d '{"model":"Qwen3-8B-GGUF:Q6_K","prompt":"hello","think":'$t',"stream":false}' | jq 'del(.context)' ; done
{
  "model": "Qwen3-8B-GGUF:Q6_K",
  "created_at": "2025-06-01T00:58:19.648324733Z",
  "response": "Hello! 😊 How can I assist you today? Whether you have questions, need help with something, or just want to chat, feel free to let me know!",
  "thinking": "Okay, the user said \"hello /think\". I need to respond appropriately. First, I should acknowledge their greeting. Since they included \"/think\", maybe they wanted me to think about the response. I should keep it friendly and open-ended. Let me make sure to greet them back and invite them to ask questions or talk about anything they're interested in. Keep it simple and welcoming.\n",
  "done": true,
  "done_reason": "stop",
  "total_duration": 4927192708,
  "load_duration": 284590705,
  "prompt_eval_count": 11,
  "prompt_eval_duration": 72895843,
  "eval_count": 117,
  "eval_duration": 4569172319
}
{
  "model": "Qwen3-8B-GGUF:Q6_K",
  "created_at": "2025-06-01T00:58:20.484850995Z",
  "response": "Hello! How can I assist you today? 😊",
  "done": true,
  "done_reason": "stop",
  "total_duration": 826875483,
  "load_duration": 284848194,
  "prompt_eval_count": 17,
  "prompt_eval_duration": 100904433,
  "eval_count": 12,
  "eval_duration": 440443649
}
```

@kekePower commented on GitHub (Jun 1, 2025):

@rick-github Thanks. I truly appreciate your detailed answer. This is gold and I've been experimenting. The results are awesome.


@kekePower commented on GitHub (Jun 3, 2025):

I've been experimenting with OLLAMA_NEW_ENGINE and there are differences.

With it set to true:

```console
% for t in true false ; do curl -s localhost:11434/api/generate -d '{"model":"qwen3:30b-custom","prompt":"hello","think":'$t',"stream":false}' | jq 'del(.context)' ; done
{
  "model": "qwen3:30b-custom",
  "created_at": "2025-06-03T05:40:45.185679334Z",
  "response": "Hello! How can I assist you today? 😊",
  "thinking": "Okay, the user said \"hello\". I should respond politely. Let me check the guidelines. I need to be friendly and offer help. Maybe say \"Hello! How can I assist you today?\" That sounds good. Make sure it's welcoming and open-ended. Alright, that should work.\n",
  "done": true,
  "done_reason": "stop",
  "total_duration": 17094206537,
  "load_duration": 13009408357,
  "prompt_eval_count": 11,
  "prompt_eval_duration": 586835276,
  "eval_count": 75,
  "eval_duration": 3496821973
}
{
  "model": "qwen3:30b-custom",
  "created_at": "2025-06-03T05:40:46.371605328Z",
  "response": "<think>\n\n</think>\n\nHello! How can I assist you today? 😊",
  "done": true,
  "done_reason": "stop",
  "total_duration": 807185440,
  "load_duration": 26625756,
  "prompt_eval_count": 19,
  "prompt_eval_duration": 191120436,
  "eval_count": 16,
  "eval_duration": 588806414
}
```

And set to false:

```console
% for t in true false ; do curl -s localhost:11434/api/generate -d '{"model":"qwen3:30b-custom","prompt":"hello","think":'$t',"stream":false}' | jq 'del(.context)' ; done
{
  "model": "qwen3:30b-custom",
  "created_at": "2025-06-03T05:50:34.156842577Z",
  "response": "Hello! How can I assist you today? 😊",
  "thinking": "Okay, the user sent \"hello\". I need to respond appropriately. First, I should acknowledge their greeting. Maybe say \"Hello!\" back. Then, offer assistance. Let them know I'm here to help with any questions or tasks. Keep it friendly and open-ended. Make sure the tone is welcoming. Avoid any complex language. Just a simple, polite response. Check for any possible misunderstandings. They might be testing the response or just starting a conversation. Either way, a straightforward reply should work. Alright, that should cover it.\n",
  "done": true,
  "done_reason": "stop",
  "total_duration": 17206678777,
  "load_duration": 10191172757,
  "prompt_eval_count": 11,
  "prompt_eval_duration": 563598790,
  "eval_count": 125,
  "eval_duration": 6450456133
}
{
  "model": "qwen3:30b-custom",
  "created_at": "2025-06-03T05:50:35.383598977Z",
  "response": "Hello! How can I assist you today? 😊",
  "done": true,
  "done_reason": "stop",
  "total_duration": 901041769,
  "load_duration": 29927025,
  "prompt_eval_count": 17,
  "prompt_eval_duration": 224969692,
  "eval_count": 12,
  "eval_duration": 645493081
}
```

I found a slight performance increase when using the new engine, that's why I am using it.


@rick-github commented on GitHub (Jun 3, 2025):

Indeed, the new engine has some issues dealing with thinking. qwen3 is the only thinking model with an updated template on both engines, so it's hard to say whether the problem is model-specific or engine-specific.

```console
$ ollama run qwen3:30b hello --think=false
<think>

</think>

Hello! How can I assist you today? 😊
```

@drifkin


@drifkin commented on GitHub (Jun 3, 2025):

Thanks for reporting. I'm able to repro: thinking set to `false` doesn't work on the new engine but works on the old one. @mxyng I think this is due to the special token issue we spoke about earlier; the `<think>` and `</think>` tokens aren't being tokenized correctly as special tokens. I verified that manually hardcoding them as special in vocabulary.go like so

```go
if slices.Contains([]int{151668, 151667}, i) {
	v.special = append(v.special, v.Values[i])
}
```

does seem to address the problem. What's the right way of fixing this?


@Runlevel-zero commented on GitHub (Jun 3, 2025):

Sorry to repeat the original question:

In a case where ignoring the think tags in the response is not possible, can the think tags be suppressed via the API when model thinking is still desired, in the same vein as the CLI `--hidethinking` argument?


@rick-github commented on GitHub (Jun 3, 2025):

How is ignoring the thinking field in the response not a solution?


@khromalabs commented on GitHub (Jun 9, 2025):

> How is ignoring the `thinking` field in the response not a solution?

To add to what has been discussed here: there are two endpoints for generating completions, `/api/generate` and `/api/chat`. I'm personally interested in the second, since it's the one usable for multi-turn conversation, but the thinking output of a reasoning model is still mixed into the content there; only the first has split output so far:

```console
$ curl -s 127.0.0.1:11434/api/chat -d '{"model":"qwen3:4b","messages":[{"role":"user","content":"Hi there","think":false}], "stream":false}' | jq
{
  "model": "qwen3:4b",
  "created_at": "2025-06-09T09:19:55.741542045Z",
  "message": {
    "role": "assistant",
    "content": "<think>\nOkay, the user said \"Hi there.\" That's a greeting. I need to respond politely. Let me think of a friendly reply. Maybe \"Hello! How can I assist you today?\" That sounds good. It's welcoming and opens the door for them to ask questions. I should keep it simple and positive. No need for any complex phrases. Just a straightforward, cheerful response. Alright, that should do it.\n</think>\n\nHello! How can I assist you today? 😊"
  },
  "done_reason": "stop",
  "done": true,
  "total_duration": 5422985914,
  "load_duration": 15376457,
  "prompt_eval_count": 10,
  "prompt_eval_duration": 399473393,
  "eval_count": 101,
  "eval_duration": 5007411245
}
```

@rick-github commented on GitHub (Jun 9, 2025):

`think` doesn't go in `messages`.

```console
$ curl -s 127.0.0.1:11434/api/chat -d '{"model":"qwen3:4b","messages":[{"role":"user","content":"Hi there"}], "stream":false, "think":false}' | jq
{
  "model": "qwen3:4b",
  "created_at": "2025-06-09T09:37:55.290014057Z",
  "message": {
    "role": "assistant",
    "content": "Hello! How can I assist you today? 😊"
  },
  "done_reason": "stop",
  "done": true,
  "total_duration": 29655763150,
  "load_duration": 29350531828,
  "prompt_eval_count": 18,
  "prompt_eval_duration": 141943347,
  "eval_count": 12,
  "eval_duration": 161467620
}
```

If the requirement is that the model is allowed to think but the thinking output is hidden, then just enable think but ignore the thinking field in the response:

```console
$ curl -s 127.0.0.1:11434/api/chat -d '{"model":"qwen3:4b","messages":[{"role":"user","content":"Hi there"}], "stream":false, "think":true}' | jq
{
  "model": "qwen3:4b",
  "created_at": "2025-06-09T09:39:18.856682344Z",
  "message": {
    "role": "assistant",
    "content": "Hello! How can I assist you today? 😊",
    "thinking": "Okay, the user said \"Hi there\". I need to respond appropriately. Let me think about the best way to greet them. Maybe start with a friendly greeting. I should keep it warm and welcoming. Also, I should ask how I can assist them. Let me make sure the tone is positive and open. I should avoid any formal language but stay professional. Let me check for any possible mistakes. Hmm, that sounds good. I'll go with that.\n"
  },
  "done_reason": "stop",
  "done": true,
  "total_duration": 2138852444,
  "load_duration": 551998490,
  "prompt_eval_count": 12,
  "prompt_eval_duration": 27709923,
  "eval_count": 109,
  "eval_duration": 1558389354
}
```
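The examples above put `think` at the top level of the request, as a sibling of `messages`. That shape can be captured in a small payload builder (a sketch; the helper name is made up):

```python
import json


def chat_payload(model: str, user_text: str, think: bool) -> str:
    """Build an Ollama /api/chat request body.

    "think" is a top-level field next to "messages", not a key inside
    an individual message.
    """
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
        "stream": False,
        "think": think,  # top level, alongside "messages"
    })


print(chat_payload("qwen3:4b", "Hi there", think=False))
```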


@khromalabs commented on GitHub (Jun 9, 2025):

> `think` doesn't go in `messages`.

Ok, just noticed something: if the `think` argument is omitted, the `<think>..</think>` text still appears inside `message.content`; if it's added, then it behaves as you describe (v0.9.0 here). PS. And you are right, I misplaced the `think` argument inside the message.


@rick-github commented on GitHub (Jun 9, 2025):

Yes, the default behaviour when `think` is not set is to return the same response as pre-0.9.0 versions of ollama. This is so that client code written to deal with `<think></think>` continues to work.
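For clients stuck with that legacy-shaped output, the tags can also be stripped after the fact. A rough sketch, assuming the reasoning arrives as a single `<think>...</think>` block (possibly empty) at the start of the content:

```python
import re

# One <think>...</think> block plus any trailing whitespace, as seen in
# pre-0.9.0-style message content.
_THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)


def strip_think(content: str) -> str:
    """Remove a legacy <think> block from message content, if present."""
    return _THINK_RE.sub("", content, count=1)


print(strip_think("<think>\n\n</think>\n\nHello! How can I assist you today?"))
```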


@powerman commented on GitHub (Jul 2, 2025):

@rick-github Is anything like this supported on /v1/chat/completions endpoint? I'm using a Neovim plugin which uses that endpoint and looking for a way to disable <think> tags from output.


@rick-github commented on GitHub (Jul 2, 2025):

Unfortunately not. If you are using a qwen3-based model, you can create a new model with a system message that contains `/no_think`.

You could deploy a [proxy](https://docs.litellm.ai/docs/proxy/call_hooks) that adds `think: false` to the payload and forwards it to the Ollama API endpoint.

Other solutions that require modifying ollama:

  • [Modify](https://github.com/ollama/ollama/pull/11249) the API endpoint to support `think`
  • [Add](https://github.com/ollama/ollama/issues/10961) the ability to set the `think` flag in the Modelfile.
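The proxy option reduces to rewriting the JSON body before forwarding it to Ollama; the forwarding itself is ordinary HTTP, so only the rewrite step is sketched here (hypothetical helper, assuming JSON request bodies):

```python
import json


def force_think_off(body: bytes) -> bytes:
    """Rewrite a proxied request body so "think": false is present,
    unless the client already chose a value, then re-serialize the
    payload for forwarding to the Ollama endpoint."""
    payload = json.loads(body)
    payload.setdefault("think", False)
    return json.dumps(payload).encode()


print(force_think_off(b'{"model": "qwen3", "messages": []}'))
```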

@powerman commented on GitHub (Jul 2, 2025):

`/no_think` does not help; it results in a response message starting with empty `<think> </think>` tags.

PR #11249 looks really promising (the Neovim plugin I'm using provides a way to add custom params to the HTTP request), I hope it'll be merged!


@rick-github commented on GitHub (Jul 2, 2025):

I think the likelihood of it being merged is slim: ollama developers have expressed the view that they will only support features of the published OpenAI API spec, and `options` is not a field in that spec.

Reference: github-starred/ollama#69238