[GH-ISSUE #14798] qwen3-vl:8b missing thinking toggle template (think: false ignored) #35318

Closed
opened 2026-04-22 19:45:33 -05:00 by GiteaMirror · 1 comment

Originally created by @tankbottoms on GitHub (Mar 12, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14798

Description

The qwen3-vl:8b model ships with a bare {{ .Prompt }} template that lacks the $.IsThinkSet / $.Think thinking-control logic present in the qwen3:8b model template. As a result, "think": false in API calls is silently ignored for qwen3-vl:8b, while it works correctly for qwen3:8b.

qwen3-vl:8b template (current)

{{ .Prompt }}

No ChatML structure, no thinking toggle -- just a raw prompt passthrough.

qwen3:8b template (correct)

Full ChatML with $.IsThinkSet, $.Think, /think and /no_think toggle, and proper <think> block handling.
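For reference, the toggle logic in the qwen3 template has roughly this shape (a hedged sketch in Go template syntax, not the shipped template; the real one is longer and the exact conditions may differ):

```
{{- /* sketch only: when thinking is explicitly disabled, the qwen3
       template appends the /no_think soft switch and prefills an
       empty <think> block in the assistant turn */ -}}
{{- if and $.IsThinkSet (not $.Think) -}}
/no_think
{{- end -}}
```

The qwen3-vl:8b template contains none of this, so the renderer has nowhere to apply the `think` flag.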

Impact

When using the chat API with "think": false, qwen3-vl:8b ignores the flag entirely. The full token budget is consumed by thinking output, leaving the actual response empty for any non-trivial prompt.

Example: A 4096 num_predict token budget produces 17,054 characters of thinking and 0 characters of response content.
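The thinking/answer split can be measured directly from a chat response by separating the `<think>...</think>` spans from the rest of `message.content`. A minimal sketch (the response body below is illustrative, not a real capture):

```python
import json
import re

# Hypothetical /api/chat response body (illustrative values only)
raw = json.dumps({
    "model": "qwen3-vl:8b",
    "message": {"role": "assistant",
                "content": "<think>thinking text here</think>\n\n4"},
    "done": True,
})

content = json.loads(raw)["message"]["content"]

# Everything inside <think> blocks counts as thinking output
thinking = "".join(re.findall(r"<think>(.*?)</think>", content, re.DOTALL))
# Everything outside them is the actual answer
answer = re.sub(r"<think>.*?</think>", "", content, flags=re.DOTALL).strip()

print(f"thinking: {len(thinking)} chars, answer: {len(answer)} chars")
```

Running the same split over a real qwen3-vl:8b response with "think": false is how the 17,054 / 0 character figure above was characterized.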

Steps to Reproduce

1. Confirm the template difference

# qwen3-vl:8b -- bare template
curl -s http://localhost:11434/api/show -d '{"name":"qwen3-vl:8b"}' | jq -r '.template'
# Output: {{ .Prompt }}

# qwen3:8b -- full ChatML template with thinking control
curl -s http://localhost:11434/api/show -d '{"name":"qwen3:8b"}' | jq -r '.template'
# Output: Full template with $.IsThinkSet, $.Think, /no_think logic
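Step 1 can be automated with a small check on the template string returned by `/api/show`. A sketch (it only looks for the thinking-control variable, which is enough to distinguish the two templates):

```python
def has_thinking_toggle(template: str) -> bool:
    """Return True if a model template appears to carry thinking-control
    logic. The qwen3 template references $.IsThinkSet; the bare
    qwen3-vl:8b template ({{ .Prompt }}) does not."""
    return "IsThinkSet" in template

# The bare qwen3-vl:8b template fails the check
print(has_thinking_toggle("{{ .Prompt }}"))  # False
```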

2. Send a chat request with think: false

curl -s http://localhost:11434/api/chat -d '{
  "model": "qwen3-vl:8b",
  "messages": [{"role": "user", "content": "What is 2+2? Answer briefly."}],
  "think": false,
  "stream": false,
  "options": {"num_predict": 4096}
}'

Actual result: The response message.content is dominated by <think>...</think> blocks consuming the full token budget. The actual answer is empty or truncated.

Expected result: With "think": false, the model should skip thinking and return a direct response, as qwen3:8b does.

3. Compare with qwen3:8b (works correctly)

curl -s http://localhost:11434/api/chat -d '{
  "model": "qwen3:8b",
  "messages": [{"role": "user", "content": "What is 2+2? Answer briefly."}],
  "think": false,
  "stream": false,
  "options": {"num_predict": 4096}
}'

This correctly returns a direct answer without thinking blocks.

Workaround

Using raw: true with explicit ChatML formatting, appending /no_think to user messages, and prefilling <think>\n\n</think>\n\n in the assistant turn works correctly:

curl -s http://localhost:11434/api/chat -d '{
  "model": "qwen3-vl:8b",
  "messages": [
    {"role": "user", "content": "What is 2+2? Answer briefly. /no_think"},
    {"role": "assistant", "content": "<think>\n\n</think>\n\n"}
  ],
  "raw": true,
  "stream": false,
  "options": {"num_predict": 4096}
}'
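The message rewriting in the workaround can be wrapped in a small helper that appends `/no_think` to the last user turn and prefills the empty `<think>` block. A sketch (the function name is hypothetical; the field names follow the Ollama chat API):

```python
def apply_no_think_workaround(messages):
    """Rewrite a chat message list for the raw-mode no-think workaround:
    append the /no_think soft switch to the last user turn and prefill
    an empty <think> block in a trailing assistant turn."""
    out = [dict(m) for m in messages]
    for m in reversed(out):
        if m["role"] == "user":
            m["content"] = m["content"].rstrip() + " /no_think"
            break
    out.append({"role": "assistant", "content": "<think>\n\n</think>\n\n"})
    return out

msgs = apply_no_think_workaround(
    [{"role": "user", "content": "What is 2+2? Answer briefly."}]
)
print(msgs)
```

The resulting list can be posted with "raw": true as in the curl example above.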

Expected Behavior

qwen3-vl:8b should ship with the same ChatML template structure as qwen3:8b, including full $.IsThinkSet / $.Think thinking toggle support. Both models are Qwen3-family and support the same thinking/no-thinking modes.

Version Info

  • Ollama 0.16.2 (tested on spark-1)
  • Ollama 0.16.3 (tested on spark-2)
  • Models: qwen3-vl:8b, qwen3:8b
  • OS: Linux (ARM64, DGX Spark)
GiteaMirror added the thinking label 2026-04-22 19:45:33 -05:00
@rick-github commented on GitHub (Mar 12, 2026):

qwen3-vl comes in two variants, thinking and instruct (non-thinking): https://huggingface.co/collections/Qwen/qwen3-vl. If you want to disable thinking, use the instruct model qwen3-vl:8b-instruct (https://ollama.com/library/qwen3-vl:8b-instruct).

Reference: github-starred/ollama#35318