[PR #12248] [MERGED] add qwen3-coder tool support #12486

Closed
opened 2025-11-12 16:37:01 -06:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/12248
Author: @drifkin
Created: 9/11/2025
Status: Merged
Merged: 9/16/2025
Merged by: @drifkin

Base: main ← Head: drifkin/qwen3-coder-parsing


📝 Commits (2)

  • 4799194 — add qwen3-coder tool support
  • 472feec — address comments

📊 Changes

15 files changed (+2012 additions, -57 deletions)


📝 api/types.go (+11 -11)
model/parsers/parsers.go (+37 -0)
model/parsers/qwen3coder.go (+410 -0)
model/parsers/qwen3coder_test.go (+830 -0)
model/renderers/qwen3coder.go (+217 -0)
model/renderers/qwen3coder_test.go (+338 -0)
model/renderers/renderer.go (+26 -0)
📝 openai/openai.go (+26 -21)
📝 parser/parser.go (+7 -3)
📝 parser/parser_test.go (+28 -0)
📝 server/create.go (+2 -0)
📝 server/images.go (+21 -2)
📝 server/prompt.go (+24 -13)
📝 server/routes.go (+33 -5)
📝 server/routes_debug_test.go (+2 -2)

📄 Description

The format qwen3-coder uses is relatively unique, both in rendering and in parsing. To implement parsing, I wrote a custom parser in a similar style to harmony. For rendering, I found that the logic would be much more difficult to follow in a template, so I introduced the concept of a built-in renderer that uses Go code, rather than a template, to generate prompts.

I set us up for future built-in parsers and renderers by allowing them to be specified in a Modelfile like so:

RENDERER "qwen3-coder"
PARSER "qwen3-coder"

These need to be provided explicitly because the architecture alone is not enough to determine what format the model expects as input and what format we expect it to output (e.g., qwen3-coder's architecture is qwen3moe, which is shared with other qwen3-family models).
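In a full Modelfile, the new directives sit alongside the usual instructions; the base-model path below is illustrative:

```
FROM ./qwen3-coder.gguf
RENDERER "qwen3-coder"
PARSER "qwen3-coder"
```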

I haven't converted harmony to be one of these "built-ins" yet, since some of it is in flux with the changes @ParthSareen has been making to move harmony to the runner. It is likely that many other built-ins will need to move to the runner as well, but I can slightly defer that decision since qwen3-coder doesn't have thinking (and therefore doesn't need to be in the runner to make structured outputs work). I expect to unify harmony with this approach very soon.

Whether a particular model supports tools or thinking was previously inferred from templates; without a template, we now also use the parser itself to declare what it supports. If we have future models that reuse the same parsing format but have different capabilities, we'll want to parameterize them and give them different names to be specified as a PARSER.
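One way to picture a parser declaring its own capabilities, rather than having them inferred from a template. All names here (`Capability`, `Parser`, `parserFor`) are hypothetical and not the PR's actual API:

```go
package main

import "fmt"

// Capability captures what a model's output format supports. Names are
// hypothetical sketches, not the PR's real types.
type Capability string

const (
	CapabilityTools    Capability = "tools"
	CapabilityThinking Capability = "thinking"
)

// Parser is a sketch of a built-in parser that reports its own capabilities.
type Parser interface {
	Capabilities() []Capability
}

type qwen3CoderParser struct{}

func (qwen3CoderParser) Capabilities() []Capability {
	// qwen3-coder emits tool calls but has no thinking trace.
	return []Capability{CapabilityTools}
}

// parserFor looks up the built-in parser registered under a PARSER name.
func parserFor(name string) (Parser, bool) {
	parsers := map[string]Parser{
		"qwen3-coder": qwen3CoderParser{},
	}
	p, ok := parsers[name]
	return p, ok
}

func main() {
	if p, ok := parserFor("qwen3-coder"); ok {
		fmt.Println(p.Capabilities())
	}
}
```

Under this shape, a future model that shares the parsing format but differs in capabilities would register under a new name with a different `Capabilities` result, matching the parameterization note above.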

Misc changes:

  • I worked on the renderer by diffing outputs from the reference implementation against ours. To make this easier, I extended https://github.com/ollama/ollama/pull/11875 to also support returning the prompt via the OpenAI compat layer

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2025-11-12 16:37:01 -06:00
Reference: github-starred/ollama-ollama#12486