Mirror of https://github.com/ollama/ollama.git (synced 2026-05-06 16:11:34 -05:00)
Closed · opened 2026-05-04 18:58:45 -05:00 by GiteaMirror · 64 comments
Originally created by @d1g1t on GitHub (Aug 1, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11621
What is the issue?
As per https://github.com/QwenLM/Qwen3-Coder/blob/main/README.md, both Tool calling and FIM usage are supported
Current template - https://ollama.com/library/qwen3-coder:30b/blobs/c6a614465b37
Relevant log output
OS
No response
GPU
No response
CPU
No response
Ollama version
No response
@nicolasembleton commented on GitHub (Aug 1, 2025):
Yes, I can confirm.
@awaescher commented on GitHub (Aug 1, 2025):
This is probably why qwen3-coder can be selected in the Copilot model dropdown in Visual Studio Code but then doesn't appear in the model selection afterwards.
@Ramzee-S commented on GitHub (Aug 1, 2025):
Same/similar problem here.
In Goose it's not working, with the Ollama tool error:
request failed: registry.ollama.ai/library/qwen3-coder:latest does not support tools (type: api_error) (status 400)
In Anon Kode (an open-source Claude Code derivative that was taken down via DMCA, but still installable with npm i -g anon-kode) it fails with:
Failed to configure provider: init chat completion request with tool did not succeed. API Error: API request failed:
registry.ollama.ai/library/qwen3-coder:latest does not support tools
However, when I use Ollama through the Void editor, qwen3-coder shows up and works (although not great).
Maybe this is related to:
https://github.com/langflow-ai/langflow/issues/8805
https://github.com/langflow-ai/langflow/issues/8805#issuecomment-3024294926
If so, this is all because qwen3-coder is not in the Langflow list and therefore "officially" does not support tools.
How do we update that list? Do we really need it? Is there some temporary workaround, like disabling the official tool requirement but still using tools anyway?
@pminervini commented on GitHub (Aug 1, 2025):
Why is there no tool-use support in the Ollama version? https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct
@0xn3bs commented on GitHub (Aug 1, 2025):
Having the same issue in Goose with the Ollama version:
Request failed: registry.ollama.ai/library/qwen3-coder:latest does not support tools (type: api_error) (status 400)
@d1g1t commented on GitHub (Aug 1, 2025):
As a temporary workaround for tools, using the template from one of the non-thinking qwen3 models works fine.
Such as https://ollama.com/library/qwen3:30b-a3b-instruct-2507-fp16/blobs/636353bf6b2f
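A minimal Modelfile sketch of this workaround (the FROM line and model names here are illustrative assumptions; the template body itself must be copied from the linked qwen3 instruct blob):

```
# Hypothetical Modelfile: reuse the qwen3 instruct template on top of the
# qwen3-coder weights, as the workaround above describes.
FROM qwen3-coder:30b
# Paste the full template body from the linked qwen3 instruct blob here:
TEMPLATE """<qwen3 instruct template body>"""
```

Build it with something like `ollama create qwen3-coder-tools -f Modelfile` (the derived model name is illustrative).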
@Wangyiquan95 commented on GitHub (Aug 1, 2025):
same issue for VS code continue: "registry.ollama.ai/library/qwen3-coder:latest does not support tools"
@hoborobott commented on GitHub (Aug 1, 2025):
same error
@0xn3bs commented on GitHub (Aug 2, 2025):
I tried this. It sort of works, but I likely missed something. Would you be able to provide a gist with your modelfile?
@d1g1t commented on GitHub (Aug 2, 2025):
I used the template I linked as-is. I only tested whether it actually called the tools with that template, not much more. I pulled the qwen3-coder model from Ollama and replaced the contents of the template file in the blob directory:
sha256-c6a614465b370a1b4eb95e964f907a24396a5bb842eab6cc730f2cc4c309dc48
I tried copying over the FIM stuff from qwen2.5-coder by trial and error. It kind of worked, but code autocomplete seemed much worse than the smaller 2.5-coder base models, and it had issues with indentation (I'm not familiar with Go templates, so there could easily be mistakes there).
I was hoping the default template would be updated, or that someone who knew what they were doing would share the right template here.
@Ramzee-S commented on GitHub (Aug 2, 2025):
I have tried (with LLM help) to modify the template. I can get it to the point where it no longer gives the tool error, so it runs. But agents then sort of crash in the middle of using the tools; there is no normal continuous flow like usual.
I have tried hybrid templates from regular Qwen3 and Qwen2.5-coder, but without much success. I hope someone who really knows more about these templates can help. How are these templates generated? Do they come from Alibaba/Qwen, or are they written by the Ollama team, or the llama.cpp team?
@moll commented on GitHub (Aug 2, 2025):
I've seen this with multiple models and my current understanding is that it's because Ollama insists on parsing tool calls and filtering out non-existent ones. What probably happened is that the model hallucinated a tool and, Ollama having filtered it out, the conversation stopped abruptly. It'd probably be better if Ollama didn't filter out non-existent calls and instead let us API users inform the model it should retry.
@javier-lasheras commented on GitHub (Aug 2, 2025):
My Modelfile (use ollama create to derive a working model): I suppose it's similar to @d1g1t's.
To test it I modified the fill-in-middle.py and multi-tool.py examples from the ollama-python repo, and it looks good to me. Can anyone else test it? I'm not an expert either.
@minzanupam commented on GitHub (Aug 3, 2025):
The template provided above didn't work for me, so I created my own. I hope this is helpful.
@d1g1t commented on GitHub (Aug 3, 2025):
If tool calls still feel off, it may be because Qwen3-Coder expects tool definitions and calls in an XML format rather than JSON (https://github.com/ggml-org/llama.cpp/issues/15012).
Template on the Qwen3-Coder model page https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct?chat_template=default
The regular Qwen3 does not have this (https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507?chat_template=default)
@junzhang-bjtu commented on GitHub (Aug 3, 2025):
https://ollama.com/SimonPu/Qwen3-Coder:30B-Instruct_Q4_K_XL/blobs/84f542b6545d
You can try this template.
@javier-lasheras commented on GitHub (Aug 3, 2025):
I asked Qwen Coder chat for a solution, and it gave me some complaints about Go template limitations when migrating Jinja templates: https://chat.qwen.ai/s/eee02f5c-757c-455f-913f-c6b46a174fef?fev=0.0.170
In any case, a big if/else/end block needs to be added for FIM support.
I haven't verified it and don't have time right now to prepare good test cases. I hope it's useful for someone.
@minzanupam commented on GitHub (Aug 3, 2025):
So all we can do is wait until the llama.cpp team adds a custom parser for qwen3-coder-30b-a3b, and then this issue will be fixed in Ollama?
@fenris commented on GitHub (Aug 3, 2025):
$ curl -s localhost:11434/api/show -d '{"model":"qwen3-coder:30b-a3b-q8_0"}' | jq .capabilities
[
"completion"
]
$ ollama create qwen3-coder-fix:30b-a3b-q8_0 -f Modelfile
gathering model components
using existing layer sha256:1a8a72d99ed2b27bcc69ca3c0c858487a52202e94cf924a92ba99e0816b9c014
using existing layer sha256:d18a5cc71b84bc4af394a31116bd3932b42241de70c77d2b76d69a314ec8aa12
using existing layer sha256:1d68e259ca720b7d3b2256e2890861064652662b87705713ddcba09e42deee79
using existing layer sha256:84ffc241f2d42ea4ecfe4d552a250babfbca958aabc7cf5d16ec6fef0b7fcaae
writing manifest
success
$ curl -s localhost:11434/api/show -d '{"model":"qwen3-coder-fix:30b-a3b-q8_0"}' | jq .capabilities
[
"completion",
"tools"
]
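The capability check in the transcript above can be scripted. A hedged sketch (the helper name is mine; in practice you would fetch the JSON with `curl -s localhost:11434/api/show -d '{"model":"qwen3-coder:30b"}'` against a running server):

```shell
# Sketch: decide from /api/show JSON whether a model advertises tool support.
caps_have_tools() {
  # $1: the JSON body returned by /api/show
  printf '%s' "$1" | grep -q '"tools"'
}

# Sample responses mirroring the before/after capabilities shown above:
before='{"capabilities":["completion"]}'
after='{"capabilities":["completion","tools"]}'
caps_have_tools "$before" && echo "before: tools" || echo "before: no tools"
caps_have_tools "$after" && echo "after: tools" || echo "after: no tools"
```

A plain substring match is enough here because /api/show only mentions "tools" inside the capabilities array.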
thanks !
@bradleyandrew commented on GitHub (Aug 4, 2025):
I was able to partially get this working by using the attached modelfile, which is similar to the @javier-lasheras and @minzanupam modelfiles detailed above. For reference, this was created as follows:
ollama create qwen3-coder-with-tools-32k -f path/to/modelfile/qwen3-coder.modelfile
This allows Qwen3 Coder to run in Qwen Code.
However, when you give it a task, it seems unable to actually execute the tools.
[Screenshot: Qwen output in terminal]
I assume that this is the LLM returning tool output:
{"name": "write_file", "arguments": {"file_path": "/qwen-test/index.html", "content":However Qwen Code does not execute it or prompt the user to create the file and commit it to disk. I have read that qwen3-coder returns tool output in XML not JSON so I suspect this is what the issue is here. It seems there is some work on converting XML to JSON via llama.cpp happening but not sure when/if this will come to Ollama. From what I have read there is currently no solution and it seems that Ollama + qwen3-coder:30b + Qwen Code CLI is not usable at this stage.
Modelfile:
qwen3-coder-modelfile.zip
@pminervini commented on GitHub (Aug 4, 2025):
I got Qwen Code to work with qwen3-coder by just using LM Studio (beta channel), with:
OPENAI_MODEL=qwen/qwen3-coder-30b qwen -y --openai-api-key apikey --openai-base-url http://localhost:1234/v1
So I guess it boils down to how tool calls are parsed and returned by the endpoint.
@FranBarInstance commented on GitHub (Aug 5, 2025):
I have managed to get it to work partially in VS Code with Copilot in Agent and Edit mode, using this model (19 GB): https://ollama.com/library/qwen3-coder:30b
The template:
Spaces seem to be important:
Context size:
OLLAMA_CONTEXT_LENGTH=8192
Optional:
/set parameter num_ctx 32768
But it's terribly slow.
@Marabii commented on GitHub (Aug 6, 2025):
I also managed to make it work, but it's unbelievably slow, even though I have a good computer (RTX 5080 and 32 GB of RAM). Plain chat works without issues and is as fast as it should be.
@johnnysn commented on GitHub (Aug 8, 2025):
I got this Jinja template from the Unsloth GGUF file for the Qwen3-Coder-30B-A3B-Instruct model. It has interesting instructions, such as suggesting that the model reason in natural language before tool calls, which will probably improve effectiveness in agentic coding tasks. Does anybody know how to port this to an Ollama modelfile? I couldn't make it work yet...
@minzanupam commented on GitHub (Aug 8, 2025):
@jcx
@johnnysn
Qwen3-coder has a problem: its tool-calling syntax changed from JSON to a custom XML-like format, and llama.cpp doesn't support it yet:
ggml-org/llama.cpp#15012
You can still get it to work somehow with JSON by altering the templates, but it is very unreliable.
I think this is why tools are currently not in the qwen3-coder default template.
But the llama.cpp contributors are working on it and have a draft PR ready, so it shouldn't be too long before this issue also gets fixed:
ggml-org/llama.cpp/pull/15162
@gordo1337 commented on GitHub (Aug 11, 2025):
Getting the same problem in Goose : Request failed: registry.ollama.ai/library/qwen3-coder:latest does not support tools (type: api_error) (status 400)
@nicolasembleton commented on GitHub (Aug 13, 2025):
You can simply use the Unsloth versions from Huggingface, for example:
More options are in the Unsloth collection on Hugging Face; filter by the model size you're looking for, then use the "Use model with" button at the top-right to get the ollama command.
@Codelica commented on GitHub (Aug 13, 2025):
@nicolasembleton I assume they are actually working for you in Agent/tool mode? I downloaded one to try a couple of days ago and it did appear as an option in the Agent model list, but output from prompts was returned and displayed as JSON like the following:
@nicolasembleton commented on GitHub (Aug 16, 2025):
I think it depends on the tools you use. Some agentic coding tools will work better than others depending on the model, as well as the temperature, system prompt, etc. The way models return tool use differs from one model to another, so the tools need to support each format.
@SMFloris commented on GitHub (Aug 18, 2025):
I made a little tool that fixes tool use. It mostly works - sometimes I cannot get the LLM to call the tools at all. Sometimes it reverts to the json format for no apparent reason.
https://github.com/SMFloris/ollama-qwen3-coder-proxy
I also tried the Modelfile approach.
Both approaches are kinda the same in terms of quality of output and tool use. The proxy approach seems just a tad bit more stable and triggers tools much more reliably.
@DXXS commented on GitHub (Aug 19, 2025):
Just got this myself, attempting to use the creator's qwen-code with a local v1 endpoint:
[API Error: 400 registry.ollama.ai/library/qwen3-coder:30b does not support tools]
Apparently the model is no longer recognizable to its own creator's tooling!?
@alperakgun commented on GitHub (Aug 21, 2025):
EDIT3: I have added the strict prompting instructions in Modelfile - and got better results.
EDIT2: It's not really running any better :( . SEE MY LATEST COMMENT https://github.com/QwenLM/Qwen3-Coder/issues/475#issuecomment-3210807876
EDIT1: Linking to the main comment https://github.com/QwenLM/Qwen3-Coder/issues/475#issuecomment-3210746025
Here's how I re-edited Unsloth's UD-Q4_K_XL GGUF to be able to run tools more reliably:
@neurostream commented on GitHub (Aug 22, 2025):
Also seeing this with qwen3-coder:480b (there is no "instruct"-named model: https://ollama.com/library/qwen3-coder:480b), and "tools" is not listed in the capabilities.
@pminervini commented on GitHub (Aug 22, 2025):
with llama.cpp and lm studio, everything works like a charm
@robbiemu commented on GitHub (Aug 30, 2025):
Don't know if it would interest anyone, but I think I have tool calling working well in Ollama with my own template (I am using Unsloth's version, based on discussions in llama.cpp): https://ollama.com/robbiemu/qwen3-coder:30b-a3b-i-q4_K_XL
@djholt commented on GitHub (Aug 31, 2025):
I need to put Ollama aside for now over this issue. llama.cpp and LM Studio are both handling tool use with qwen3-coder flawlessly.
@fuwh617 commented on GitHub (Sep 1, 2025):
https://docs.unsloth.ai/basics/qwen3-coder-how-to-run-locally#tool-calling-fixes
@djholt commented on GitHub (Sep 1, 2025):
Unsloth's template fixes work great in llama.cpp.
@robbiemu commented on GitHub (Sep 1, 2025):
IMO, the issue largely boils down to:
If you want something more faithful to Unsloth's than the template in the model I published, I think this is about as close to best-effort as possible without changing the Ollama template syntax.
@fuwh617 thank you... I think I will go update my model tomorrow (it's late now).
@neurostream commented on GitHub (Sep 4, 2025):
How does a model get from its origin with the Qwen org on Hugging Face to the ollama.com registry? I'd love to see an official qwen3-coder model with full tool use capability on ollama.com.
Who published https://ollama.com/library/qwen3-coder ? Is this the right place to file this issue?
@awaescher commented on GitHub (Sep 23, 2025):
It has been nearly 8 weeks and still no one has updated the manifest to reflect tool capabilities?
@drifkin commented on GitHub (Sep 23, 2025):
We recently added first class tool support for both qwen3-coder models in v0.12.0 (with some improvements coming in v0.12.1 very soon). This was implemented in https://github.com/ollama/ollama/pull/12248 via a custom renderer and parser specifically for qwen3-coder (the format is a bit specialized).
Be sure to upgrade Ollama and pull the model (e.g., ollama pull qwen3-coder:30b). Once you do that, it should have tool support, which you can verify via ollama show qwen3-coder by looking under "Capabilities".
Here's a quick example call you can run:
qwen3-coder-tool-call.sh
Response:
We had a bug on the site that hid the fact that these models now support tools, but it's fixed. We were planning to put the announcement in the 0.12.1 release notes, but consider this message a sneak peek.
I'll close out this issue now, but open a separate one for FIM to investigate (EDIT: opened at #12387). It might just be doable in "user-space" since the qwen3-coder examples show it being just a message that happens to have special tokens in it, but I want to test it end-to-end in a few different ways.
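The attached qwen3-coder-tool-call.sh itself isn't mirrored here, but a hedged sketch of such a call against Ollama's /api/chat might look like this (the get_weather tool is invented for illustration; the payload fields follow Ollama's documented chat API):

```shell
# Hypothetical tool-call payload for Ollama's /api/chat endpoint.
payload='{
  "model": "qwen3-coder:30b",
  "stream": false,
  "messages": [{"role": "user", "content": "What is the weather in Toronto?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the current weather for a city",
      "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"]
      }
    }
  }]
}'
echo "$payload"
# Against a running server:
#   curl -s localhost:11434/api/chat -d "$payload"
# A tool-capable model should answer with message.tool_calls rather than prose.
```

If the model lacks the tools capability, the same request returns the 400 "does not support tools" error quoted throughout this thread.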
@Fhrozen commented on GitHub (Sep 24, 2025):
Just to report (not sure if someone else has already tried):
Using a Docker image updated 10 hours ago, I deleted and re-downloaded the model.
ollama show qwen3-coder shows:
Now VS Code lists it:
But it cannot be used as an agent:
Did VS Code need any additional setup?
@ozdang commented on GitHub (Sep 24, 2025):
For those who want to convert a GGUF file and register it in Ollama, please refer to the following when writing your Modelfile:
Starting from version 0.12.0, it seems that Ollama handles the TEMPLATE automatically via RENDERER and PARSER. From what I found in the issues, it's still unstable for now, so we'll need to wait and see a bit longer. I briefly tried converting from GGUF on version 0.12.1.
@drifkin commented on GitHub (Sep 24, 2025):
Thanks @ozdang. Yeah, the intention is for you to be able to use RENDERER qwen3-coder and PARSER qwen3-coder with whatever weights you want.
I'd expect the parser to be relatively stable and generally useful; it's quite possible we'll need to tune some escaping ambiguities. So I'm curious which issues you ran into, and definitely want to run them down ASAP.
@awaescher commented on GitHub (Sep 24, 2025):
That's way more than I expected, thanks a lot @drifkin and team
@neurostream commented on GitHub (Sep 24, 2025):
@drifkin thank you!!!
https://ollama.com/library/qwen3-coder now shows TOOLS!
also, thanks for the bash shell script instead of needing an extra python/golang setup step for this type of quick test! :))
@maks commented on GitHub (Oct 20, 2025):
@drifkin This doesn't seem to have been fixed.
I deleted and pulled qwen3-coder today.
and then:
yet:
And sure enough, when I try to use qwen3-coder with opencode, it reports that the model doesn't support tool calling. If I try it with qwen3, the tool calling works as expected (note I set both models to a 16k context when testing with opencode, because it requires larger contexts).
@BradKML commented on GitHub (Oct 24, 2025):
Can someone help out here? https://github.com/charmbracelet/crush/issues/447#issuecomment-3443074261
@drifkin commented on GitHub (Oct 24, 2025):
can you show me the output of
`ollama show qwen3-coder:latest` and also `ollama show qwen3-coder --modelfile | grep -C2 RENDERER`? qwen3-coder isn't actually using that dummy template; it should be using our newer built-in renderer and parser. When it uses that parser it should have the tools capability
@maks commented on GitHub (Oct 27, 2025):
Sorry for the slow reply @drifkin .
Here is the output I get, is this what is expected?
@rcillo commented on GitHub (Oct 27, 2025):
@maks one thing I noticed is that when making changes to parameters, such as `/set parameter num_ctx 16384`, as many opencode users do, after saving these changes with `/save qwen3-coder:opencode`, the `tools` capability is gone. It's no longer there in `ollama show qwen3-coder:opencode`.
Before: `ollama show qwen3-coder:latest`
After: `ollama show qwen3-coder:opencode`
Something happens when saving the change to parameters that erases `tools` from the list of capabilities for this particular model. I was able to make these changes for other models, such as `qwen3`, without issues. That might explain your frustration. It's not that @drifkin didn't fix it; it's that it's still partially broken (unable to update parameters).
@drifkin commented on GitHub (Oct 27, 2025):
Thanks so much @rcillo! I can repro that easily. I have a suspicion about what it is, I'll verify later today and get a fix pushed.
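The repro above can be condensed into a short interactive session (model tags as in @rcillo's comment; the missing capability is the symptom to look for):

```
$ ollama run qwen3-coder:latest
>>> /set parameter num_ctx 16384
>>> /save qwen3-coder:opencode
>>> /bye
$ ollama show qwen3-coder:opencode
# before the fix: "tools" is missing from the Capabilities section
```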
@pminervini commented on GitHub (Oct 27, 2025):
it's been ~3 months and qwen3-coder is still broken in ollama, this is awkward -- how is someone supposed to use it over llama.cpp or lm studio?
@drifkin commented on GitHub (Oct 27, 2025):
@pminervini: are you running into trouble with this model? qwen3-coder is generally working very well; I think in the past few replies we've discovered a bug with model saving, which I'll get fixed quickly
@drifkin commented on GitHub (Oct 27, 2025):
was able to repro and fix in https://github.com/ollama/ollama/pull/12793, will try to get that in shortly. Thanks again for reporting @rcillo and @maks (I suspect it's the same root cause for you too, since you said you increased the context size for the model?)
@maks commented on GitHub (Oct 27, 2025):
awesome, thanks @drifkin! 🎉 Yes, I did increase the context size for the model, but I think I was also seeing this when trying to use the "original" model that I downloaded. Looking at your PR, it seems this fix is only for the bug where modified models are created using `/save`, but I'll double-check and test with the original model here to be sure I'm remembering correctly.
@rcillo commented on GitHub (Oct 28, 2025):
hi @maks, just updating here on this thread: the problem is solved on `main` (https://github.com/ollama/ollama/pull/12793#issuecomment-3456316404). Kudos to @drifkin for the fast fix.
@ramarivera commented on GitHub (Nov 15, 2025):
Hey @drifkin 👋🏻
Just trying to understand what the fix for this issue is, and how to check whether we have it or not. Should I just update to the latest Ollama, and it should work?
@drifkin commented on GitHub (Nov 15, 2025):
yes, and re-pull the model as well (some metadata may have changed since you originally pulled it):
`ollama pull qwen3-coder`
@ramarivera commented on GitHub (Nov 15, 2025):
Same thing unfortunately after updating ollama and the model :(
Output in case it's useful:
@drifkin commented on GitHub (Nov 15, 2025):
that all looks correct, could you open a new issue with some details on how you're trying to use it and what's happening?
@ramarivera commented on GitHub (Nov 15, 2025):
Done @drifkin https://github.com/ollama/ollama/issues/13093