Mirror of https://github.com/ollama/ollama.git (synced 2026-05-06 16:11:34 -05:00)
Open · opened 2026-05-05 02:43:43 -05:00 by GiteaMirror · 46 comments
Originally created by @KeuntekLee on GitHub (Apr 4, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15315
Originally assigned to: @drifkin on GitHub.
What is the issue?
After updating to Ollama 0.20.1, which was supposed to fix the Gemma 4 tool parsing error (#15254), Ollama still shows errors on tool calling.
Using gemma4:e4b with opencode, oh-my-opencode, and Ollama 0.20.1.
Relevant log output
OS: Linux
GPU: Nvidia
CPU: AMD
Ollama version: 0.20.1
@Habitante commented on GitHub (Apr 4, 2026):
Root cause analysis
The underlying issue is that Gemma 4's tool call format is not JSON — it's a custom serialization:
Rules:
- String values are delimited by `<|"|>` (special token 52), not regular `"` quotes.
- Content between `<|"|>` delimiters can contain any characters: double quotes, single quotes, backticks, backslashes, newlines, etc.

The current parser appears to do a naive `<|"|>` → `"` replacement and then call `json.Unmarshal`. This breaks whenever the string content contains characters that aren't valid unescaped JSON — which is exactly what both error messages in this issue show:
- ``invalid character '`'`` — backtick inside a markdown code block argument
- `invalid character '\''` — single quote inside a regex pattern argument like `':\s*\w+'`

PR #15255 fixed one case (internal double quotes), but the problem is more fundamental: you can't convert Gemma 4's format to JSON with string replacement alone, because the content between `<|"|>` delimiters is arbitrary text that needs proper JSON string escaping (backslashes, control chars, etc.) before it can be placed between `"` quotes.

What a correct parser looks like

vLLM's `gemma4_tool_parser.py` has a working reference implementation. Their approach:
1. Parse `call:name{...}` into `key:value` pairs with a custom tokenizer that understands `<|"|>` as string delimiters.
2. For string values (between `<|"|>` delimiters), capture the raw content.
3. Properly escape that content for JSON before assembling the result with `json.dumps()` / `json.Marshal`.

The critical step is #3 — the raw content between `<|"|>` delimiters must be JSON-escaped (`\n` → `\\n`, `"` → `\"`, `\` → `\\`, etc.) before being placed into a JSON string value.

Reproduction (minimal, no app needed)
This reliably triggers the bug, because `print("hello world")` contains double quotes that generate `<|"|>` escaping.

Expected: `tool_calls` with `print("hello world")`.
Actual: empty content, `done_reason: "stop"`, and no `tool_calls` field. The WARN log shows parsing failed.

Contrast with a quote-free argument that works fine: the same request with `print(2 + 2)` returns a valid `tool_calls` response, because the argument has no internal quotes.

Additional failure modes from systematic testing: `print(2 + 2)` works, while `def fibonacci(n): ...` and `print("hello world")` fail. The pattern: any tool argument containing characters that need JSON escaping will fail. Simple numeric expressions work; anything with string literals, regex patterns, shell commands, or multi-line code fails.
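The escaping gap described above can be sketched in a few lines (an illustration of the principle only, not vLLM's or Ollama's actual code; the handling of a single delimited value is simplified):

```python
import json

DELIM = '<|"|>'

# Raw model output for one string argument, as captured between <|"|> markers.
content = DELIM + 'print("hello world")' + DELIM

# Naive approach: replace the delimiter with a plain quote. The inner
# double quotes are left unescaped, so the result is not valid JSON.
naive = content.replace(DELIM, '"')
try:
    json.loads(naive)
    naive_ok = True
except json.JSONDecodeError:
    naive_ok = False  # fails: the inner quotes terminate the string early

# Correct approach: capture the raw content, then JSON-escape it with
# json.dumps (which handles ", \, newlines, control chars, etc.).
raw = content[len(DELIM):-len(DELIM)]
escaped = json.dumps(raw)

assert not naive_ok
assert json.loads(escaped) == 'print("hello world")'
```

The same round-trip breaks for any raw content containing quotes, backslashes, or newlines when only string replacement is used, which matches the failure pattern listed above.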
References

vLLM's `tool_parsers/gemma4_tool_parser.py` and `gemma4_utils.py`.
@drifkin commented on GitHub (Apr 6, 2026):
Thanks for reporting. These look like the model emitted invalid tool calls (e.g., `<|"|>` quotes missing around the value).
Sometimes very small models will make mistakes in tool calling, but perhaps there are some other issues at play; we'll investigate. Depending on what we find, we may make the tool parser try to repair these bad calls. Thanks again for reporting — the logs were extremely helpful.
@AbeEstrada commented on GitHub (Apr 6, 2026):
What is the issue?
Error while calling a tool. I'm using `gemma4:26b-a4b-it-q4_K_M`. I hope this is relevant and can help fix this issue.
Relevant log output
OS: macOS 15.7.4
GPU: M2 Max
CPU: M2 Max
Ollama version: 0.20.2
@AdriRayns commented on GitHub (Apr 6, 2026):
I've also been dealing with this error all weekend while running the 31b model. I ran a lot of tests to see if I could adjust the behavior using the Modelfile, stops, templates, temperature, and a prompt to "force" it to make the tool calls correctly, but nothing worked. I saw that there were open issues over the weekend, and I figured it was something common among other users.
@BaneusCatrix commented on GitHub (Apr 6, 2026):
Same:
time=2026-04-06T19:16:04.667Z level=WARN source=gemma4.go:293 msg="gemma4 tool call parsing failed" error="invalid character '<' looking for beginning of value" content="call:bash{command:<|\"|>ls}"
ollama: 0.20.2
model: gemma4:26b
os: cachyos
gpu: Nvidia RTX 4080
@aalzehla commented on GitHub (Apr 6, 2026):
same issue here
MacBook Pro M1
LM Studio 0.4.9+1
Model: gemma4:26b
@AdriRayns commented on GitHub (Apr 7, 2026):
I can confirm that the new version 0.20.3 resolves this issue (which was discussed at https://github.com/ollama/ollama/issues/15241)
I think this issue should be closed.
@kevincox commented on GitHub (Apr 7, 2026):
Forgive me if I am missing something, but isn't this being handled at the wrong layer? IIUC, Gemma 4 has a specific token for tool call arguments, but this token is currently being mapped to the text `<|"|>`, and then this text is parsed for occurrences of that replacement string.
This means the argument parsing will also interpret other tokens that "evaluate" to `<|"|>` as tool call argument delimiters, which will cause issues if the delimiter appears inside the tool call.
It seems that Google did something really clever here to make the delimiters clear and unambiguous, but the way Ollama is handling this defeats that by doing the logic after converting from tokens to strings rather than before.
So the new patch probably "mostly works" but isn't fully correct.
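A small illustration of the ambiguity being described (toy token representation, not Ollama's internals): once tokens are flattened to text, a literal `<|"|>` inside user content is indistinguishable from the real delimiter token, while token-level parsing keeps them apart.

```python
DELIM = '<|"|>'

def extract_string_level(text):
    """Naive string-level parse: split flattened text on the delimiter."""
    parts = text.split(DELIM)
    return parts[1] if len(parts) >= 3 else None

def extract_token_level(tokens):
    """Token-level parse: only real delimiter tokens open/close a string."""
    inside, out = False, []
    for kind, *val in tokens:
        if kind == "DELIM":
            if inside:
                return "".join(out)
            inside = True
        elif inside:
            out.append(val[0])
    return None

# User content that literally contains the delimiter text:
flat = DELIM + 'say <|"|> please' + DELIM
assert extract_string_level(flat) == 'say '  # truncated at the fake delimiter

# At the token level, the literal text is just an ordinary TEXT token:
toks = [("DELIM",), ("TEXT", 'say <|"|> please'), ("DELIM",)]
assert extract_token_level(toks) == 'say <|"|> please'
```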
@kevincox commented on GitHub (Apr 7, 2026):
Also, I'm still getting issues with tool calls on 0.20.3. It may be a touch better, but it still fails very often on `gemma4:e4b`.
@drifkin commented on GitHub (Apr 7, 2026):
Hi @kevincox, you're right that operating at the token level rather than the string level is more correct, and I'm similarly very happy about this new special-token approach; escaping is one of the trickiest issues for models, and I think this will ultimately help a lot. Switching everything to token-based parsing would be fairly invasive, but it is on my list of things to investigate. However, I don't think that this is the root cause of any of the issues in this thread, or of any failed tool calls I've looked at so far; they've all been from actually malformed tool calls. (Though I'm sure I could easily repro the token-vs-string problem by trying to use gemma4 to work on this gemma4 parser, for example!)
Thanks for the additional two logs, I'll work on a repair for those as well. Notice that they're not ambiguous, but rather malformed: the value is missing the opening `<|"|>`, but has a closing one.
@drifkin commented on GitHub (Apr 7, 2026):
@kevincox: I took a closer look, and those examples are more malformed than I thought. They're actually missing the tool name itself (e.g., if the intended tool name was `bash`, then `call:bash{command:<|"|>ls -F web/<|"|>,description:<|"|>...<|"|>}`).
I wonder if you can provide more example logs, to see whether there are any other repairable cases or whether they're all in this very malformed state? Or if you have a straightforward repro, that would be helpful too.
@kevincox commented on GitHub (Apr 7, 2026):
I can't seem to find any super simple cases. I'm mostly using opencode, I'm trying to figure out what logging is available there. I found a different case here:
It does seem unexpected that tool calling is so inconsistent. I know it is a fairly small model but it seems that this behaviour should be well baked in. I'll try to gather more evidence and post soon.
@drifkin commented on GitHub (Apr 7, 2026):
Thanks so much! Super helpful. I'll see if I can repro in opencode as well.
@kevincox commented on GitHub (Apr 8, 2026):
Opencode's instructions to enable debug logging don't seem to work, so I don't know how to get a reproducible example. But I'm trying to get a working example with just the Ollama API. For now I'll log failed tool calls in this message.
@kevincox commented on GitHub (Apr 8, 2026):
Ok, found a relatively small example:
@kevincox commented on GitHub (Apr 8, 2026):
This has a different failure mode:
@kevincox commented on GitHub (Apr 8, 2026):
@drifkin commented on GitHub (Apr 8, 2026):
@kevincox thanks so much! I was able to repro (though for me I had to use a different seed).
These look much more repairable, so I'll take care of these at least.
@drifkin commented on GitHub (Apr 8, 2026):
One thing that's interesting about all these examples is that `"tools"` is either not provided, or it contains invalid entries (e.g., the first one has a valid tool definition for `write_file`, but then `glob` is missing the `{ "type": "function", "function": { ... } }` wrapper).
Beyond us needing better warnings instead of silently failing to parse unexpected tool shapes, I wonder if this could be the main source of these remaining malformed tools: if the model is told to make a tool call but doesn't "see" the definition of such a tool, it starts hallucinating and is worse at following its trained format. Your final example is particularly helpful, since it only defines the tool in the system message rather than in the format it expects.
If this is the case, and you're still seeing issues with opencode, I wonder if there's perhaps a bug somewhere that's dropping some of their tool definitions (or they're incorrect?). Or it could just be that this is an easier form to try to get a repro for.
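For reference, here is what a fully-wrapped tool definition looks like (a sketch: the `glob` tool and its `pattern` parameter are illustrative, not taken from the logs above). The malformed entries being described are bare function objects missing the outer `{"type": "function", "function": {...}}` layer:

```python
# Illustrative "glob" tool definition in the fully-wrapped shape:
# each entry in "tools" is {"type": "function", "function": {...}},
# with arguments declared under function.parameters.properties.
glob_tool = {
    "type": "function",
    "function": {
        "name": "glob",
        "description": "Find files matching a glob pattern",
        "parameters": {
            "type": "object",
            "properties": {
                "pattern": {
                    "type": "string",
                    "description": "Glob pattern, e.g. '**/*.go'",
                },
            },
            "required": ["pattern"],
        },
    },
}

# The malformed variant from the discussion: the bare function object,
# with no {"type": "function", "function": ...} wrapper around it.
bare_glob = glob_tool["function"]
```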
@kevincox commented on GitHub (Apr 8, 2026):
Those are surely not the case with opencode. That was just me trying to make an API call too quickly. It seems that it doesn't really matter if the definition is there or not. I noticed the error when I switched to trying llama.cpp (which doesn't seem to have tool call issues at all, but I need more testing to say for sure). So I would assume it is a red herring.
@kevincox commented on GitHub (Apr 8, 2026):
Yeah, found a case with a valid tool call definition (I think)
Might have been coincidence but it did take a lot longer to find a seed. So maybe this makes the model less likely to fail the call.
@drifkin commented on GitHub (Apr 8, 2026):
The arguments belong under `function.parameters.properties` instead of `function.parameters.parameters`, so this example still only has the tool partially defined. Here's a corrected one:
@kevincox commented on GitHub (Apr 8, 2026):
Thanks for the fix. Some better validation here would definitely be a UX improvement :)
But after a bit of seed-searching I quickly found an example:
@kevincox commented on GitHub (Apr 8, 2026):
And a slightly cut-down case:
But I really don't think the specific examples are too important. It just shows that cases like this are easy to hit. I've also seen examples with other tool calls.
@warabe1122 commented on GitHub (Apr 9, 2026):
Environment
`/api/chat`
Reproduction
Using OpenClaw's writer agent with gemma4:26b to edit a ~15K-character Markdown file. The model generates correct edit content but the tool call JSON parsing fails every time.
Log output (Ollama 0.20.4)
Attempt 1
Attempt 2
In both cases the raw `content` field shows Gemma 4 using its native `<|"|>` string delimiters inside `call:edit{edits:[{newText:<|"|>...`
Key observation
The model IS generating semantically correct edits. The content inside `newText`/`oldText` is valid and relevant. The failure is in Ollama's `gemma4.go` parser not handling the `<|"|>` string delimiters that Gemma 4 uses in its native tool call format.
This is blocking all file-editing tool calls with gemma4:26b, making it unusable as an agentic writer despite the model's actual capability being fine.
Updated from 0.20.0 to 0.20.4, same issue persists.
@drifkin commented on GitHub (Apr 9, 2026):
Do you have an example raw call so I can check whether it is indeed well-formed? And if it is, then I can fix whatever gap the parser has (this would be the first well-formed example provided in this thread that misparses).
@warabe1122 commented on GitHub (Apr 10, 2026):
Thanks for jumping on this! Here are captured raw parser failures from my logs. Environment: `gemma4:26b`, `Q4_K_M`.
Note: I've since uninstalled gemma4:26b, so I can't capture fresh traffic right now, but the `content=` field in the warnings below is the exact text the parser was given. Happy to reinstall and capture a full `/api/chat` request/response pair on whichever pattern is most useful.
Pattern A — simple exec, embedded quotes
Semantically both are correct — `ls -F "vault/"` and the `summarize` invocation are exactly what I asked for. The gap looks specific to `<|"|>`-delimited string values that themselves contain `"`.
Pattern C — file edit, `edits:[{newText, oldText}, ...]`
This is the one that made gemma4:26b unusable as a writer agent. Note it's hitting `gemma4.go:299` and the error mentions the repair path, so it's going through repair and still failing:
Semantically this is a perfectly valid `{newText, oldText}` edit pair list targeting a markdown file. The gap looks like (a) `<|"|>`-delimited strings containing newlines and `"`, combined with (b) the array-of-objects shape around `edits:[...]`. I have multiple captures of this exact pattern — happy to drop the raw log lines in a gist if that helps.
Let me know what's most useful — a raw log gist, or a fresh `/api/chat` capture on Pattern A or C once I reinstall. Thanks for looking at this.
@drifkin commented on GitHub (Apr 11, 2026):
thanks @warabe1122 for the logs, I'll take a look at them soon. Earlier today Google released a set of changes to their Gemma 4 format, and we have a new prerelease up that incorporates those same changes: https://github.com/ollama/ollama/releases/tag/v0.20.6-rc0
I'd be curious if using this pre-release fixes some of the tool calling issues you've been running into.
@drifkin commented on GitHub (Apr 11, 2026):
@warabe1122: looking more closely at your logs, Pattern A is almost certainly fixed by Ollama 0.20.4, so I wonder if maybe you're still running a stale (i.e., 0.20.0) server? Running `ollama --version` can help check this.
Pattern C is still a problem (assuming v0.20.6-rc0 and later will still generate such a call); it appears to come from those newlines. I don't think the upstream parser would handle that either, but I'll either modify ours to accept it or make it part of the repair process; I don't see any reason not to. If you could post a full trace in a gist, it would be useful to make sure that the newline is the only issue. Thank you!
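The newline issue can be illustrated with a toy key matcher (hypothetical, not Ollama's actual parser): a multi-line string value leaves a newline in front of the next `key:`, which a strict matcher rejects but a whitespace-tolerant one accepts.

```python
import re

# Toy matcher for the next `key:` in a Gemma-4-style call body
# (illustrative only). The strict version requires the key to start
# immediately; the relaxed version skips leading whitespace first.
KEY = re.compile(r'([A-Za-z_][A-Za-z0-9_]*):')

def next_key(body, relaxed):
    if relaxed:
        body = body.lstrip()  # tolerate newlines/spaces before keys
    m = KEY.match(body)
    return m.group(1) if m else None

# A multi-line value pushes a newline in front of the next key:
body = '\n  oldText:'
assert next_key(body, relaxed=False) is None  # strict parsing fails here
assert next_key(body, relaxed=True) == 'oldText'
```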
@warabe1122 commented on GitHub (Apr 11, 2026):
You were right — I double-checked my log headers:
- `server-4.log`: `Listening on 127.0.0.1:11434 (version 0.20.0)` — this is where Pattern A came from (2026-04-03)
- `server-1.log`: `Listening on 127.0.0.1:11434 (version 0.20.4)` — this is where Pattern C came from (2026-04-09)
So Pattern A was on stale 0.20.0 and is consistent with being fixed by 0.20.4. Sorry for the noise there.
Pattern C is on 0.20.4 — full trace in a gist here:
https://gist.github.com/warabe1122/d2835d3c2b2adabcdc72c816b933b26e
It includes the raw log line, a decoded/pretty-printed version of the `content=` field so the structure is easy to read, and notes on what I think is going on (newlines inside `<|"|>`, plus the `edits:[ ... ]` array-of-objects shape). The edited file was a Japanese markdown doc, so the strings contain CJK + `**bold**` + embedded `"`, in case any of those matter.
I'll reinstall `gemma4:26b` on `v0.20.6-rc0` and try to reproduce Pattern C with a fresh `/api/chat` capture — I'll post back if I can get a clean trace on the prerelease. Thanks for looking into this!
@drifkin commented on GitHub (Apr 11, 2026):
Great, thanks so much! It wouldn't surprise me if it still repros; it might just be the model seeing a lot of newlines and then getting confused about the tool calling format itself, which doesn't usually have newlines. I have a fix up at https://github.com/ollama/ollama/pull/15494 that relaxes the parser to allow whitespace before tool call keys, which I expect to fix this issue. You could run that branch yourself via `go run . serve` if you're curious, but we should have a second release candidate up soon-ish that has that patch in it; I'll post here when it's ready.
@warabe1122 commented on GitHub (Apr 11, 2026):
Thanks, that matches what I was seeing. I'll hold off on building from the branch and wait for the next RC — happy to re-test Pattern C against it with a fresh `/api/chat` capture once it drops. Appreciate the quick turnaround!
@drifkin commented on GitHub (Apr 12, 2026):
@warabe1122: rc1 is up at https://github.com/ollama/ollama/releases/tag/v0.20.6-rc1
@emansom commented on GitHub (Apr 12, 2026):
@drifkin Consider looking at LiteRT-LM and how it parses and emits Gemma 4 stuff. It is the canonical reference implementation:
https://github.com/google-ai-edge/LiteRT-LM/blob/main/runtime/conversation/model_data_processor/gemma4_data_processor.cc
https://github.com/google-ai-edge/LiteRT-LM/blob/main/runtime/conversation/model_data_processor/gemma4_data_processor_test.cc
@warabe1122 commented on GitHub (Apr 12, 2026):
Tested on `v0.20.6-rc1` — Pattern C is fixed!

Environment
- `v0.20.6-rc1` (binary from release assets, running via `ollama serve`)
- `gemma4:26b` (fresh pull, ID `5571076f3d70`)

Test 1 — English, short multiline `edits:[...]`
Sent a `/api/chat` request with a tool schema matching the original failure: an `edit` tool with an `edits` array of `{newText, oldText}` pairs. The model returned a well-formed tool call with two edit pairs. The parser accepted it cleanly.

Test 2 — Japanese, long multiline `edits:[...]`
Same structure but with Japanese markdown content, `**bold**` markup, and multi-line `newText`/`oldText` values containing `\n` — this is the closest match to the original Pattern C failure. The model returned a well-formed tool call, and the parser accepted it.

Server log
Zero `gemma4 tool call parsing failed` warnings in the server log across both tests.

Looks like #15494 resolved it. Thanks for the quick turnaround @drifkin!
@CamJN commented on GitHub (Apr 14, 2026):
Using ollama 0.20.7:
With the js library:
The `response.message.content` is ```` "```json\n{\n "json details": "elided"\n}\n```" ````, which has erroneous `` ```json `` and `` ``` `` markers.
@CamJN commented on GitHub (Apr 14, 2026):
`gemma4:31b` has the same issue, and additionally does not follow the schema.
Can you open a different issue for that, @CamJN? I don't think you're doing anything with tool calls, right?
@emansom commented on GitHub (Apr 14, 2026):
@drifkin Could you explain why Ollama has specific Gemma 4 workarounds and testing when that already exists in the llama.cpp project? It seems like a duplication of effort to me, which could result in false positives and even harder-to-debug issues later.
Or is Ollama using a different parser? Different from the default PEG parser in llama.cpp? And why?
A fix in one place could leave things behaving differently, or broken, in the other.
Shouldn't llama.cpp be functional by default, with Ollama acting as a gateway rather than duct tape?
https://github.com/search?q=repo%3Aggml-org%2Fllama.cpp+gemma4&type=code
https://github.com/ggml-org/llama.cpp/pull/21760
https://github.com/ggml-org/llama.cpp/pull/21418
https://github.com/ggml-org/llama.cpp/pull/21326
@drifkin commented on GitHub (Apr 14, 2026):
Yup, we've used our own renderers and parsers for quite a while now; this all predates the parsers you're mentioning. We have several underlying inference engines, and having the parsers at a different layer is useful to us.
@emansom commented on GitHub (Apr 14, 2026):
I see. Appreciate the insight/context!
Are you aware of the LLGuidance support in llama.cpp (and other inference engines)?
https://github.com/ggml-org/llama.cpp/blob/master/docs/llguidance.md
Ollama could utilize that to achieve better performance and enforce better parsers, while reducing code complexity (Lark grammars vs. custom parsers).
I'm positive @mmoskal can concur.
@drifkin commented on GitHub (Apr 15, 2026):
Thanks for the links! We do want to do more with structured outputs/constrained decoding in the future, particularly for tool calls.
@emansom commented on GitHub (Apr 15, 2026):
I see @ParthSareen also had some thoughts about it
https://parthsareen.com/writings/sampling/
Seems it would be very beneficial for guiding tool calls:
https://github.com/guidance-ai/llguidance/blob/main/docs/toolcalls.md
@jk105jk105 commented on GitHub (Apr 17, 2026):
Hi, I'm still getting msg="gemma4 tool call parsing failed".
Full error
Also the invalid character is inconsistent.
Context: I'm trying to have gemma4 generate an Elasticsearch query that I can use in a search API tool call.
System
ollama: 0.21.0
os: linux
model: gemma4:eb4
Let me know if you need more information. Thanks!
@PureBlissAK commented on GitHub (Apr 18, 2026):
🤖 Automated Triage & Analysis Report
Issue: #15315
Analyzed: 2026-04-18T18:22:38.782881
Analysis
Implementation Plan
This issue has been triaged and marked for implementation.
@johnny-smitherson commented on GitHub (Apr 23, 2026):
yes
ollama version is 0.21.0