Mirror of https://github.com/ollama/ollama.git (synced 2026-05-08 17:49:24 -05:00)
Closed · opened 2026-04-22 12:58:42 -05:00 by GiteaMirror · 49 comments
Reference: github-starred/ollama#32077
Originally created by @DoiiarX on GitHub (Mar 12, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9680
Originally assigned to: @ParthSareen on GitHub.
What is the issue?
gemma3 lacks a function calling tag
Relevant log output
OS
No response
GPU
No response
CPU
No response
Ollama version
No response
@Marcisbee commented on GitHub (Mar 12, 2025):
4b+ supports vision. But I do not see that it actually supports tools, am I missing something? I get this:
{"error":"registry.ollama.ai/library/gemma3:4b does not support tools"}

@CesarPetrescu commented on GitHub (Mar 12, 2025):
I have the same issue while on https://blog.google/technology/developers/gemma-3/ it says:
"Create AI-driven workflows using function calling: Gemma 3 supports function calling and structured output to help you automate tasks and build agentic experiences."
Any way to fix it in ollama?
@rick-github commented on GitHub (Mar 12, 2025):
Function calling is not mentioned on the HuggingFace repos for gemma3 and the chat_template makes no mention of tools. The template can be modified to support a generic tool use capability, but if the model was actually tuned for tool use, it would be best to use a suitable template. Until Google releases the details I think it's a matter of rolling your own and hoping the results are good enough.

@rick-github commented on GitHub (Mar 12, 2025):
Parsing the tool calls has nothing to do with why gemma3 doesn't support tools.
@rick-github commented on GitHub (Mar 12, 2025):
phi4-mini was published 11 days ago.
ollama can be quite useful without tools, but these days it's certainly true that tool use expands the scope of deployment. Fortunately tool support is quite good, but there's always room for improvement, so we'll see what happens now that the 0.6 series has started.
@tripolskypetr commented on GitHub (Mar 12, 2025):
Tools are central to AI development because they are the base for Agent Swarm implementations. Without them, all the model can do is work in demo mode with obsolete data and without third-party integrations.
@rick-github commented on GitHub (Mar 12, 2025):
Then it's fortunate that ollama supports tool using models and has many third party integrations for use in deploying ollama based systems.
@joaquindas commented on GitHub (Mar 12, 2025):
The model card on HF also doesn't have a role for tools. Does the model inherently support function calls?
@rick-github commented on GitHub (Mar 12, 2025):
The blog post says it does, but neither the HF page nor the chat_template makes any indication of support.

@tripolskypetr commented on GitHub (Mar 12, 2025):
It is, try on LMStudio
The problem is the Ollama team does not even test whether tool calls work when publishing a model with the tools tag. For example, nemotron-mini got the tools tag but it does not call the tools. So they simply started to publish every model without the tools label, even if the model supports tool calls.
@joaquindas commented on GitHub (Mar 12, 2025):
It's a bit confusing bc the google team was the ones that uploaded the model to HF with relevant configs. Either they messed something up or we're missing something?
@joaquindas commented on GitHub (Mar 12, 2025):
I tried looking for it, but couldn't find it here. Double checking that it's not Gemma2 you're talking about?
@rick-github commented on GitHub (Mar 12, 2025):
Not available from the LM Studio library. Did you import from elsewhere?
nemotron-mini does support tools, see here.
phi4-mini was published 11 days ago and has a tools label.

@rick-github commented on GitHub (Mar 12, 2025):
I've read through the tech report and browsed their Kaggle, HF, and cloud.google sites and there are no concrete examples of tool use. I'm wondering if it's a feature of their AI Studio platform. I've probed the model for tool support and it seems to respond in the right way. I'll see if I can spin up a tool-using template, even if not in the format it might have been trained for.
@kucukkanat commented on GitHub (Mar 12, 2025):
@tripolskypetr this is an open source project. the model is an open source one. if you are unhappy stop shitmouthing, contribute or fork, or you can go "entertain" yourself
@tripolskypetr commented on GitHub (Mar 12, 2025):
This is exactly what I am talking about. The models published to ollama registry are fake: to use ollama you have to download them, fix them and upload them.
And it does not guarantee the model quality: you will definitely spend time writing your own system prompt for a model, but the model itself can be unusable.
As an open source contributor I am making these facts publicly available. People must know that the problem with the low quality of ollama models still exists and maintainers do nothing about it.
@rick-github commented on GitHub (Mar 12, 2025):
And yet many people are using the default models without a problem.
Prompt engineering is a skill that many developers need. However for the default case, the existing template seems to work fine. For example, the nemotron-mini model works with the default system prompt as shown here.
If there's a genuine problem with a model, the maintainers will fix it. Recently phi4 was not ready out of the gate and it was fixed in a few hours.
@rick-github commented on GitHub (Mar 12, 2025):
So the model is actually pretty good at generating tool calls, but not so great at processing the result of a tool call. The model doesn't have a tool role or tokens like ipython or <tool> to indicate to the model that it's getting generated data. Despite what the blog says, I'm leaning towards this model not having been trained in tool use.

@joaquindas commented on GitHub (Mar 13, 2025):
How are you testing this if the model doesn't have special tokens for tool inputs or outputs?
@rick-github commented on GitHub (Mar 13, 2025):
Getting the model to generate tool requests is straightforward; most models are capable of that with a change to the template. Processing tool call results is where the model struggles due to the lack of special tokens. I'm trying out a few variations and so far the results aren't great, but I might hit on the magic sauce, we'll see.
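The generic approach rick-github describes (prompting for tool calls via the template, then parsing the model's reply) can be sketched in plain Python. This is an illustrative sketch only: the function names, prompt wording, and JSON shape below are assumptions, not the actual gemma3 template or Ollama's internal parser.

```python
import json

def build_tool_prompt(tools):
    """Render a generic tool-use preamble to prepend to the system prompt
    of a model with no native tool tokens (wording is illustrative)."""
    lines = [
        "You have access to the following functions.",
        "To call one, reply ONLY with a JSON object of the form",
        '{"name": <function name>, "arguments": {...}}.',
        "",
    ]
    for tool in tools:
        lines.append(json.dumps(tool["function"]))
    return "\n".join(lines)

def try_parse_tool_call(reply):
    """Return (name, arguments) if the reply is a bare JSON tool call,
    otherwise None. Generating the request is the easy half; feeding
    results back to a model with no tool role is the hard half."""
    try:
        obj = json.loads(reply.strip())
    except json.JSONDecodeError:
        return None
    if isinstance(obj, dict) and "name" in obj and "arguments" in obj:
        return obj["name"], obj["arguments"]
    return None

# Example tool definition in the OpenAI-style shape Ollama's API accepts.
tools = [{"type": "function", "function": {
    "name": "get_weather",
    "description": "Look up current weather",
    "parameters": {"type": "object",
                   "properties": {"city": {"type": "string"}},
                   "required": ["city"]}}}]

prompt = build_tool_prompt(tools)
reply = '{"name": "get_weather", "arguments": {"city": "Berlin"}}'
print(try_parse_tool_call(reply))
```

Any plain reply ("Sure, the weather is sunny!") falls through to None, which is why a model that mixes prose into its tool calls looks like it "doesn't support tools".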
@ParthSareen commented on GitHub (Mar 13, 2025):
Hey everyone, the Deepmind team worked with us pre-launch and decided to hold off on allowing function calling at the moment. It's being looked into from their end and we'll update the model and modelfiles if that happens.
@maglat commented on GitHub (Mar 13, 2025):
Thank you for clarification. Was there any estimation when they plan to integrate function calling?
@tripolskypetr commented on GitHub (Mar 13, 2025):
As mentioned before, people are really waiting for a model with stable tool calling. I hoped this would be deepseek, but it was not.
@ParthSareen commented on GitHub (Mar 13, 2025):
Not aware of the timeline as of yet, but Ollama will support it as soon as there is official support. Will keep you all posted if there are any updates!
@eugene-kamenev commented on GitHub (Mar 13, 2025):
This template seems to work for tool calling and tool response handling. Tested with gemma3:27b, will try others tomorrow.
To test template rendering difference between jinja2 and Ollama I created a simple online tool: ollama-template-test.
@ParthSareen commented on GitHub (Mar 13, 2025):
Really cool work @eugene-kamenev! This is super neat!
@ParthSareen commented on GitHub (Mar 13, 2025):
Hey @tripolskypetr,
We've had this discussion before. We'll continue to focus on the support that the model makers have outlined. Which means that if what we're instructed about the model is that there are no tools we will follow that. If you have an issue with this you are welcome to create your own templates or tool calling prompts. This will not be discussed further.
@brenzel commented on GitHub (Mar 13, 2025):
I have successfully tested this ollama model with tools:
https://ollama.com/PetrosStav/gemma3-tools
@CesarPetrescu commented on GitHub (Mar 13, 2025):
Hello, for me gemma3 works with LMStudio, so it might be an ollama related issue.
https://ollama.com/PetrosStav/gemma3-tools didn't work on Ollama with Flowise for me.
@jmadden91 commented on GitHub (Mar 14, 2025):
This seems to work perfectly with home assistant assist tool calling
@DoiiarX commented on GitHub (Mar 14, 2025):
Works for me, thanks.
@CesarPetrescu commented on GitHub (Mar 14, 2025):
Update: https://ollama.com/PetrosStav/gemma3-tools works for me too, maybe at first I made an error myself. Now everything is fine!
@atoulmin commented on GitHub (Mar 14, 2025):
Hmmm still doesn’t work for me with https://ollama.com/PetrosStav/gemma3-tools
@oybekdevuz commented on GitHub (Mar 21, 2025):
Based on the PetrosStav/gemma3-tools solution I have just fixed command-r, which was also broken:
https://ollama.com/oybekdevuz/command-r
@mmb78 commented on GitHub (Mar 22, 2025):
I tried this with: PetrosStav/gemma3-tools:12b
from pydantic import BaseModel

class ImageDescription(BaseModel):
    title: str
    description: str
    keywords: list[str]

schema = ImageDescription.model_json_schema()
print(response)
ChatCompletion(id='chatcmpl-385', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Alpine Landscape\n\nAlpine, mountains, landscape, nature, trees, grass, sky, clouds, hills, valley, forest, meadow, wood, rural, outdoors, scenic, panorama, vegetation, foliage, peak, summit, elevation, tranquility, serenity, pastoral, idyllic, green, blue, wood cabin, fence, horizon, daytime.\n', refusal=None, role='assistant', annotations=None, audio=None, function_call=None, tool_calls=None))], created=1742604503, model='PetrosStav/gemma3-tools:12b', object='chat.completion', service_tier=None, system_fingerprint='fp_ollama', usage=CompletionUsage(completion_tokens=70, prompt_tokens=506, total_tokens=576, completion_tokens_details=None, prompt_tokens_details=None))
Of course "tool_calls" was empty.
The prompt (the same code) works for ChatGPT-4o-mini.
where the response looks like this (shortened):
ChatCompletion(id='', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content=None, refusal=None, role='assistant', annotations=[], audio=None, function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='', function=Function(arguments='{"title":"Man in Window of Wooden House","description":"A man is sitting in a window of a wooden house, smiling and holding an object. The house features a rustic wooden exterior with multiple windows, some of which have wooden shutters. The lower part of the house is painted white, contrasting with the dark wood above. There is a bench below the window and a grassy area in front.","keywords":["man","window","wooden house","shutters","smiling","holding object","rustic","exterior","white","bench","grassy area","multiple windows","dark wood","house","architecture","outdoor","nature","sitting","interior","scenery","view","facade","home","country","rural","landscape","summer","casual","clothing","happy"]}', name='image_info'), type='function')]))], created=1742604423, model='gpt-4o-mini-2024-07-18', object='chat.completion', service_tier='default', system_fingerprint='', usage=CompletionUsage(completion_tokens=163, prompt_tokens=25655, total_tokens=25818, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0)))
Any ideas how to use tools with Ollama properly in a similar way like OpenAI models?
Thank you!
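For the use case above (a guaranteed title/description/keywords object rather than a tool call), one alternative is Ollama's structured outputs, which accept a JSON schema in the chat request's format field. A rough sketch, assuming an Ollama version with structured-output support; the model name, request body, and the minimal checker below are illustrative, and the checker is not a full JSON-schema validator:

```python
import json

# Hand-written JSON schema equivalent to the ImageDescription pydantic
# model above, so this sketch has no third-party dependencies.
schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "description": {"type": "string"},
        "keywords": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["title", "description", "keywords"],
}

# Request body for Ollama's /api/chat endpoint; passing the schema in
# "format" asks the server to constrain the output (model name is an example).
body = {
    "model": "gemma3:12b",
    "messages": [{"role": "user", "content": "Describe the image."}],
    "format": schema,
    "stream": False,
}

def matches_schema(reply_json):
    """Minimal structural check on the model's reply string."""
    try:
        obj = json.loads(reply_json)
    except json.JSONDecodeError:
        return False
    return (isinstance(obj, dict)
            and isinstance(obj.get("title"), str)
            and isinstance(obj.get("description"), str)
            and isinstance(obj.get("keywords"), list))

sample = ('{"title": "Alpine Landscape", "description": "Mountain valley.",'
          ' "keywords": ["alpine", "mountains"]}')
print(matches_schema(sample))  # True
```

Constrained decoding sidesteps the missing tool tokens entirely: the model never has to emit a wrapper, only JSON matching the schema.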
@tripolskypetr commented on GitHub (Mar 22, 2025):
@mmb78 This repo contains several tool calling projects which can be used with ollama
https://github.com/tripolskypetr/agent-swarm-kit/blob/master/demo/cohere-token-rotate/src/logic/completion/ollama.completion.ts
@mmb78 commented on GitHub (Mar 22, 2025):
Sorry for a potentially stupid question .. but the extra template to make Gemma3 understand tools: is this something that the model receives when loaded into memory, or is it added to each prompt? My point is that I have a "system" part in my prompts; would that override this "template", or is that a separate set of instructions? Can one just add such a "template" to any model? How to do that?
@tripolskypetr commented on GitHub (Mar 22, 2025):
@mmb78
There are only two options for tool calls. The first is to patch the modelfile with these lines
Or inject this message on top of each conversation, like it is done in agent-swarm-kit. This is the easiest way to fix the tools.
@ParthSareen commented on GitHub (Mar 22, 2025):
Hi @mmb78,
I'd recommend trying out another model for tools https://ollama.com/search?c=tools
Gemma3 does not have official tool support as it was not trained for it. Hope this helps!
@mmb78 commented on GitHub (Mar 22, 2025):
Actually ... had quite a good success as explained here:
https://github.com/ollama/ollama/issues/9941#issuecomment-2745370597
@mmb78 commented on GitHub (Mar 23, 2025):
Thank you for your help!
I noticed one thing .. this template (which works well but not perfect for me):
https://ollama.com/PetrosStav/gemma3-tools:12b/blobs/dbb9d04f85fb
has this instruction:
However, some other templates instruct the LLM that the tool output of used tools should be wrapped differently (as mentioned above): https://github.com/ollama/ollama/issues/9680#issuecomment-2722586870
the key difference is
<tool> vs <tool_call>

I'm not sure how Ollama parses the LLM output to decide if it should return a successful "tool call" .. but this small difference may explain why it fails sometimes with this model:
https://ollama.com/PetrosStav/gemma3-tools
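The wrapper mismatch mmb78 suspects is easy to demonstrate. The real Ollama parser is template-driven and not shown here; the regex-based extractor below is a hypothetical stand-in, just to show why a template that emits <tool> while the parser expects <tool_call> (or vice versa) silently yields zero tool calls:

```python
import json
import re

def extract_tool_calls(text, wrapper="tool_call"):
    """Pull JSON objects out of <wrapper>...</wrapper> spans in raw model
    output. A mismatched wrapper name returns an empty list, not an error."""
    pattern = re.compile(
        rf"<{wrapper}>\s*(\{{.*?\}})\s*</{wrapper}>", re.DOTALL)
    calls = []
    for match in pattern.finditer(text):
        try:
            calls.append(json.loads(match.group(1)))
        except json.JSONDecodeError:
            pass  # malformed JSON inside the wrapper is ignored
    return calls

output = '<tool_call>{"name": "search", "arguments": {"q": "gemma3"}}</tool_call>'
print(extract_tool_calls(output))                  # one call parsed
print(extract_tool_calls(output, wrapper="tool"))  # [] -- wrapper mismatch
```

Because the failure mode is an empty list rather than an exception, a template/parser mismatch looks like the model simply "never calling tools", which matches the intermittent failures reported in this thread.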
@tripolskypetr commented on GitHub (Mar 23, 2025):
@mmb78
If you're not so sure, change the third parameter of Adapter.

The difference between <tool> and <tool_call> was discussed in this issue: https://github.com/ollama/ollama/issues/8287

If the tools are not being called from time to time, take the 27b version.
@ParthSareen commented on GitHub (Mar 23, 2025):
Closing this issue out for now as it's not within scope. When there are updates to the model I'll follow up here!
@JMLX42 commented on GitHub (Mar 26, 2025):
Google just dropped this article:
https://ai.google.dev/gemma/docs/capabilities/function-calling
And the Ollama gemma3 model just got an update.
Is function calling on the table now?
@ParthSareen commented on GitHub (Mar 26, 2025):
Hey @JMLX42 - this is basically what I experimented with trying to template it out, as many people have done now. In the article it's mentioned that this is part of the prompt and that it can return output (which would be under the content) as a tool call in Python or JSON. The model is still not trained on the tool-focused keywords, which means you can't do things like passing in tool results to get the model to explain the result or use it in another way reliably.

So at this time, while we are not officially supporting it, we are working with the Gemma team to make sure the experience is the best it can be :) Hope this brings some clarity.
However, planning to test this out a bunch more and see if reliability is "good enough" at a certain size.
@softmarshmallow commented on GitHub (Apr 20, 2025):
Haven't tried, someone made a tool-compat distro.
https://ollama.com/PetrosStav/gemma3-tools:12b
@tonydamage commented on GitHub (May 19, 2025):
I think it is not just about emitting the correct tool calling command. Playing with gemma3-tools and comparing to other models, Gemma3 tends to analyze the output data and propose creating a code parser for the returned JSON or tables, while the others (qwen2.5, qwen3, command-r7b) actually use the returned data to answer the user's questions.
@markemus commented on GitHub (Jun 4, 2025):
Is this still not planned? I was really hoping to set up an agent with Gemma 3 and the tool compatible distro above is not working with langchain.
@ParthSareen commented on GitHub (Jun 4, 2025):
I really want to have reliable tools in Gemma but finding some difficulty with the 4b model to call tools. Definitely trying to get something working though :D