Mirror of https://github.com/ollama/ollama.git (synced 2026-05-06)
Closed · opened 2026-04-28 by GiteaMirror · 21 comments
Originally created by @Kinglord on GitHub (Aug 7, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6237
Hello,
This isn't a feature request, but it's the best category I could pick. This is really a question about merging PRs that expose an existing feature to Ollama users and that are being ignored or declined without explanation. I'm asking this to get more public visibility from the Ollama team on grammar features, specifically those already implemented in llama.cpp.
I understand Ollama provides JSON schema functionality as a way to direct and control model output; another popular approach is GBNF grammars, which llama.cpp currently supports and implements. Several PRs have been submitted to expose this feature to Ollama users, and they have either sat idle or been closed. This point is going to keep surfacing and making noise (a large help thread started in the Discord today) until Ollama makes a clear and public statement on it. If Ollama as a product has decided not to give users this choice, and is saying that anyone who wants to use or test this feature must do so outside of Ollama, then you need to let us (the community) know. If there is some problem with how the community is exposing the feature in these PRs, then again, just let us know so we can fix it. I understand that as a contributor it can be hard to see why a product does not want to give users more choices and options, and I think Ollama needs to clearly state why this decision has been made for the product.
This is not a post about which approach, GBNF or JSON, is better or worse - it is a post to clarify that there is community demand for this feature in Ollama, with Ollama apparently actively rejecting its inclusion based on what I have to assume are product calls the community has no visibility into. I hope this post will end that lack of clarity for all involved, so we all know Ollama's stance and, as a community, can stop bringing this up and submitting additional PRs. If anyone wants to start a more technical post and provide data on why one approach can be better than another, I welcome you to do so and link it to this topic.
My simple personal example is this: as a newer Ollama user, I would actually like to try both approaches to see which one works better for me and my product. Right now in Ollama I simply cannot, and from appearances (which can be deceiving) what's stopping me from testing both in Ollama is a simple code change to expose the llama.cpp feature to me. (Edit: it was brought to my attention that Ollama actually uses GBNF internally to enforce JSON syntax, so the only thing really missing is exposing this feature to the end user to customize or supply a different grammar.)
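For reference, the feature in question is already exposed by llama.cpp's bundled server as a per-request parameter — a minimal sketch, using llama.cpp's endpoint and field names, with an illustrative prompt and grammar:

```python
# A minimal sketch (not Ollama code): llama.cpp's bundled server accepts
# a GBNF grammar as a per-request parameter, which is the feature this
# issue asks Ollama to expose.
import json
import urllib.request

# Constrain the model to answer exactly "yes" or "no".
GRAMMAR = 'root ::= "yes" | "no"'

payload = {
    "prompt": "Is the sky blue? Answer yes or no: ",
    "n_predict": 4,
    "grammar": GRAMMAR,
}

req = urllib.request.Request(
    "http://localhost:8080/completion",  # default llama.cpp server address
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])  # "yes" or "no"
```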
There might be more, but for reference here are some links to other discussions about this topic as well as a link to Discord thread from earlier today. Thanks to the Ollama team for taking a look at this and helping align the community with their future response.
Discord:
https://discord.com/channels/1128867683291627614/1236730825928741034
Github PRs:
https://github.com/ollama/ollama/pull/565
https://github.com/ollama/ollama/pull/830
https://github.com/ollama/ollama/pull/1606
https://github.com/ollama/ollama/pull/2404
https://github.com/ollama/ollama/pull/2754
https://github.com/ollama/ollama/pull/3303
https://github.com/ollama/ollama/pull/3618
https://github.com/ollama/ollama/pull/4525
https://github.com/ollama/ollama/pull/5348
Github Issues:
https://github.com/ollama/ollama/issues/808
https://github.com/ollama/ollama/issues/1507
https://github.com/ollama/ollama/issues/3616
https://github.com/ollama/ollama/issues/4074
https://github.com/ollama/ollama/issues/4370
https://github.com/ollama/ollama/issues/6002
@NeuralNotwerk commented on GitHub (Aug 7, 2024):
I currently use llama.cpp for anything production that requires structured output. I'd love to see the feature in Ollama.
@coder543 commented on GitHub (Aug 7, 2024):
Even though Ollama’s core team has frustratingly not communicated it clearly anywhere that I’ve seen, my feeling is that they’ve been waiting on OpenAI to officially support this, in order to stay aligned with the OpenAI API specification as much as possible. Therefore, the single most relevant link to this conversation is probably this one: https://openai.com/index/introducing-structured-outputs-in-the-api/
Maybe we’ll finally get some movement on this.
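For reference, the request shape that announcement introduced looks like this (field names per OpenAI's documentation; the schema itself is illustrative):

```python
# OpenAI structured outputs: a JSON schema attached to the request.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "yes_no_answer",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {"answer": {"type": "string", "enum": ["yes", "no"]}},
            "required": ["answer"],
            "additionalProperties": False,
        },
    },
}
# Passed as, e.g., client.chat.completions.create(..., response_format=response_format)
```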
@MHugonKaliop commented on GitHub (Aug 8, 2024):
As one of the people who expressed interest in some of these PRs, I agree that it would be nice to have some feedback from the Ollama team on long-sitting PRs. Even a "don't have time for this" would be better than nothing at all. I can completely understand that it's almost impossible to react to all the activity of a successful project like Ollama, so maybe you could use milestones just to let other people know what your priorities are.
As @coder543 said, now that OpenAI supports this, maybe it will become a "must do" feature in order to keep up with OpenAI compatibility, but I think that this discussion is interesting.
And thank you so much for your work on this project!
@Kinglord commented on GitHub (Aug 15, 2024):
Bumping this as it's been a week and we still have complete radio silence from the Ollama team about their stance on this issue and the state of the numerous PRs and issues still open around it. I'm really not one to annoy people, but I truly and deeply believe that the community deserves a 5-10 minute response from Ollama so we can all get on the same page here.
@PaulCapestany commented on GitHub (Aug 15, 2024):
Re: exposing llama.cpp's grammar feature, it seems like @royjhan, @dhiltgen, and of course @jmorganca may be the most recently involved in potentially related features?
@Kinglord - appreciate you taking the time to write up your overview on this, as I'm also pretty interested in the topic! FWIW, it could very well be that the core ollama folks aren't "ignoring"/"declining" grammar support within ollama, perhaps they just haven't had the bandwidth and/or visibility into this issue yet (I mean, ollama has literally thousands of issues, and I don't think I saw them directly respond to any of the more recent grammar feature issues/PRs posted)
@jmorganca commented on GitHub (Sep 4, 2024):
Hi all, first off, I'm very sorry for the radio silence on adding structured outputs and/or grammars to Ollama. Thank you everyone who wrote PRs, filed issues and shed light on why the feature is valuable. And thanks @Kinglord for bringing this all into one mega-issue here, it's really helped me catch up.
The short answer is yes, let's add structured outputs to Ollama. Specifically, starting with specifying a JSON schema in the API similar to OpenAI and the existing JSON mode. PRs are very welcome for this, especially if we can do it incrementally.
I'm currently hesitant to add Context-Free Grammar (CFG) support, only so we can focus on making JSON-schema based structured outputs really fast and reliable first. As you may have seen from experimenting with CFGs, they can be tricky to get right, and we've seen them cause models (especially smaller ones) to produce unnatural output (e.g. repeating whitespace indefinitely). I mostly just wouldn't want new users trying the API to hit a usability wall or performance issues if a JSON-schema based approach can work for them (especially since a ton of tooling is supporting this since August).
In terms of what took so long: we've been focusing on fleshing out API features (e.g. tool calling, suffix/fim, embeddings), catching up on OpenAI compatibility, and making the existing API surface area faster and more reliable (there's still lots to do here and some great work is happening – PRs to help with performance and reliability are always super welcome!). This isn't a great reason for the radio silence but I thought it would be helpful to share what the maintainers have been up to in the meantime!
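For context, the "existing JSON mode" mentioned above is the format parameter Ollama already accepts — a minimal sketch of the request shape, with an illustrative model name; the proposal discussed here would extend it to carry a full schema:

```python
# The existing JSON mode, for contrast with the schema-based proposal:
# Ollama's /api/generate accepts format="json" (enforced internally via
# a GBNF grammar, as noted earlier in this thread).
payload = {
    "model": "llama3.1",  # illustrative model name
    "prompt": 'List three primary colors as JSON under a "colors" key.',
    "format": "json",     # today: only the literal string "json"
    "stream": False,
}
# The proposal: let this field (or a sibling) carry a full JSON schema,
# mirroring OpenAI's response_format.
```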
@mitar commented on GitHub (Sep 4, 2024):
@jmorganca So I made a PR adding JSON Schema support here: https://github.com/ollama/ollama/pull/5348. In contrast with other PRs, this one really works because it also updates the C server part, which is necessary for this to work (based on the example C server from llama.cpp).
@cesarandreslopez commented on GitHub (Sep 13, 2024):
I'm looking forward to seeing JSON Schema support merged!
@Kinglord commented on GitHub (Sep 16, 2024):
Big ❤️ @jmorganca - appreciate the reply and also super pumped to see this make its way into Ollama!
@rlouf commented on GitHub (Oct 9, 2024):
Outlines author here. I don't know if this can help, but we are about to release a Rust port of our structured generation algorithms, which is of course faster, and which can also be compiled as a shared library and called from C++ code.
@mitar commented on GitHub (Oct 9, 2024):
So llama.cpp has a C implementation that Ollama just has to call into.
@rlouf commented on GitHub (Oct 9, 2024):
Of course, only mentioning this because the approach is different and so is runtime latency.
@cpfiffer commented on GitHub (Oct 16, 2024):
Related: https://github.com/ollama/ollama/issues/6473
@tucnak commented on GitHub (Oct 22, 2024):
Guys, I'm sorry but your efforts are completely misguided. What you should be doing instead is parsing the system prompt for ```gbnf code blocks. This approach would not impact the API surface, and it would also allow for dynamically generating the grammar on the fly from any existing Ollama client.
I implemented this a few months back for our closed-circuit agent environment, and it works beautifully—as a substitute, in some positions, for a wire controller like AICI. It works really well as a workflow primitive (block) or as a tool in the agent environment. The screenshot below is the implementation of the Grammatical tool we built in Dify; it accepts a prompt and a schema in either JSON schema, GBNF, or text-instruction format.
This is a really powerful primitive, and it reduces hallucinations considerably!
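A rough sketch of the prompt-parsing approach described above (this is not tucnak's actual patch; the function name and fence handling are illustrative):

```python
# Sketch: scan the system prompt for a fenced gbnf block and, if found,
# strip it out and forward it to the backend as the per-request grammar.
import re

FENCE_RE = re.compile(r"`{3}gbnf\n(.*?)`{3}", re.DOTALL)

def split_grammar(system_prompt: str) -> tuple[str, str | None]:
    """Return (prompt_without_block, grammar_or_none)."""
    m = FENCE_RE.search(system_prompt)
    if m is None:
        return system_prompt, None
    return FENCE_RE.sub("", system_prompt, count=1), m.group(1)

fence = "`" * 3  # avoid embedding a literal triple backtick in this snippet
prompt, grammar = split_grammar(
    f'You are a classifier.\n{fence}gbnf\nroot ::= "spam" | "ham"\n{fence}\n'
)
# `grammar` would then be passed through as llama.cpp's "grammar" field.
```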
@isaac-mcfadyen commented on GitHub (Nov 15, 2024):
Currently the underlying backend to Ollama (llama.cpp) accepts a grammar as an optional parameter in the actual request.
By actually parsing the input prompt for a code block like ```gbnf```, it allows users to arbitrarily inject whatever grammar they like, which can be a big security issue (e.g. a user of a chatbot on example.com tells the model "generate me some output with grammar x" and crashes the backend because it doesn't find the generated fields it expects). If that's a non-issue in your case then great—but IMO Ollama should use the existing platform's method instead of doing its own, non-standard thing that might easily turn into a security issue.
@tucnak commented on GitHub (Nov 18, 2024):
Hey, security is a fair point. I really dig security! Correct me if I'm wrong, but what you're saying is that picking up grammars from untrusted input is undesirable, right? I don't think anybody would argue with that... However, I also wonder what percentage of Ollama users actually expose their instances directly to untrusted clients? (Not to mention that system-prompt grammars don't necessarily have to be enabled by default.) I mean, surely for any kind of meaningful application you would want to implement some RBAC, QoS, caching, what have you—given that it's only able to serve one request at a time, and all. In our case, we have multiple ollama processes (one per model, basically) behind a gateway that does a bunch of things, including token accounting, but most importantly RBAC. I like to think we take security seriously, and I can't imagine Ollama in its current shape or form being self-sufficient to that end by any stretch of the imagination.
The main issue is that there isn't a "standard" way to do grammars that propagates throughout the stack, not really! There is the "grammar" parameter in the llama.cpp server API, sure, and Ollama supports it internally, of course. However, none of the actual clients support or expose it, and they are unlikely to do so, for different reasons. For example, we're using Dify, which doesn't allow customizing the Ollama parameters per request; it's just one set of settings, and that's it. To bring in grammars, they would need to augment their whole UI, and that's obviously not unique to Ollama, so now you have divergent UIs per provider, which are hard to support, etc... I'm sure you know why that's problematic.
The reason I bothered with my patch in the first place is that, at the time, it enabled us to create ad-hoc grammars in the existing agent environment, as well as in multiple existing tools and clients, without ever modifying any of them or the internal APIs. The path of least resistance, if you will. But then again, what do I know! At any rate, I don't believe merges like these are actually that important: my patch is as easy to rebase against upstream as any other change, and it will do. The same goes for the dozen prior implementations in this thread.
To be honest, if I were the OP, I would honestly close the issue at this point. 😃
@isaac-mcfadyen commented on GitHub (Nov 18, 2024):
Fair point! Just wanted to point that out since I wasn't aware of the specifics of your application.
For sure, but then again, Ollama is mainly an API-based project and UIs or other clients that build on top of Ollama are free to do as they wish. The idea is that Ollama should make it available and then the client (UI or otherwise) can decide whether it wants to use it or not.
Also, clients probably don't support it because it's not available in the Ollama API yet 🙃
I see from above that PR #5348 has been opened to add grammar support, so I'm not sure closing the issue would be productive until that PR is either merged or closed for some other reason.
@ParthSareen commented on GitHub (Dec 5, 2024):
Hey everyone!
With the merging of #7900, we're introducing structured outputs: the ability to go from a JSON schema to structured generation! Really appreciate all the feedback and contributions. Extremely thankful for all of you being so involved in this 🙏🏽
There are a few things we're still keeping in mind over the next few months. The first focus is going to be performance - speed and accuracy. There has been a lot of research coming out around this; we're keeping a close eye on it and will see how we can integrate some of it into Ollama. We're also thinking about how to support structured generation in the long term in a way that plays nicely with a lot of the work we're doing on our new engine.
Stoked for the coming few months - we hope to improve both performance and accuracy around sampling and constrained decoding.
Thank you again for your patience; we're super excited to get this out in an upcoming release! Will spin out more issues around this - happy to keep you all posted!
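For completeness, here is the shape the shipped feature took, per Ollama's structured outputs documentation: the existing format field now also accepts a full JSON schema (the model name and schema below are illustrative):

```python
# Ollama structured outputs: pass a JSON schema as the "format" value.
import json
import urllib.request

payload = {
    "model": "llama3.1",  # illustrative model name
    "messages": [{"role": "user", "content": "Tell me about Canada."}],
    "stream": False,
    "format": {
        "type": "object",
        "properties": {"name": {"type": "string"}, "capital": {"type": "string"}},
        "required": ["name", "capital"],
    },
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",  # default Ollama address
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["message"]["content"])  # schema-conforming JSON
```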
@0xdevalias commented on GitHub (Feb 18, 2025):
@ParthSareen Curious, would that potentially include support for Coalescence/similar?
@ParthSareen commented on GitHub (Feb 18, 2025):
@0xdevalias Currently working on a new constrained sampling engine for fast and accurate structured outputs. Our thinking here was that any external library would need a good amount of integration to be useful. For sampling, that would mean exposing logits and the tokenizer, plus integration with the runner. It would also mean we can't iterate fast when a new SOTA for structured outputs comes out.
So unlikely for now but always keeping an eye out :)
@bZichett commented on GitHub (Mar 19, 2026):
Any update @ParthSareen ?