Mirror of https://github.com/ollama/ollama.git (synced 2026-05-07 16:40:08 -05:00)
Open · opened 2026-04-28 20:30:41 -05:00 by GiteaMirror · 59 comments
Originally created by @itsPreto on GitHub (Nov 27, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7865
Originally assigned to: @ParthSareen on GitHub.
Model Context Protocol, as the name suggests, standardizes interaction with external data sources.
Official Github
15-minute-walkthrough-yt
@Rakhsan commented on GitHub (Nov 28, 2024):
+1 should be useful
@josx commented on GitHub (Nov 28, 2024):
Some technical details for implementation https://glama.ai/blog/2024-11-25-model-context-protocol-quickstart
@ptomczyk commented on GitHub (Nov 28, 2024):
This is indeed an interesting topic, but I can't even wrap my head around MCPs. I briefly understand how it's utilised by Claude Desktop, but what would be the possible outcome of "implementing" (integrating?) this protocol into Ollama. What is the potential benefit? I can't even come up with the use case because I'm lost in the details of this. Can someone point me in the right direction of thinking or understanding this as a concept? I'm not asking for explanation of MCP but rather Ollama+MCP.
@24601 commented on GitHub (Nov 29, 2024):
MCP is very important and impressive and likely will become a dominant standard but it still seems more “ollama adjacent” than core feature of ollama. @ptomczyk I think you’re correct that this isn’t an ollama feature.
Perhaps someone would want an ollama MCP server that would allow hosts (in MCP parlance) to call ollama models or even other models (through a host) to call an ollama model.
But, all in all ollama strictly (beyond providing a very bare convenience CLI) isn’t a “host” (in MCP parlance) and I think isn’t seeking to be.
Supporting those who want to enable ollama to be called by servers makes sense but I’m also not seeing a credible core use case for MCP in-tree to ollama ATM.
@gpertea commented on GitHub (Nov 29, 2024):
👍
The client/host/server terminology is a bit confusing around MCP (for me), but it seems to me that it could be a great benefit for the community if ollama were to also provide a basic working implementation of the MCP protocol as a client (or "host") to be distributed alongside the usual (+chat) API (like Claude desktop chat app does).
I think that could greatly simplify (and standardize the interfacing with) 3rd party chat UI implementations that can make use of new tools/functions (e.g. adding web search to any model that supports tool use, from any chat UI, etc.). It would also be a great way to perform local / low cost prototyping while experimenting with various models for robust, generic agent/tool designs using MCP.
Perhaps this could also make it easier for developers to explore/test ways to elevate the functionality of cheap local models to SOTA commercial models' performance - when using MCP-based tools.
@anan1213095357 commented on GitHub (Nov 30, 2024):
All along, we've been trying to enhance the capabilities of large models primarily through function calls, but this approach has its limitations. It would be great to have a middleware that can directly interface with databases. Does MCP support database integration? Can it connect to services like web APIs?
@erodactyl commented on GitHub (Nov 30, 2024):
@anan1213095357 MCP is a more standardized version of function calling. As you can read more about here - it's a way to connect to MCP servers, negotiate capabilities with them (i.e. what functions are available to call from the server) and use those capabilities.
Maybe we can add a few minimal built-in (or somehow added on) functions to Ollama and have a separate software that would act as the MCP client. The function would act as an intermediary to use that MCP client.
I do also believe that having local MCP capabilities would greatly improve what we can do with local LLMs.
@24601 if I understand everything correctly, MCP is a way for LLMs to call external sources, not vice versa. While calling LLMs is very useful, LLMs calling external services in standardized ways like automatic capability negotiation is also crucial, similar to function calling. Edit: I think I now better understand what you meant - someone would create a "host" which would do everything the MCP is supposed to do and use Ollama - similar to Claude desktop app being the host without the model itself, but still connecting all the necessary information and passing it to the model.
@SwePalm commented on GitHub (Nov 30, 2024):
The naming might confuse some people, but if Ollama were to implement the open sourced MCP client, then we could use all MCP servers available, which is just a new name / wrapper for tools... but one that aspires to be implemented in a "standard" way.
The cool thing is that it is pretty easy to build your own MCP server that could wrap your custom Python function to do... whatever you want to add to your LLM. If this takes off and we see a community building a ton of MCP servers, Ollama needs to integrate an MCP client.
Just think of it as equal to use Langchain tools...-ish.
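For readers wondering what wrapping a custom function in an MCP server actually involves, here is a minimal sketch in Go of a stdio server that answers tools/list and tools/call over newline-delimited JSON-RPC. The add_numbers tool and every other name in it are invented for illustration, and the field names follow one reading of the MCP spec; a real server would use an official SDK such as the Go SDK mentioned later in this thread.

```go
// Toy MCP-style stdio server exposing one invented tool (sketch only:
// no capability negotiation, no error responses, no notification handling).
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type rpcRequest struct {
	JSONRPC string          `json:"jsonrpc"`
	ID      json.RawMessage `json:"id"`
	Method  string          `json:"method"`
	Params  json.RawMessage `json:"params"`
}

// reply writes one JSON-RPC result per line, as the stdio transport expects.
func reply(id json.RawMessage, result any) {
	out, _ := json.Marshal(map[string]any{"jsonrpc": "2.0", "id": id, "result": result})
	fmt.Println(string(out))
}

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		var req rpcRequest
		if err := json.Unmarshal(scanner.Bytes(), &req); err != nil {
			continue
		}
		switch req.Method {
		case "initialize":
			reply(req.ID, map[string]any{
				"protocolVersion": "2024-11-05", // assumed protocol revision
				"capabilities":    map[string]any{"tools": map[string]any{}},
				"serverInfo":      map[string]any{"name": "adder", "version": "0.0.1"},
			})
		case "tools/list":
			reply(req.ID, map[string]any{"tools": []any{map[string]any{
				"name":        "add_numbers", // the custom function being wrapped
				"description": "Add two numbers",
				"inputSchema": map[string]any{
					"type": "object",
					"properties": map[string]any{
						"a": map[string]any{"type": "number"},
						"b": map[string]any{"type": "number"},
					},
					"required": []string{"a", "b"},
				},
			}}})
		case "tools/call":
			var p struct {
				Name      string             `json:"name"`
				Arguments map[string]float64 `json:"arguments"`
			}
			_ = json.Unmarshal(req.Params, &p)
			reply(req.ID, map[string]any{"content": []any{
				map[string]any{"type": "text", "text": fmt.Sprintf("%g", p.Arguments["a"]+p.Arguments["b"])},
			}})
		}
	}
}
```

A host such as Claude Desktop (or a future Ollama integration) would launch a binary like this from its config and talk to it over stdin/stdout.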
@ryana commented on GitHub (Nov 30, 2024):
My first thought is that this would actually live inside OpenWebUI. Someone else had the same idea: https://github.com/open-webui/open-webui/discussions/7363
@ptomczyk an ollama native integration would look something like:
imo this would be a big additional set of functionality to add to ollama and may not be philosophically aligned with the initial goals of this project. However, it would be relatively easy to spin up another server in front of ollama which handles this.
@24601 commented on GitHub (Dec 1, 2024):
Check out the sampling and other modes, it’s quite clever and functionally creates bi-directional comms. Someone has even written a server that allows Claude to call OpenAI LLMs. It’s far more than just what you’re describing. It is quite nice and very understatedly named.
@erodactyl commented on GitHub (Dec 1, 2024):
Yes, an LLM can both be a context provider and a context consumer.
So the question still remains, should we implement MCP host in Ollama? Or is it more semantically correct to have another software which wraps Ollama and also acts as an MCP host? Similar to what @ryana suggests for open-webui.
@24601 commented on GitHub (Dec 1, 2024):
@erodactyl definitely a bit of a scope question; I think maintainers and stakeholders probably have to ask themselves the question of scope independent of MCP. As it stands now Ollama can be used with the MCP OpenAI server by pointing it to the Ollama endpoint, and as for host, I would think if the core feature work is done or the backlog is thin it may be worth growing into adjacent spaces, but for now the point made about this being more appropriate for the UI projects in the ecosystem seems apt. But I'm not a maintainer, and this is just my thought as someone actively implementing MCP in the ecosystem.
@andrewssobral commented on GitHub (Dec 1, 2024):
@erodactyl @24601 @ryana @itsPreto @ParthSareen
In my opinion, the ideal approach would be to implement the MCP host directly in Ollama. This is because we need to consider enterprise solutions that communicate directly with Ollama for privacy reasons.
I also don’t think it’s a good idea to “delegate” the integration with MCP to external applications (e.g., user interfaces like open-webui) because not all “users” of Ollama are people. Many are integrated systems that can communicate directly with Ollama using the Ollama Python SDK.
I’m also extremely excited to see this feature implemented natively in Ollama because I believe it will open up many possibilities, especially for those who prioritize privacy and are looking to develop automated solutions.
@24601 commented on GitHub (Dec 2, 2024):
@andrewssobral - I am genuinely (not rhetorically) asking the following questions, as I really think MCP is great and I may very well just be missing or failing to grasp your vision for what you're suggesting, so please know that/read it with that genuine curiosity and non-oppositional stance:
What would that look like? MCP has many modes of operation. Are you suggesting, for example, that the CLI for Ollama function as an MCP Host? Are you suggesting that Ollama maintain an official MCP Server that allows other LLMs to call Ollama APIs (in which case I wonder why the extant OpenAI API Server wouldn't suffice)?
Again, I am just a rando in the ecosystem intersection of Ollama and MCP implementing MCP, so I really do see that what MCP is doing is quite great/large and should be taken seriously. I just have not quite seen/heard a concrete use case or implementation for MCP in-tree to Ollama yet, which doesn't mean there isn't one, but what are you thinking, specifically? So far, "support MCP" is pretty much all I've seen/heard, which is kind of like "let's go somewhere!", which, well, yes, I'm all for adventure, but where makes a huge difference, and I just want to hear what your ideas are specifically, because MCP has many interesting uses, and I would be very interested to hear where you think MCP overlaps with core in-tree Ollama feature/function and the scope of the project.
@andrewssobral commented on GitHub (Dec 2, 2024):
@24601 I believe the ideal approach would be for the Ollama CLI to function as an MCP Host, so that when a user makes a request in an external tool (e.g., “search for this on the web”), Ollama can initiate communication with the appropriate tool. To enable this, users should be able to register which MCP Servers they want to use, similar to how it is done with Claude Desktop.
@erodactyl commented on GitHub (Dec 2, 2024):
I fully agree with @andrewssobral, I think you are looking at this in reverse. We don't want other AI tools to call Ollama as an MCP server. We want Ollama to be the MCP host / client, similar to Claude Desktop. It will READ external data sources, not PROVIDE the data to other MCP clients.
We want to give standardized access to external tool reading to people / companies who use Ollama. They will be able to use any MCP server out of the box. We already have many MCP servers like filesystem, Postgresql, Google Maps and you can find many others in this repo. All of this, and many more data source integrations that will become available over time, will be available to use directly with Ollama. This will provide huge value for local LLMs.
@24601 commented on GitHub (Dec 2, 2024):
Fair enough @andrewssobral and @erodactyl - those seem like nice use cases, and leads me to wonder about a couple (again, not leading or rhetorical, but genuine) questions:
I am not overly familiar with Ollama's governance model, and it may very well be that if a PR for such a feature/function tickles a maintainer's fancy it ends up in-tree (the beauty of OSS!), so there may just not be this level of formalism or study here, but I genuinely wonder how people use Ollama and how that may be shifting.
Personally (and in the circles I work in), which I hardly will assume is or should be representative of the broad and diverse Ollama community, I see the Ollama CLI as a "smoke test" and Ollama is almost always accessed as an API by one of the many apps in "Community Integrations" in the README.md, etc. In that sense/use case, Ollama is an inference engine at its core, bringing the ability to pull/package models and run them on diverse hardware behind a relatively easy to use/build/deliver toolset/library/API. In that use case (again, hardly the only one, nor one I will claim is or is not dominant/primary/core; that's not a discussion for me and one I'll leave to the maintainers/governance process stakeholders), the embodiment of MCP I would find most useful would actually be substantially different than packaging Ollama as an "MCP Host" to compete/function with OpenWebUI/Claude Desktop/etc (a steep and unnecessary hill that seems to be a side journey IMO). I think an Ollama MCP server would be one that would allow other LLMs to discover (via search) and pull/bring up for inference a broad gamut of models on a given machine (e.g. imagine Ollama providing Claude the ability to search and pull the full spectrum of models and then, once the download is complete, begin to make calls against them!). This is far more 1) unique 2) central to what at least I perceive to be the unique and meaningful value proposition of Ollama in the ecosystem 3) differentiated and valuable to the user/ecosystem 4) non-competitive/distractive from the core work.
Of course, "the best at everything" is something that is very appealing, but before any MCP-fun at the Ollama layer as a host, I think Ollama could benefit much more from moving things forward that are in the core-inference-engine-scope like:
Again, don't mean to sound like a downer or against any of this, just an interesting conversation with others who see the value of MCP and are users of Ollama that I think we can learn from each other in how we use Ollama and what we value and the future direction of the value of Ollama as it grows!
@dsp-ant commented on GitHub (Dec 2, 2024):
Hey,
I got a link to this conversation. I am one of the MCP core team members. We'd love to see Ollama and the larger Ollama ecosystem adopt MCP in the way the community feels best suited for Ollama. After all, we want this to be an open protocol with involvement from the wider open source community. We are happy to answer any questions or discuss any problems/feedback, etc. you have on MCP.
@andrewssobral commented on GitHub (Dec 2, 2024):
@24601 I see that you view the idea of MCP in the opposite direction from what we are envisioning in this post. Imagining Ollama offering Claude the ability to search and bring the full spectrum of models wouldn’t solve the privacy problem. Additionally, there are already MCP servers that allow, for instance, Claude Desktop to query models from OpenAI and others, and these even seem to support Ollama (see mcp-server-openai and unichat-mcp-server). In summary, this isn’t the direction we’re considering.
Moreover, @dsp-ant , correct me if I’m wrong, making Ollama natively an MCP Host can be seen as an extension of the Function Calling/Tools concept (which Ollama already supports). The advantage of using MCP servers compared to the current Function Calling/Tools support is that we can leverage the entire ecosystem being built around MCP technology instead of requiring users to implement Function Calling/Tools manually every time they need Ollama to communicate with a third-party tool.
Keep in mind that integrating Ollama with MCP Server support is a significant step for enterprise solutions that need automation with privacy. A major plus of MCP is its focus on privacy, where MCP Servers run locally and are responsible for accessing private data, ensuring that OpenAI’s and Claude’s LLMs don’t have direct access to sensitive information. Furthermore, an MCP server alone solves only part of the problem. Even if MCP servers isolate data access, the data itself still flows to OpenAI or Claude LLMs, which many companies aim to avoid. To ensure a complete privacy pipeline, Ollama needs to act as an MCP Host and communicate with MCP servers, guaranteeing that all data communication happens internally without data leakage.
When I referred to the Ollama CLI, I meant the core of Ollama, because, as you said, not everyone uses the CLI (via command prompt or terminal) but instead uses the Ollama Python SDK to integrate with other third-party tools. Naturally, having MCP support both in the CLI and via its API (Python SDK) is crucial to enable integration and automation with third-party software.
@bartolli commented on GitHub (Dec 3, 2024):
MCP-LLM Bridge
I created a bridge for using MCP tools with Ollama and other OpenAI-compatible endpoints. The bridge translates between MCP's tool interface and OpenAI's function calling spec.
While I originally built it to work with OpenAI's API, it also works great with Ollama, LM Studio, or any endpoint that implements the OpenAI API spec.
The code is up on GitHub here. Feel free to check it out and let me know what you think!
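To make the schema-translation idea above concrete, here is a hedged sketch of the mapping in Go. The struct types are minimal stand-ins rather than any SDK's real types; the point is just that an MCP tool's name/description/inputSchema slots almost directly into an OpenAI-style function tool, which is also the shape that OpenAI-compatible endpoints (including Ollama's) accept.

```go
package bridge

import "encoding/json"

// MCPTool is a minimal stand-in for a tool returned by an MCP server's tools/list.
type MCPTool struct {
	Name        string          `json:"name"`
	Description string          `json:"description"`
	InputSchema json.RawMessage `json:"inputSchema"`
}

// OpenAITool is a minimal stand-in for an OpenAI-style tool definition.
type OpenAITool struct {
	Type     string `json:"type"`
	Function struct {
		Name        string          `json:"name"`
		Description string          `json:"description"`
		Parameters  json.RawMessage `json:"parameters"`
	} `json:"function"`
}

// ToOpenAITool maps one MCP tool descriptor onto the function-calling shape.
// Both sides describe parameters with JSON Schema, so the schema passes through.
func ToOpenAITool(t MCPTool) OpenAITool {
	var out OpenAITool
	out.Type = "function"
	out.Function.Name = t.Name
	out.Function.Description = t.Description
	out.Function.Parameters = t.InputSchema
	return out
}
```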
@dsp-ant commented on GitHub (Dec 3, 2024):
To some degree this is true. Note that MCP supports more than just tools. Resources and Prompts (which effectively are just templates) are also part of the specification. For a CLI runtime such as Ollama, focusing purely on tool integration might be useful as an abstraction to use the wider ecosystem and to make it fairly trivial for people to add tools in any programming language they feel most comfortable with. If Ollama implements MCP as a host, I would suggest also supporting Sampling, such that an MCP server can ask Ollama for completions, effectively enabling little standalone agents of arbitrary complexity.
Resources and Prompts are likely most useful to applications (particularly UIs) wrapping Ollama that do additional logic. Prompts are meant as user-driven additions (e.g. Zed implements these as slash commands; /pg-schema <table> will give you the schema of a database table from the postgres MCP server). Resources are meant as file-like objects that the application itself can decide how to use (this is most useful for applications that do additional context selection, etc.).
Hope that helps
@sammcj commented on GitHub (Dec 5, 2024):
Here's a gross hack up I did of an Ollama MCP bridge the other day, absolutely do not use it for anything other than toying with - https://github.com/sammcj/gomcp
@24601 commented on GitHub (Dec 5, 2024):
You certainly aren't wrong, I am just curious what level of end-user application functionality lives in-tree and what does not and where that line is now and how it may/should evolve. Seems like @dsp-ant does draw out a very good distinction between the various use cases and feature functions that seems reasonable.
@josx commented on GitHub (Dec 5, 2024):
Here a cli to use with ollama https://github.com/chrishayuk/mcp-cli
@nixoid commented on GitHub (Dec 10, 2024):
cli is a good start but server-side implementation is def. necessary for mobile devices and other clients
@josx commented on GitHub (Dec 23, 2024):
Question:
Is there any way to use any of the above bridges without using their CLI interfaces? Maybe as a proxy for API calls?
I want to integrate Open WebUI as the interface, which I have already configured with Ollama as the inference backend.
@yuyangchee98 commented on GitHub (Dec 23, 2024):
@josx this may be what you want https://github.com/SecretiveShell/MCP-Bridge
@ggozad commented on GitHub (Jan 15, 2025):
Hey,
I am the author of oterm, a terminal client for Ollama. I was told about this discussion and found it super intriguing.
I spent some time bridging MCP tools to Ollama tools so they can be used with oterm. This will need some polishing but it opens so many possibilities!
You can follow progress here, and see a screenshot of using the "git_status" tool from the git MCP server in oterm 😎
I will try to merge and release as soon as I test a bit more and document properly.
@ggozad commented on GitHub (Jan 20, 2025):
oterm 0.8.0 will now connect to MCP servers and convert MCP tools to Ollama tools for use with tool-supporting models.
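In practice the bridge boils down to a loop: advertise the converted tools to a tool-capable model, route any tool calls it emits back to the MCP server, and feed the results into the next turn. Below is a rough sketch of one round trip using the Ollama Go client (github.com/ollama/ollama/api); callMCPTool and the qwen2.5 model name are placeholders introduced here, not anything from this thread, and field names should be checked against the api package version you build with.

```go
// Sketch of one Ollama <-> MCP tool round trip (not oterm's actual code).
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ollama/ollama/api"
)

// callMCPTool is a hypothetical stand-in for issuing tools/call to an MCP server.
func callMCPTool(name string, args any) (string, error) {
	return fmt.Sprintf("result of %s(%v)", name, args), nil
}

func main() {
	client, err := api.ClientFromEnvironment()
	if err != nil {
		log.Fatal(err)
	}

	var tools []api.Tool // assume these were converted from the server's tools/list
	messages := []api.Message{{Role: "user", Content: "What is the git status of this repo?"}}

	stream := false
	req := &api.ChatRequest{Model: "qwen2.5", Messages: messages, Tools: tools, Stream: &stream}

	err = client.Chat(context.Background(), req, func(resp api.ChatResponse) error {
		// For every tool call the model emits, run the tool on the MCP server
		// and append the result as a "tool" message for the follow-up turn.
		for _, tc := range resp.Message.ToolCalls {
			out, err := callMCPTool(tc.Function.Name, tc.Function.Arguments)
			if err != nil {
				return err
			}
			messages = append(messages, resp.Message, api.Message{Role: "tool", Content: out})
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```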
@defaultsecurity commented on GitHub (Mar 6, 2025):
Yes, please +1!
@diiyw commented on GitHub (Mar 17, 2025):
Yes, please +1!
@TeamDman commented on GitHub (Mar 23, 2025):
I've only briefly explored MCP stuff, but my understanding is it's basically an abstraction over tool-calling that adds an approval stage for tool use? This seems like something not impossible to do well as a helper library/program, rather than something that can only be done well by upstreaming it.
I love Ollama, and I know that developer time is a limited resource.
To add my voice, this feature isn't a priority for me and I hope Ollama contributors are happy spending their time as they see fit.
If only there was some way that I could use language to communicate with the computer to figure out how hard it could possibly be to implement this myself, if it were such a pressing need for me.....
@ggozad commented on GitHub (Mar 23, 2025):
So I have spent quite some time looking at MCP from the perspective of implementing what makes sense in terms of my Ollama client oterm. I have already implemented a Tool bridge and plan to also support Prompts, Sampling and Resource. I find the idea great and super useful but I am not so sure how Ollama would fit in or if it needs to. Here are some thoughts:
Tools: that's a no brainer. Ollama already has tool support. It could definitely improve (and there are a number of issues open for that, unrelated to MCP) but all that is needed is a bridge and there are already some libraries out there that can be used. This is atm well handled by clients that just do that: fetch MCP tools, convert them and feed them to Ollama.
Resources: this seems irrelevant to me from the perspective of Ollama. It's a feature best handled by clients who can use them to provide context to LLMs.
Prompts: again a feature to be consumed by the client. The MCP server constructs a list of messages that the client can feed to Ollama.
Sampling: now this is kind of interesting. Ollama could use this to give direct access to completions on demand using its hosted models. There is a significant amount of work to do this and I haven't seen other implementations. I am planning to do it (again as a bridge to Ollama) with oterm but it's not a priority.
Roots: a client feature. Can't see how Ollama would use this.
Do I miss something here?
@lemassykoi commented on GitHub (Mar 23, 2025):
I would have thought to use Resources as "tool without input"
@ggozad commented on GitHub (Mar 23, 2025):
Resources can have input (the dynamic ones), which of course would mean "tool with input".
They also support "Updates".
I think the intention (which is admittedly vague) is that they are meant to be consumed from clients to serve context.
@pavleb commented on GitHub (Mar 25, 2025):
@ggozad Following your line of thought and the description here: https://developers.redhat.com/blog/2025/01/22/quick-look-mcp-large-language-models-and-nodejs#, a simple conversion of the JSON from MCP discovery makes this compatible with tools. In the link above, check the Ollama section. So following this, aren't tools enough, or at least close enough?
@ggozad commented on GitHub (Mar 25, 2025):
Yeah, I think your implementation is straightforward and very similar to what I ended up doing in oterm.
Judging from what is out there, most MCP servers are just about Tools and the rest seems secondary. I just added support for Prompts in oterm and like I said plan to do more, but yes, Tools gets you coverage for connecting most MCP servers out there to Ollama.
@ParthSareen commented on GitHub (Mar 28, 2025):
Hi folks - just wanted to give you all a shout to say that we're keeping a close eye on this to see where we best fit into this story. You are heard and please continue putting down whatever thoughts you have on MCP, us, and how you would like to use Ollama with it (if you don't already do so). And yes improvements to tool calling and structured outputs are in the works to hopefully improve reliability! 🙏🏽 Thank you!
@moresearch commented on GitHub (Mar 29, 2025):
Thanks @ParthSareen
Here is a Golang implementation that uses Ollama; it had to do some extra parsing to convert from MCP tools to Ollama tools to make it work. If Ollama supported this out of the box it would be much more reliable.
https://k33g.hashnode.dev/building-a-generative-ai-mcp-client-application-in-go-using-ollama
@sheffler commented on GitHub (Apr 2, 2025):
See https://github.com/ollama/ollama/pull/10091 which helps Ollama handle tool argument types better. (Works great with oterm!)
@ggozad commented on GitHub (Apr 13, 2025):
oterm now supports MCP Sampling 🎉
This basically allows connected MCP servers to request completions from Ollama-hosted LLMs.
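On the Ollama side, fulfilling a Sampling request is essentially "turn the MCP server's sampling/createMessage payload into an ordinary chat completion and return the text". Here is a hedged sketch using the Ollama Go client; the samplingRequest/samplingResult types are simplified stand-ins for the spec's message shapes, not oterm's implementation.

```go
package sampling

import (
	"context"

	"github.com/ollama/ollama/api"
)

// samplingRequest/samplingResult are simplified stand-ins for the MCP
// sampling/createMessage request and result shapes.
type samplingRequest struct {
	Messages []struct {
		Role string
		Text string
	}
	MaxTokens int
}

type samplingResult struct {
	Role  string
	Model string
	Text  string
}

// HandleSampling fulfils an MCP sampling request with a local Ollama model.
func HandleSampling(ctx context.Context, client *api.Client, model string, req samplingRequest) (samplingResult, error) {
	msgs := make([]api.Message, 0, len(req.Messages))
	for _, m := range req.Messages {
		msgs = append(msgs, api.Message{Role: m.Role, Content: m.Text})
	}

	var out samplingResult
	stream := false
	err := client.Chat(ctx, &api.ChatRequest{Model: model, Messages: msgs, Stream: &stream},
		func(resp api.ChatResponse) error {
			out = samplingResult{Role: "assistant", Model: resp.Model, Text: resp.Message.Content}
			return nil
		})
	return out, err
}
```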
@reustle commented on GitHub (May 5, 2025):
Worth mentioning here that the latest MCP spec for remote servers using Streamable HTTP (taking over for SSE) has landed in the Typescript SDK, MCP Inspector, etc. There's still a lack of clients that support it, but backwards compatible servers w/ SSE as well are possible.
https://modelcontextprotocol.io/specification/2025-03-26/changelog#major-changes
@bu2 commented on GitHub (May 23, 2025):
I use LibreChat as MCP client powered by local Ollama models and it works fine with Qwen 3 32b.
@rkonfj commented on GitHub (Jun 26, 2025):
@ParthSareen The official modelcontextprotocol/go-sdk is out.
@itsPreto commented on GitHub (Jun 30, 2025):
Hi @ParthSareen I feel like there should be enough information and feedback in here to move forward with some kind of decision -- as others have mentioned, the ecosystem and overall adoption have grown quite a lot since MCP came out.
@ParthSareen commented on GitHub (Jul 6, 2025):
We're working on it! @itsPreto and thanks @cherrydra
@ncolesummers commented on GitHub (Aug 29, 2025):
Is there an existing branch where this feature is being developed? I would be happy to contribute to this. It would unblock multiple projects I'm working on.
@vielhuber commented on GitHub (Oct 1, 2025):
Do I understand it correctly that in the future Ollama supports MCP (Model Context Protocol) server configuration similar to what Anthropic provides in their API?
Anthropic's Claude API supports MCP server configuration as shown in their MCP Connector documentation (https://docs.claude.com/en/docs/agents-and-tools/mcp-connector), which allows passing MCP servers directly in API requests like this:
Looking at the Ollama API documentation for chat completions (https://docs.ollama.com/api#generate-a-chat-completion), I'm wondering if Ollama supports a similar mcp_servers parameter in its API requests?
Specifically: is there an equivalent mcp_servers parameter?
Any clarification or examples would be greatly appreciated.
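For readers unfamiliar with the Anthropic feature being referenced: the MCP connector lets a request carry a list of remote MCP servers for the API to call on the model's behalf. Nothing like this exists in the Ollama API today; the snippet below only sketches what an analogous, purely hypothetical mcp_servers field on a chat request could look like.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical request body: Ollama's /api/chat does NOT accept an mcp_servers
// field today; this only illustrates the shape being asked about, loosely
// mirroring Anthropic's MCP connector.
type mcpServerRef struct {
	Type string `json:"type"` // e.g. "url" for a remote Streamable HTTP server
	URL  string `json:"url"`
	Name string `json:"name"`
}

type hypotheticalChatRequest struct {
	Model      string              `json:"model"`
	Messages   []map[string]string `json:"messages"`
	MCPServers []mcpServerRef      `json:"mcp_servers"` // hypothetical field
}

func main() {
	req := hypotheticalChatRequest{
		Model:      "llama3.2",
		Messages:   []map[string]string{{"role": "user", "content": "Search my files for the Q3 report"}},
		MCPServers: []mcpServerRef{{Type: "url", URL: "https://example.com/mcp", Name: "files"}},
	}
	b, _ := json.MarshalIndent(req, "", "  ")
	fmt.Println(string(b))
}
```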
@Code4me2 commented on GitHub (Nov 15, 2025):
Native MCP Integration for Ollama
Following community requests for native MCP support, this implementation provides server-level integration, works with both the CLI and API, and could easily be adapted to the desktop/mobile app.
Architecture: How MCP is Integrated
MCP Discovery and Registration Pipeline:
1. Multi-Source Discovery (mcp_registry.go): ~/.ollama/mcp-servers.json → /etc/ollama/mcp-servers.json → ENV vars (mcp_command_resolver.go)
2. Server Registration Process (mcp_manager.go)
3. Tool Injection into Inference Pipeline: the --tools flag triggers MCPCodeAPI.InjectContextIntoMessages()
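A rough sketch of the multi-source discovery step described just above, in Go. The file names come from the comment, while OLLAMA_MCP_CONFIG and everything else here are assumptions for illustration, not the fork's actual code.

```go
package mcp

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// ServerConfig mirrors one entry in mcp-servers.json as described above.
type ServerConfig struct {
	Command string            `json:"command"`
	Args    []string          `json:"args"`
	Env     map[string]string `json:"env"`
}

// discoverConfig walks the precedence chain described in the comment:
// user config, then system config, then an environment-variable override.
func discoverConfig() (map[string]ServerConfig, error) {
	home, _ := os.UserHomeDir()
	candidates := []string{
		filepath.Join(home, ".ollama", "mcp-servers.json"),
		"/etc/ollama/mcp-servers.json",
	}
	if p := os.Getenv("OLLAMA_MCP_CONFIG"); p != "" { // hypothetical variable name
		candidates = append(candidates, p)
	}
	for _, path := range candidates {
		data, err := os.ReadFile(path)
		if err != nil {
			continue // try the next source
		}
		servers := map[string]ServerConfig{}
		if err := json.Unmarshal(data, &servers); err != nil {
			return nil, err
		}
		return servers, nil
	}
	return map[string]ServerConfig{}, nil
}
```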
Core Strengths
1. Native JSON-RPC Implementation (mcp_client.go)
2. Intelligent Tool Orchestration (mcp_manager.go)
3. Security-First Design (mcp_validator.go + mcp_security_config.go)
4. Model Integration (mcp_code_api.go)
Performance Metrics
Testing with production workloads shows:
Integration Points
CLI Integration:
API Integration:
Environment Variables:
Why This Implementation
Current Status & Next Steps
The implementation is complete and actively used in production environments. We've structured it for incremental merging:
Phase 1 - Core Protocol (Ready):
• mcp_client.go: JSON-RPC transport
• mcp_manager.go: Server lifecycle management
Phase 2 - Discovery & Configuration (Ready):
• mcp_registry.go: Server discovery
• mcp_config.go: Configuration management
• mcp_command_resolver.go: Cross-platform support
Phase 3 - Security & Validation (Ready):
• mcp_validator.go: Input validation
• mcp_security_config.go: Security policies
Phase 4 - Model Integration (Ready):
• mcp_code_api.go: Context injection
Phase 5 - Tool Loop (Ready):
Questions for Maintainers
This implementation brings Claude-like tool capabilities to every Ollama user while maintaining data sovereignty. @ollama ready to begin the PR process based on your feedback. Working alongside @riteshcode9 & @build4me2
@bartolli commented on GitHub (Nov 15, 2025):
Hi there,
I built a small PoC that lets the desktop Ollama app connect to external tools and data sources through the MCP.
Made it for personal use and wanted to check whether the @ollama team would find this valuable. If yes, I will open a PR.
It’s based on Anthropic’s Go SDK and integrates into the app lifecycle.
https://github.com/user-attachments/assets/487bf8e1-ae3a-4e39-ac41-77c113e41caf
Implementation details
Database changes
• Schema migrated to v13 via migrateV12ToV13()
• Added mcp_servers TEXT NOT NULL DEFAULT '{}' column to the settings table
• Defaults to ~/.ollama/mcp.json via defaultMCPConfigPath()
• Migration checks for column existence before running ALTER TABLE
Architecture
• MCP client (app/mcp/client.go) uses stdio transport via exec.Command() to spawn MCP servers
• Each server maintains its own *mcp.ClientSession inside a map[string]
• Tool-adapter pattern: MCPTool implements Ollama’s Tool interface and wraps CallToolResult
Tool registration
• Tools use namespaced pattern mcp__[server_name]__[tool_name] via FormatMCPToolName()
• Dynamic registration at chat time through registerMCPTools() in ui/ui.go
• Registered only when no attachments are present to avoid conflicts with multimodal tools
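The namespacing above is simple enough to sketch; here is a guess at what FormatMCPToolName and a matching parser might look like (the PoC's real helpers may differ):

```go
package mcp

import (
	"fmt"
	"strings"
)

const mcpToolPrefix = "mcp__"

// FormatMCPToolName builds the namespaced tool name described above,
// e.g. ("git", "status") -> "mcp__git__status".
func FormatMCPToolName(server, tool string) string {
	return fmt.Sprintf("%s%s__%s", mcpToolPrefix, server, tool)
}

// ParseMCPToolName splits a namespaced name back into server and tool,
// reporting ok=false for names that are not MCP-namespaced.
func ParseMCPToolName(name string) (server, tool string, ok bool) {
	if !strings.HasPrefix(name, mcpToolPrefix) {
		return "", "", false
	}
	rest := strings.TrimPrefix(name, mcpToolPrefix)
	parts := strings.SplitN(rest, "__", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", false
	}
	return parts[0], parts[1], true
}
```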
API surface
• Added mcpToolLister interface to ui.Server for dependency injection
• Methods: ListTools(), GetServerNames(), CallTool()
• No changes to core server code; all integration stays in the app layer
Content-type handling
• MCP content types (TextContent, ImageContent, AudioContent, ResourceLink, EmbeddedResource) mapped to Ollama’s display format in formatToolResult()
• Structured outputs fall back to JSON marshaling
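A hedged sketch of that content mapping, using a minimal stand-in content type rather than the Go SDK's real ones:

```go
package mcp

import "encoding/json"

// contentItem is a minimal stand-in for the MCP content variants listed above.
type contentItem struct {
	Type string          `json:"type"` // "text", "image", "audio", "resource_link", ...
	Text string          `json:"text,omitempty"`
	Data json.RawMessage `json:"data,omitempty"`
}

// formatToolResult flattens a tool result into display text: text parts pass
// through, everything else (including structured output) falls back to JSON.
func formatToolResult(items []contentItem) string {
	out := ""
	for _, c := range items {
		if c.Type == "text" {
			out += c.Text + "\n"
			continue
		}
		b, _ := json.Marshal(c) // fallback: show the raw structure
		out += string(b) + "\n"
	}
	return out
}
```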
Session lifecycle
• Servers spawn on app init (app.go:81)
• Clean shutdown via Disconnect() on app termination
• Failed server connections are logged but do not block app startup
Configuration
• JSON config supports command, args[], and env{} per server
• Environment variables merge with the parent process env
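Putting the configuration and session-lifecycle bullets together, spawning one configured server over stdio with the env merge described above might look roughly like this (names assumed, not the PoC's code):

```go
package mcp

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

// ServerEntry mirrors one server block of the JSON config described above.
type ServerEntry struct {
	Command string            `json:"command"`
	Args    []string          `json:"args"`
	Env     map[string]string `json:"env"`
}

// spawn starts an MCP server process and returns its stdio pipes, merging the
// configured env{} entries on top of the parent process environment.
func spawn(entry ServerEntry) (*exec.Cmd, io.WriteCloser, io.ReadCloser, error) {
	cmd := exec.Command(entry.Command, entry.Args...)
	cmd.Env = os.Environ() // inherit the parent environment
	for k, v := range entry.Env {
		cmd.Env = append(cmd.Env, fmt.Sprintf("%s=%s", k, v)) // later entries win
	}
	stdin, err := cmd.StdinPipe() // JSON-RPC requests are written here
	if err != nil {
		return nil, nil, nil, err
	}
	stdout, err := cmd.StdoutPipe() // JSON-RPC responses are read from here
	if err != nil {
		return nil, nil, nil, err
	}
	if err := cmd.Start(); err != nil {
		return nil, nil, nil, err
	}
	return cmd, stdin, stdout, nil
}
```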
Testing
• Added unit tests for tool-name formatting/parsing and execution flow
• No integration tests yet; MCP servers are external processes
Impact
• No changes to /server or core inference paths
• Tool calls route through the existing execution pipeline
• Settings UI extended with a file picker for the config path
• TypeScript types regenerated for Settings state
This keeps MCP as an optional feature in the desktop app without affecting the core Ollama server.
@Code4me2 commented on GitHub (Nov 15, 2025):
This is a great UI implementation! The desktop integration looks really clean. Our approaches could actually complement each other well - the server-level integration we've built could provide the backend, while your desktop UI work could enhance the user experience for desktop users. Would love to collaborate once we get maintainer feedback on the best path forward!
@vielhuber commented on GitHub (Nov 15, 2025):
Backward compatibility is not needed in my opinion. Given that this is a relatively new area and still evolving, it seems reasonable to prioritize a clean and coherent MCP-based design over carrying forward legacy behavior. This keeps the implementation simpler, avoids confusing edge cases, and encourages users to adopt the new approach directly instead of relying on transitional shims.
JSON should be used as the default configuration format, in line with what most other tools and ecosystems do. It is widely understood, well supported across languages, and keeps the configuration story straightforward. Other formats could always be added later if there’s a strong need, but starting with JSON as the primary option keeps things consistent and predictable.
It should be enabled by default. Treating MCP as a first-class, always-on capability sends a clear signal that this is the standard way forward rather than an experimental or optional add-on. Users who do not need it can simply ignore the functionality, while those who do benefit from having it available out of the box without extra build-time configuration.
Both should be supported first class. Stdio is a good default for simple, local integrations and aligns well with many existing tools, but gRPC and HTTP are important for more complex or distributed setups. Treating these transports as equally supported options gives users the flexibility to choose what best fits their environment and deployment model, instead of locking them into a single IPC mechanism.
@libreosley-stack commented on GitHub (Dec 5, 2025):
I could really do with this. Is there any way of getting a detailed guide of what to place where and what to add where, so I can implement this please?
Thought I would update this to say a very big thank you: I got this working and am thankful for all the work you put into this.
@Code4me2 commented on GitHub (Dec 7, 2025):
I haven't open sourced the fork yet, I was waiting to hear back from the ollama code maintainers. I will make the fork public soon! Did you find another implementation somewhere else?
@libreosley-stack commented on GitHub (Dec 8, 2025):
No, I copied your post and handed it to one of my guys and asked if they could try it on a local install of Ollama, and in a few hours they came back saying they had got it working just from the information in your post. We had to play with it a bit to get the output as we wanted it, but got it working. Then we added markdown support on the output, which worked. Then we tried adding an embedding AI for RAG and that broke it, so we ended up going with LangChain, but it works 100%, and in the CLI, which was the main area we needed it for.
@Code4me2 commented on GitHub (Dec 8, 2025):
Glad to hear it! I am curious, what broke it specifically? Embeddings? I am completing a rebase on top of the most recent Ollama to submit a PR and make the fork public. I'd be grateful for any issue specifics you could share.
@Code4me2 commented on GitHub (Dec 8, 2025):
Also, I'd note here that I am implementing a better permission system so that autonomous loops can include human-in-the-loop processes for more sensitive MCP tools.
@libreosley-stack commented on GitHub (Dec 8, 2025):
Let's say the issue was an in-house issue; it wouldn't get repeated with anyone else.
@BuyWhere commented on GitHub (Apr 25, 2026):
Jumping in to share a real-world MCP server use case for this thread: BuyWhere MCP — a hosted MCP server for real-time product search across 3.8M+ products from Southeast Asian merchants (SG, MY, PH, ID, TH, VN).
Once Ollama has native MCP support, users would be able to connect local Llama/Mistral/Qwen models directly to live product data:
This kind of use case — local model + remote MCP data source — is exactly what makes MCP valuable for privacy-first users in SEA who want to run local models but still access live e-commerce data.
Smithery listing: smithery.ai/servers/partners/buywhere-mcp | Docs: api.buywhere.ai/docs/guides/mcp