Mirror of https://github.com/ollama/ollama.git (synced 2026-05-07 08:30:05 -05:00)
Closed · 64 comments
Originally created by @vRobM on GitHub (Oct 4, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/703
This means not just loopback, but all other private networks as well. The loopback-only default makes it unusable in containers and in configurations with proxies in front.
@65a commented on GitHub (Oct 5, 2023):
This surprised me because it is not settable by flag (which is where I usually look for stuff like that), but setting OLLAMA_HOST=0.0.0.0 in the environment works for me, and should be easy to include in container stuff like k8s or docker.
@vRobM commented on GitHub (Oct 5, 2023):
It would be nice to have it be a command line argument.
The port can be changed through the same variable, as there doesn't appear to be an OLLAMA_PORT:
export OLLAMA_HOST=0.0.0.0:8080
@jtoy commented on GitHub (Oct 6, 2023):
agree it should be cli option
@byteconcepts commented on GitHub (Oct 23, 2023):
In the /etc/systemd/system/ollama.service file, you may also add
Environment="OLLAMA_HOST=0.0.0.0:8080"
and the ollama system service will listen on all interfaces/IPs, so you can reach it from any machine on the network.
In the console you can reach it, for example, like this:
OLLAMA_HOST="127.0.0.1:8080" ollama list
@jmorganca commented on GitHub (Oct 26, 2023):
Hi @vRobM, this should be configurable with OLLAMA_HOST now. I'll close this issue, but please do re-open it if it's not solved.
@mattbisme commented on GitHub (Dec 21, 2023):
Where do you set Environment when using Ollama.app on macOS?
@NeuralEmpowerment commented on GitHub (Jan 5, 2024):
I'm also curious, as I'm having trouble connecting to Ollama from another front-end on my network, and I haven't been able to get it working with export OLLAMA_HOST=0.0.0.0:8080 or export OLLAMA_HOST=0.0.0.0:11434 🤔
@Ectalite commented on GitHub (Jan 13, 2024):
You have to use launchctl setenv OLLAMA_HOST 0.0.0.0:8080 and restart ollama and the terminal.
https://stackoverflow.com/questions/603785/environment-variables-in-mac-os-x
@AnsenIO commented on GitHub (Feb 18, 2024):
To allow listening on all local interfaces, you can follow these steps:
If you run the server manually, use the OLLAMA_HOST=0.0.0.0 ollama serve command to specify that it should listen on all local interfaces.
Or, for the systemd service, add Environment="OLLAMA_HOST=0.0.0.0" to the service file. Once you've made your changes, reload the daemons using the command sudo systemctl daemon-reload, and then restart the service with sudo systemctl restart ollama.
For a Docker container, add the following to your docker-compose.yml file:
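The compose snippet itself is not reproduced here; a minimal sketch of what such a docker-compose.yml could look like (the service name, volume, and port mapping are illustrative assumptions, not from the original comment):
services:
  ollama:
    image: ollama/ollama
    environment:
      - OLLAMA_HOST=0.0.0.0   # bind to all interfaces inside the container
    ports:
      - "11434:11434"         # publish the API port on the host
    volumes:
      - ollama:/root/.ollama  # persist downloaded models
volumes:
  ollama: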
This will allow the Ollama instance to be accessible on any of the host's network interfaces. Once your container is running, you can check whether it's accessible from other containers or from the host machine using the command curl http://host.docker.internal:11434.
@shamitv commented on GitHub (Feb 25, 2024):
Is there a way to do something similar for Windows?
EDIT: Setting OLLAMA_HOST works on the Windows command line.
Windows will prompt for firewall permission; allow that.
Setting this env var at the system level should work as well.
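As a minimal sketch, in a cmd session that would look something like this (default port assumed):
set OLLAMA_HOST=0.0.0.0
ollama serve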
@mattbisme commented on GitHub (Feb 25, 2024):
If I'm running ollama serve, this works fine. However, is there a way to get the Ollama.app to respect this env variable? The only way I can utilize this is with Terminal running.
@ghost commented on GitHub (Mar 4, 2024):
amazing thanks!!
@iliasch-dev commented on GitHub (Mar 6, 2024):
I did set OLLAMA_HOST=0.0.0.0, but now I cannot access it locally, only remotely.
@Gdesau commented on GitHub (Mar 12, 2024):
I had the same issue, but I'm working on Colab. How can I fix it? Below you can find the error:
@ksylvan commented on GitHub (Mar 18, 2024):
To do the same in Windows Powershell, you can do:
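The command itself is not shown in the mirror; a minimal PowerShell sketch of the same idea (session-only, default port assumed) would be:
$env:OLLAMA_HOST = "0.0.0.0"
ollama serve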
@nickian commented on GitHub (Mar 20, 2024):
FYI, setx OLLAMA_HOST 0.0.0.0 will have Windows remember the variable, so you don't have to launch it from the command line. The ollama.exe app seems to remember the setting fine for the Windows user.
@ksylvan commented on GitHub (Mar 31, 2024):
When you set OLLAMA_HOST=0.0.0.0 in the environment to ensure ollama binds to all interfaces (including the internal WSL network), you need to make sure to reset OLLAMA_HOST appropriately before trying to use any ollama-python calls, otherwise they will fail (both in native Windows and in WSL).
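The failing example did not survive the mirror, but the point can be sketched in a shell like this (the script name app.py is a placeholder):
OLLAMA_HOST=0.0.0.0 ollama serve             # server side: bind to all interfaces
OLLAMA_HOST=127.0.0.1:11434 python app.py    # client side: point the python client at a reachable address, not 0.0.0.0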
The same call with OLLAMA_HOST set to localhost works.
@dillfrescott commented on GitHub (Apr 2, 2024):
How come it's only listening on IPv6? I set OLLAMA_HOST=0.0.0.0:11434.
@piclez commented on GitHub (Apr 2, 2024):
@dillfrescott don't set the port together, only the IP: OLLAMA_HOST=0.0.0.0
@dillfrescott commented on GitHub (Apr 2, 2024):
Gotcha. Thank you.
@min918 commented on GitHub (Apr 10, 2024):
i get the same problem..
@Verfinix commented on GitHub (Apr 10, 2024):
Can anyone advise how to get it working on IPv4?
@dillfrescott commented on GitHub (Apr 10, 2024):
I have no clue. I removed the port from the env variable like @piclez said and it's still only listening on IPv6. And yes, I've even rebooted the machine many times in between then and now.
@AnsenIO commented on GitHub (Apr 12, 2024):
If you are on Linux, it's best to follow the second approach listed below, especially if you reboot the machine: ensure that the service is enabled (to start automatically) and start it with systemctl. If instead you run it manually using ollama serve, then use the first method.
Check if it is enabled and active with systemctl status ollama
@mattbisme commented on GitHub (Apr 13, 2024):
0.0.0.0 is what you would use for IPv4. You would use :: for IPv6. I suspect you have some other network/configuration issues going on that are preventing you from making requests from outside the host. It's also possible that your Ollama installation is not respecting your ENV variable for some reason and is, therefore, defaulting to 127.0.0.1. Or at least that would be my best guess. @Verfinix
@coder903 commented on GitHub (Apr 18, 2024):
[Unit]
Description=Ollama Service
After=network-online.target
[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/home/mike/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sb>
Environment="OLLAMA_HOST=0.0.0.0"
[Install]
WantedBy=default.target
Editing ollama.service by adding this line: Environment="OLLAMA_HOST=0.0.0.0" worked for me. One note: I just upgraded Ollama and the service file was overwritten to its default state, so I had to redo it.
@darkBuddha commented on GitHub (Apr 20, 2024):
Why is OLLAMA_HOST=0.0.0.0 in /etc/environment not working? It should persist across systemd service file updates...
@letsruletheworld commented on GitHub (Apr 21, 2024):
Same problem here. Configured Environment="OLLAMA_HOST=0.0.0.0", but still it only listens on IPv6 instead of IPv4.
@Kmfernan5 commented on GitHub (Apr 23, 2024):
It seems that to permanently set the OLLAMA_HOST environment variable on a Windows system, you can use the setx command. This tool allows you to define environment variables at the system level or for the current user, ensuring that the settings persist across reboots. Here's how you can do it:
Open Command Prompt as Administrator: This step is crucial, as setting system-wide environment variables requires administrative privileges.
Set the Environment Variable for All Users: If you want OLLAMA_HOST to be available to all users on the system, use the following command:
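The command itself is not reproduced in the mirror; based on the setx usage elsewhere in this thread, it is presumably:
setx OLLAMA_HOST "0.0.0.0" /M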
The /M switch specifies that the setting should be applied system-wide.
Set the Environment Variable for the Current User Only: If you only need the environment variable for your user account, omit the /M:
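Again as a sketch, the per-user variant simply drops the flag:
setx OLLAMA_HOST "0.0.0.0"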
After executing the appropriate setx command, you'll need to restart any applications or command prompts that need to access the OLLAMA_HOST variable, as changes made with setx are only recognized in new sessions.
This method ensures that OLLAMA_HOST is set permanently and will survive system reboots, making it available whenever required by the application. Is that about right?
@mobile-appz commented on GitHub (Apr 24, 2024):
How do you set this permanently on MacOS? Shouldn't this be an option in the UI?
OLLAMA_HOST 0.0.0.0
@JOduMonT commented on GitHub (Apr 26, 2024):
macOS is based on UNIX, so like Linux you simply set an environment variable. At the user level it would be in your shell profile; on Mac your default shell is more likely zsh instead of bash, so ~/.zsh_profile instead of ~/.bash_profile.
@mobile-appz commented on GitHub (Apr 27, 2024):
What do you put in the ~/.zsh_profile file? Have you got this to work on MacOS with the .app gui application? Thanks
@jonathanq9 commented on GitHub (May 6, 2024):
I've added the macOS Ollama.app to the "Open at Login" list in Login Items to automatically start at login. To make the Ollama.app listen on "0.0.0.0", I have to close it, run launchctl setenv OLLAMA_HOST "0.0.0.0" in the terminal, and then restart it. However, the OLLAMA_HOST environment variable doesn't persist after a reboot, and I have to set it manually again. How can I automatically set the environment variable OLLAMA_HOST to "0.0.0.0" before the Ollama.app opens at login and have it persist after a reboot?
@AnsenIO commented on GitHub (May 6, 2024):
On macOS, you can set it to auto-launch via the ~/Library folder, either in LaunchAgents or LaunchDaemons.
Here is what Llama3 says about it:
A Mac OS enthusiast!
To set the OLLAMA=0.0.0.0 variable to be loaded before the automatic launch of OLLAMA on system startup, you can follow these steps:
Method 1: Using Launch Agents
Create a file in the ~/Library/LaunchAgents directory using the following command. Replace yourusername with your actual username. 3. Load the Launch Agent using the following command:
Then, add the following line at the end of the file:
This will load the setting every time you log in.
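The commands and plist contents are not reproduced in the mirror; a minimal sketch of a LaunchAgent that sets the variable at login could look like this (the file name ~/Library/LaunchAgents/ollama.host.plist and the label are assumptions, not from the original comment):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>ollama.host</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/launchctl</string>
        <string>setenv</string>
        <string>OLLAMA_HOST</string>
        <string>0.0.0.0</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
It would then be loaded with launchctl load ~/Library/LaunchAgents/ollama.host.plist.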
Method 2: Using launchd configuration files
Create a file in the ~/Library/LaunchDaemons directory using the following command. Replace yourusername with your actual username. 2. Load the Launch Daemon using the following command:
Then, add the following line at the end of the file:
This will load the setting every time your Mac restarts.
Remember to replace yourusername with your actual username in both methods.
I hope this helps! Let me know if you have any further questions.
@jonathanq9 commented on GitHub (May 9, 2024):
@AnsenIO Thanks for the reply. I tried the steps provided, but I couldn't get this to work on my Mac for unknown reasons. After a reboot, I can't connect to the Ollama port 11434. I use launchctl getenv OLLAMA_HOST to check if the environment variable is set, but it isn't set after reboot.
I did a search, and it mentioned that plist files use XML format, so I tried that approach. I also generated the plist XML for launchctl setenv OLLAMA_HOST "0.0.0.0", but I'm not sure if this is correct.
I wish Ollama provided a toggle or supported configuration files to set OLLAMA_HOST=0.0.0.0. This is seeming more complicated than I originally thought. I'll just set the environment variable manually. Not a big deal. I appreciate the help.
@ch0c0l8ra1n commented on GitHub (May 11, 2024):
@ksylvan Could you clarify what you mean by resetting OLLAMA_HOST before trying to use ollama-python calls? My code is currently throwing the "address is not valid in context" error, but I managed to solve it by launching an ollama client with an appropriate host.
@ksylvan commented on GitHub (May 12, 2024):
@ch0c0l8ra1n The ollama-python client code does not like OLLAMA_HOST being set to 0.0.0.0, even if that's what you did to make sure the ollama server binds to all interfaces. You must set OLLAMA_HOST to something like localhost before exercising the python bindings.
@isvicy commented on GitHub (May 12, 2024):
We should not edit the ollama daemon service file directly. What we should do is create an extra config file, like what Docker does in its docs. In summary, you do the following:
The content of the http-host.conf file should be:
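The file contents are not reproduced in the mirror; judging by the drop-in approach described later in this thread, it is presumably a systemd override along these lines (the standard drop-in path is assumed):
/etc/systemd/system/ollama.service.d/http-host.conf:
[Service]
Environment="OLLAMA_HOST=0.0.0.0"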
After all this, you can tell ollama is indeed serving on all interfaces via sudo systemctl status ollama; there will be logs like Listening on [::]:11434
@airtonix commented on GitHub (May 20, 2024):
No need for alarm; this already happens when you run systemctl edit ollama.service
@nuaimat commented on GitHub (May 31, 2024):
Listening on [::]:11434 does not mean all interfaces, it means IPv6 interfaces.
Compare your output to the one from sudo systemctl status sshd.service; the output contains:
which is reflected in:
I believe this is a bug and needs a fix.
@grandoth commented on GitHub (May 31, 2024):
If you're running into this and on WSL, take a look at Issue #1431. It turns out that despite tcp6 0 0 :::11434 being reported when binding to 0.0.0.0, it was still actually bound to eth0 on IPv4 as well (a 17x.x.x.x address for the WSL VM), at least in my case. I could put that eth0 ipaddress:port in my browser and access ollama. I added the suggested firewall rules and port proxy and I can now get to it through my host's IP.
Note: I was able to set Environment="OLLAMA_HOST=0.0.0.0:11434" in the override .conf file (including the port)
@HitLuca commented on GitHub (Jun 6, 2024):
As @nuaimat mentioned, setting OLLAMA_HOST=0.0.0.0 doesn't make ollama serve requests from the network using IPv4.
@nuaimat commented on GitHub (Jun 6, 2024):
@HitLuca a workaround is to disable ipv6 on your machine.
@HitLuca commented on GitHub (Jun 6, 2024):
Good to know, I'll try out on the Google cloud vm
@oskapt commented on GitHub (Jul 7, 2024):
For everyone freaking out that netstat shows tcp6: unless you specify that something should only listen on IPv6, the tcp6 notation includes IPv4 by default. You can verify this with nc -v x.x.x.x 11434, using your IPv4 address for x.x.x.x. All IPv4 addresses exist within the IPv6 address space.
A simple search on Google or SO will show questions about this going back to 2014. Do your homework.
And whoever suggested disabling IPv6 as a workaround is wrong. You don't disable something as a workaround when you don't know why something isn't working. That doesn't fix anything. It only tells the world that you don't know what you're doing and leaves your system in a state that you don't understand.
@lmaddox commented on GitHub (Jul 29, 2024):
I forgot my workstation has a firewall. I'm leaving this here for anyone else who needs a reminder:
sudo iptables -A INPUT -p tcp --dport 11434 -j ACCEPT
@alansenairj commented on GitHub (Aug 14, 2024):
I will relate my experience here.
My Ollama is running as a Linux service on my PC.
Open WebUI is running in a container on my NAS.
As mentioned above, I put one more variable in the service.
If ollama gets an update I will have to add it again, or do some extra config to keep this variable for the service.
One thing causing some confusion is that netstat reports the TCP port as IPv6. I am using Fedora, and ss is used instead of netstat.
Then, to test, I send a request to my local service:
curl http://localhost:11434/api/generate -d '{ "model": "llama2", "prompt": "Tell me a joke." }' | jq .
I also use ollama's logs to check that it is working:
journalctl -u ollama -f
It is working locally and accepting requests on its API.
Then I worked on the Open WebUI frontend. It runs on my NAS, not on my local PC.
I was getting connection errors, and the problem was configuring Docker to use it when Open WebUI was created. It uses a backend to avoid exploitation.
docker run -d -p 777:8080 -e OLLAMA_BASE_URL=http://192.168.129.106:11434 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
--add-host=host.docker.internal:host-gateway: This option adds a new host entry (host.docker.internal) that points to the gateway IP address (host-gateway). This allows the container to access the host machine by its hostname (e.g., host.docker.internal) instead of its IP address.
This is my PC with the graphics card running the ollama service, OLLAMA_BASE_URL=http://192.168.129.106:11434
As you can see, it is processing using my GPU.
@ThatCoffeeGuy commented on GitHub (Aug 17, 2024):
I did this months ago; today I updated ollama and wasted half an hour troubleshooting this - it seems the update simply rewrote my systemd file.
@ThatCoffeeGuy commented on GitHub (Aug 31, 2024):
Today I upgraded to 0.3.8 and, once again, it wiped the Environment="OLLAMA_HOST=0.0.0.0" variable from the systemd file. Please make sure the script respects already-defined parameters or gives an interactive way to handle it (Overwrite Y/N?).
@mdlmarkham commented on GitHub (Sep 2, 2024):
Same here - it would be great if these settings weren't overwritten.
@nuaimat commented on GitHub (Sep 2, 2024):
@ThatCoffeeGuy @mdlmarkham there's a problem with your approach; do the following:
sudo systemctl edit ollama.service
This will result in an override systemd file that will survive across ollama upgrades.
Don't ever manually edit the original systemd file.
@liudonghua123 commented on GitHub (Sep 13, 2024):
I added the Environment="OLLAMA_HOST=0.0.0.0" line to /etc/systemd/system/ollama.service and reloaded the systemd configuration, and then it listened on all the network interfaces.
Also notice that the HOME environment is updated in the daemon of ollama.
@LiMingchen159 commented on GitHub (Sep 19, 2024):
It seems that your ollama service is only listening on the IPv6 port? Can you use your IPv4 IP and port to reach the ollama service?
@liudonghua123 commented on GitHub (Sep 19, 2024):
Even though netstat only shows IPv6 listening info, IPv4 actually also works for me.
@oskapt commented on GitHub (Sep 30, 2024):
all IPv4 space fits within IPv6 space, so if you have IPv6 enabled on your system, it will list tcp6 ports for everything that's listening.
@ajfriesen commented on GitHub (Nov 11, 2024):
I ran into the netstat confusion twice as well.
The first time, I wrote it down in a blog post.
The second time, I googled it years later and found my own blog post.
TLDR:
Source:
https://www.ajfriesen.com/netstat-shows-tcp6-on-ipv4-only-host/
@VishwaS-22 commented on GitHub (Dec 14, 2024):
I'm using Ubuntu on EC2. I tried adding the env for 0.0.0.0, but only IPv6 is opened, and I wasn't able to send a request from Postman on my local machine.
@bonyiii commented on GitHub (Dec 31, 2024):
On my machine, it began functioning properly once I opened port 11434 in the firewall.
@Verizane commented on GitHub (Jan 11, 2025):
In case someone gets here and asks themselves how to make ollama serve to the network when starting from the terminal without using a service on Linux/Debian: in my case, simply setting OLLAMA_HOST via
did not work. I had to set it this way:
Edit: in another attempt this did not work, so I had to try this and it worked:
@Ghania-Sarwar commented on GitHub (Jan 31, 2025):
I am trying to link my app and the ollama model using its prebuilt Docker image, but I am having this error. Locally my app is linked with ollama and everything works fine, but in Docker this issue persists.
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe4c0193310>: Failed to establish a new connection: [Errno 111] Connection refused'))
Traceback:
File "/app/locallama.py", line 57, in
summary = chain.invoke({'issues': issues_text}).strip()
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3024, in invoke
input = context.run(step.invoke, input, config)
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 390, in invoke
self.generate_prompt(
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 755, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 950, in generate
output = self._generate_helper(
@xelemorf commented on GitHub (May 24, 2025):
The above settings were good, but they would run Ollama as a process under the command prompt. To still use the desktop app, the following worked for me on Windows:
setx OLLAMA_HOST "0.0.0.0" /M
@meraklimaymun commented on GitHub (May 27, 2025):
I'm using Open WebUI with Docker on a Raspberry Pi 5, and I somehow couldn't find where the docker-compose.yml file is located.
@AnsenIO commented on GitHub (Jul 13, 2025):
How did you start Open WebUI? I guess you did a git clone of the Open WebUI Docker repo; inside it there is the docker compose file, which you can tweak.
@meghuizen commented on GitHub (Jul 21, 2025):
If you're using Ubuntu / systemd and want to keep the changes when upgrading ollama, create an override file, like the following, with the example content:
File location: /etc/systemd/system/ollama.service.d/listen-all.conf
File content:
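The example content is cut off in the mirror; based on the drop-in overrides described throughout this thread, it would presumably be:
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
followed by sudo systemctl daemon-reload and sudo systemctl restart ollama to apply it.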