mirror of https://github.com/ollama/ollama.git, synced 2026-05-06 16:11:34 -05:00
Open · opened 2026-05-03 16:30:37 -05:00 by GiteaMirror · 61 comments
Reference: github-starred/ollama#64196
Originally created by @jsrcode on GitHub (Apr 5, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3504
What is the issue?
C:\Users\18164>ollama run qwen:0.5b
pulling manifest
Error: pull model manifest: Get "https://ollama.com/token?nonce=pa9U-g8eXWKfTiK3NN_FdQ&scope=repository%!A(MISSING)library%!F(MISSING)qwen%!A(MISSING)pull&service=ollama.com&ts=1712324131": net/http: TLS handshake timeout
What did you expect to see?
Pull the model
Steps to reproduce
Pull the model
Are there any recent changes that introduced the issue?
No
OS
Windows
Architecture
x86
Platform
Docker
Ollama version
0.1.30
GPU
Intel
GPU info
No response
CPU
Intel
Other software
No response
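The "TLS handshake timeout" in the error above can be reproduced and narrowed down independently of ollama. The sketch below (a hypothetical `check_tls_handshake` helper, not part of ollama) mimics the failure mode behind Go's "net/http: TLS handshake timeout" by attempting a TLS negotiation with a bounded timeout:

```python
import socket
import ssl

def check_tls_handshake(host: str, port: int = 443, timeout: float = 5.0) -> str:
    """Attempt a TLS handshake against host:port and describe the failure mode.

    Hypothetical diagnostic helper, not part of ollama.
    """
    try:
        # TCP connect, then TLS negotiation; both are bounded by `timeout`.
        with socket.create_connection((host, port), timeout=timeout) as raw:
            raw.settimeout(timeout)
            ctx = ssl.create_default_context()
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                return f"ok: {tls.version()}"
    except socket.timeout:
        # The ClientHello went unanswered within the timeout, the same
        # symptom Go reports as "net/http: TLS handshake timeout".
        return "TLS handshake timeout"
    except ssl.SSLError as exc:
        return f"TLS error: {exc}"
    except OSError as exc:
        return f"connection error: {exc}"
```

Running `check_tls_handshake("ollama.com")` from the affected machine distinguishes a DNS or connect failure from a stalled handshake; a stalled handshake points at a middlebox, proxy, or geo-filter between the client and the registry rather than at ollama itself.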
@jsrcode commented on GitHub (Apr 5, 2024):
C:\Users\18164>ollama pull llama2
pulling manifest
Error: pull model manifest: Get "https://ollama.com/token?nonce=-dL8dGX7EOvm7PlquSf5lw&scope=repository%!A(MISSING)library%!F(MISSING)llama2%!A(MISSING)pull&service=ollama.com&ts=1712326755": net/http: TLS handshake timeout
@jsrcode commented on GitHub (Apr 5, 2024):
This is true for all models
@igorschlum commented on GitHub (Apr 5, 2024):
Hi @jsrcode
I will try to help you. There is likely an issue with your network configuration, as `ollama pull llama2` works for the rest of us and no similar problem has been reported here.
The error message you're encountering, Error: pull model manifest: Get "https://ollama.com/token?...": net/http: TLS handshake timeout, suggests a problem with establishing a secure connection to the server. This could be due to several reasons, including network issues, firewall restrictions, or problems with SSL certificates. Here are some steps to troubleshoot and potentially resolve the issue:
1 - Check Network Connection: Ensure your internet connection is stable and fast enough. A slow or unstable connection can cause timeouts during the TLS handshake process.
2 - Firewall or Proxy Settings: If you're behind a firewall or using a proxy, it might be blocking or interfering with the connection. Try disabling the firewall temporarily or configuring it to allow connections to ollama.com. If you're using a proxy, ensure it's correctly configured in your environment variables or Ollama's configuration.
3 - SSL Certificate Issues: The error could be related to SSL certificate issues, such as a self-signed certificate. If you're in a controlled environment where you can trust the certificate, you might consider using the --insecure flag with the ollama pull command to bypass SSL certificate verification. However, be cautious with this approach as it can expose you to security risks.
4 - Environment Variables for Proxy: If you're using a proxy, ensure that the HTTPS_PROXY environment variable is correctly set to point to your proxy server. This is crucial for applications that need to connect to the internet through a proxy.
5 - Restart the Ollama Service: Sometimes, simply restarting the Ollama service can resolve transient issues. Use the appropriate command for your operating system to restart the service.
6 - Manual Pull Attempts: As a workaround, you can try pulling the model multiple times in quick succession. This approach has been reported to sometimes bypass the issue, especially if it's related to temporary network glitches or server-side issues.
7 - Can you try from another network? Can you share your network configuration, so we can see whether you are behind a company network, a university network, or a home provider network? If your network is managed by an in-house administrator, you can ask them to help you.
Remember, when dealing with network issues or SSL certificates, always ensure you're following best practices for security and privacy.
Let us know here if you find a solution, so that Ollama could display a better-documented error message if possible.
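For the proxy step above, it can help to verify what proxy settings a client would actually pick up from the environment. This illustrative sketch uses Python's standard library; Go's net/http (which ollama uses) consults the same `HTTPS_PROXY`/`https_proxy` variables. The proxy address here is hypothetical:

```python
import os
import urllib.request

# Point HTTPS traffic at a proxy (hypothetical address; replace with yours).
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:3128"

# getproxies() reads the same *_PROXY environment variables that most
# tools consult, so it shows what a client process would inherit.
proxies = urllib.request.getproxies()
print(proxies.get("https"))  # → http://proxy.example.com:3128
```

If this prints nothing on the affected machine even though a proxy is required, the ollama service process likely is not inheriting the variable (e.g. it was set in a shell but not in the service's environment).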
@ajwillia69 commented on GitHub (Apr 6, 2024):
PS C:\WINDOWS\system32> ollama run llama2
Error: error loading model C:\Users\ajwil.ollama\models\blobs\sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
PS C:\WINDOWS\system32>
Same problem here: I ran one model, then pulled another model, and now it won't run any model.
@igorschlum commented on GitHub (Apr 6, 2024):
@ajwillia69 I think that the issue you face is different as it's not a network issue, but rather a memory issue or a model naming issue. Could you post a new issue and try with tiny models? (search tiny in the list of models).
@jsrcode commented on GitHub (Apr 6, 2024):
There is nothing wrong with the firewall, and ollama.com is accessible normally in the browser, but I get this error when pulling the model.
@xiehongxin commented on GitHub (Apr 7, 2024):
maybe you can try
ping ollama.com to check your network
@igorschlum commented on GitHub (Apr 11, 2024):
@jsrcode did you try from another location? You did not answer whether you are at home or at a university. Did you try with the new version 0.1.31?
@Seedmanc commented on GitHub (Apr 12, 2024):
Same here. Clearly "Works for us" is not acceptable here.
@igorschlum commented on GitHub (Apr 12, 2024):
@Seedmanc It works for hundreds of users, so we have to find out what the issue is in some particular configurations. One solution would be to be able to download the model manually and install it manually in the directory.
Another solution would be to have replicas of the servers, to be able to download models from other parts of the world.
In which country are you?
@Seedmanc commented on GitHub (Apr 12, 2024):
I'm not alone, as indicated by the opener of this issue and here's another link: https://forums.docker.com/t/docker-ollama-error-pull-model-manifest-get-https-registry-ollama-ai-v2-library-llama2-manifests-latest/140256/2
Russia. I expected this question, but no, VPN doesn't help. Tried several of them.
So far only thing that worked is installing Ollama on Colab, pulling the models there and then downloading them from Colab and putting manually in folder on my system. This is a terrible amount of hoops to go through just to get started.
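The manual route described here depends on ollama's on-disk layout. Judging from the blob path quoted earlier in this thread (`...models\blobs\sha256-...`), blobs appear to be stored content-addressed by their SHA-256 digest. Below is a hedged sketch of placing a downloaded blob under that assumed layout; `place_blob` is a hypothetical helper, and the model's manifest files are still needed for ollama to recognize the model:

```python
import hashlib
import shutil
from pathlib import Path

def place_blob(downloaded: Path, models_dir: Path) -> Path:
    """Copy a downloaded blob into <models_dir>/blobs/sha256-<digest>.

    Assumption: ollama stores blobs content-addressed by SHA-256,
    as the error paths quoted in this thread suggest.
    """
    digest = hashlib.sha256(downloaded.read_bytes()).hexdigest()
    dest = models_dir / "blobs" / f"sha256-{digest}"
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(downloaded, dest)
    return dest
```

Computing the digest locally also lets you verify the download against the digest embedded in the blob URL before placing it.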
@igorschlum commented on GitHub (Apr 12, 2024):
OK, there is an issue when downloading from certain countries because of proxies or restrictions. The message should at least be clearer. I hope the Ollama team can take this point and offer a manual download of models.
@phpadminer commented on GitHub (Apr 22, 2024):
This may require a VPN, and the command line must go through it too; even with the VPN on, it may still fail. Just try a few more times; at least that's how it worked for me.
@Seedmanc commented on GitHub (May 2, 2024):
So I've tried a lot of different things, and some of them must have worked, because it does pull models now. I can't redo them one by one to tell for sure which, but I can list a few.
Perhaps some of you with better knowledge might pick the action from this list that most likely fixed the issue.
On an unrelated note, I have a similar TLS problem with a Unity game that tries to access Google Docs on launch and hangs when it fails due to "Curl error 60: Cert verify failed: UNITYTLS_X509VERIFY_FLAG_USER_ERROR1". I was hoping the problem was the same one I had here, and that now that it's fixed, the game would also work. It didn't.
@bmizerany commented on GitHub (May 9, 2024):
Hello, Everyone!
At Ollama we're working on a solution to this issue, and have been seeing some positive results!
Now we need your help testing in your environments as well!
How to help:
Run a test pull through our staging server
From the list below, pick one (or many) of the models that you have not pulled already, and perform a pull.
Remove and retry 2 or 3 more times
Report back!
Please respond here answering these questions to the best of your ability:
- Which ollama pull command did you run, including the model?
- What download speeds did you see (e.g. 30-50 MB/s)?
- Did you retry ollama pull <model> for the same model(s)?
Thank you all so much in advance. We look forward to hearing back from you.
@Alchemistqqqq commented on GitHub (May 10, 2024):
Hello, let me first explain my environment: I am in China, using a Linux Ubuntu server on a campus network. This causes many one-click operations to fail for network reasons. Following the manual installation tutorial provided, I have installed ollama on the server and can ping the ollama website. But as the picture above shows, I need qwen to fix my problem, and both the run and the pull operation you provided fail. This problem has been bothering me for a long time. I hope you can provide a detailed manual installation tutorial, because my own host can use a VPN, so I now think a better solution is to download the corresponding file on my local machine and drag and drop it into the corresponding location on the server, to achieve the same effect as the run command.
@igorschlum commented on GitHub (May 10, 2024):
Hi, I tried with both very bad and good internet connections. With a good connection it's fast, and with a poor connection it no longer drops as it was doing before. When the connection was halted, Ollama said that the connection dropped.
So for me, it's all good.
pulling manifest
pulling 377876be20ba... 36% ▕█████████████ ▏ 841 MB/2.3 GB 2.4 MB/s 10m17s
Error: max retries exceeded: Get "https://issue1736.ollama.dev/v2/library/llava-phi3/blobs/sha256:377876be20bac24488716c04824ab3a6978900679b40013b0d2585004555e658": read tcp 192.168.1.80:50744->66.241.124.100:443: read: connection reset by peer
@pj-connect commented on GitHub (May 16, 2024):
Same issue here.
Is there no debug or verbose option? Even on Discord, some people say it works for them, as proof that there is no issue at all. I very rarely get a TLS handshake timeout elsewhere, but I get it consistently with ollama.com.
Using my VPN, I selected a US server, and now the manifest, and then the model, is downloading. So this seems to be strictly a geolocation issue.
@saymanq commented on GitHub (May 16, 2024):
I was getting the TLS handshake timeout, but when I used a VPN and changed my server to the United States it started working as expected just as someone else here has also pointed out. It seems to be a geo restriction problem.
@sunnyisabaster commented on GitHub (May 20, 2024):
I changed the rule to "global" in my VPN; that solved it.
@igorschlum commented on GitHub (May 20, 2024):
@jsrcode is the issue solved on your side with the latest version of Ollama and the VPN settings explained by sunnyisabaster?
@taobiaoli1314 commented on GitHub (May 21, 2024):
The same error. I tried everything, both the ollama version and the VPN, and they all failed.
@taobiaoli1314 commented on GitHub (May 21, 2024):
I met the same error:
pulling manifest
Error: Head "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/6a/6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20240521%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240521T063315Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=0b9c67a70a1f8baba2a93135998888e394f987bf417a3b126995e05ed27b60ea": net/http: TLS handshake timeout
@pj-connect commented on GitHub (May 21, 2024):
Simply configure your local VPN to use a server in the USA, so that your internet traffic appears to originate from the United States. This masks your actual location and gives you an American IP address, making it seem as if your online activity takes place within the United States.
For instance, the same technique is used to access region-locked content on streaming services like Netflix, and here, the ollama registry. By connecting through a U.S. server, international users can bypass these geographical restrictions.
@smartexpert commented on GitHub (May 22, 2024):
I'm running Docker on Linux and encountered a certificate verification error. I was able to solve it by adding the custom certificate and building a new Docker image based on the docs here.
@quanta-guy commented on GitHub (Jun 6, 2024):
Try Using Alternative DNS Servers: You can try changing your DNS servers to Google's public DNS (8.8.8.8 and 8.8.4.4) or Cloudflare's DNS (1.1.1.1). This worked for me
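Whether the registry hostname resolves at all can be checked independently of ollama; this sketch uses the system resolver via `socket.getaddrinfo` (the `resolves` helper is hypothetical), so it reflects whatever DNS servers are currently configured:

```python
import socket

def resolves(hostname: str) -> list[str]:
    """Return the addresses the system resolver yields, or [] on failure."""
    try:
        infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
        # De-duplicate addresses while preserving order.
        return list(dict.fromkeys(info[4][0] for info in infos))
    except socket.gaierror:
        return []  # matches the "no such host" failures reported in this thread
```

An empty result for `resolves("ollama.com")` or for the cloudflarestorage.com blob host would point at DNS, in which case switching resolvers as suggested above is worth a try; a non-empty result shifts suspicion to the connection or handshake stage.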

@weipengzou commented on GitHub (Jun 6, 2024):
I set my VPN to "TUN mode" and ran
ollama pull gemma:2b
That worked for me.
@chris-at-work commented on GitHub (Jun 13, 2024):
Try downgrading:
This solved "connection reset" errors on any and all models for me. ⚠️ This will reinstall Ollama, meaning any edits to systemd files, or anything else the installer does, will be lost.
I don't know the connection between the CLI and the servers, but maybe ollama is rearchitecting?
@biandan commented on GitHub (Jul 14, 2024):
The new version still does not work on Windows and WSL Linux; ollama version is 0.2.5.
@shuurik commented on GitHub (Jul 14, 2024):
pulling manifest
Error: Head "
6a0746a1ec/data!F(MISSING)20240714%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240714T140258Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=1383414da685fbce952259fd7c69196641bed1f9a5cf49661feb8c675a45c9bc": dial tcp 104.18.9.90:443: i/o timeout
@luckydevil13 commented on GitHub (Jul 29, 2024):
same here
pulling manifest
Error: Head "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/e1/e16120252a9b0e49ed8074d11838d8b0227957a09d749d18425e491243e13822/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20240729%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240729T052444Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=34aeb1c147c02d5cd23f9d6fbf05318f6e77a738c83e55cec1d9f9c234a4afac": dial tcp 188.114.98.224:443: i/o timeout
@igorschlum commented on GitHub (Aug 10, 2024):
@jsrcode and others who still face this issue: could you try with version 0.3.4 of Ollama?
Are you able to download GGUF files from HuggingFace?
Thank you for the update.
@xiehongxin commented on GitHub (Aug 25, 2024):
Hello, I have received your email and will reply as soon as possible after reading it. Thank you, and best wishes!
@PrashantSakre commented on GitHub (Jan 17, 2025):
Hi Need help here too,
I am getting the below error. Also, 'ollama show codellama' doesn't work (Error: model 'codellama:7b' not found).
pulling manifest
pulling 3a43f93b78ec... 0% ▕ ▏ 0 B/3.8 GB
Error: max retries exceeded: Get "
3a43f93b78/data": dial tcp: lookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com: no such host
@xiehongxin commented on GitHub (Jan 17, 2025):
Hello, I have received your email and will reply as soon as possible after reading it. Thank you, and best wishes!
@erix22 commented on GitHub (Jan 24, 2025):
Hi,
I did install a new PC yesterday and just installed Ollama this morning.
and I cannot pull DeepSeek-R1; I obtain
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/96/96c415656d377a..............
net/http: TLS handshake timeout all the time...
I started on Wi-Fi and am now on an ethernet cable, but it's still the same...
Is there something going on today? Cloudflare?
Thanks in advance
@erix22 commented on GitHub (Jan 24, 2025):
Well, here is how I solved the problem, or how it disappeared...
I copied the URL from my previous post, from when I was trying to pull DeepSeek-R1, and pasted it into my browser.
It asked me where I wanted to save the "thing"
(because I do not know what it is at the moment)
and... it worked without a single problem.
So, I am not an expert in Linux, networking, or Ollama, but I am sure there are a lot of experts who will read this message, and I really hope one of them will come and tell me what was wrong.
for the record:
Memory:
System RAM: total: 64 GiB available: 58.73 GiB used: 2.79 GiB (4.7%)
Message: For most reliable report, use superuser + dmidecode.
Array-1: capacity: 96 GiB note: est. slots: 2 modules: 2 EC: None
max-module-size: 48 GiB note: est.
Device-1: Channel-A DIMM 0 type: DDR5 detail: synchronous unbuffered
(unregistered) size: 16 GiB speed: 5600 MT/s volts: note: check curr: 1
min: 1 max: 1 width (bits): data: 64 total: 64 manufacturer: Crucial
part-no: CT16G56C46S5.C8D serial: E94C903B
Device-2: Channel-B DIMM 0 type: DDR5 detail: synchronous unbuffered
(unregistered) size: 48 GiB speed: 5600 MT/s volts: note: check curr: 1
min: 1 max: 1 width (bits): data: 64 total: 64
manufacturer: Micron Technology part-no: CT48G56C46S5.M16B1
serial: EB029679
System:
Host: geeka8 Kernel: 6.8.0-51-generic arch: x86_64 bits: 64
Desktop: Xfce v: 4.18.1 Distro: Linux Mint 22 Wilma
Graphics:
Device-1: AMD Phoenix3 driver: amdgpu v: kernel
Display: x11 server: X.Org v: 21.1.11 with: Xwayland v: 23.2.6 driver: X:
loaded: amdgpu unloaded: fbdev,modesetting,vesa dri: radeonsi gpu: amdgpu
resolution: 1920x1080~60Hz
API: EGL v: 1.5 drivers: radeonsi,swrast platforms: x11,surfaceless,device
API: OpenGL v: 4.6 compat-v: 4.5 vendor: amd mesa v: 24.0.9-0ubuntu0.3
renderer: AMD Radeon Graphics (radeonsi gfx1103_r1 LLVM 17.0.6 DRM 3.57
6.8.0-51-generic)
Browser: Firefox Mint flavored Mint-001-1.0 134.0.2 (64-bit)
ollama version is 0.5.7
PING ollama.com (34.36.133.15) 56(84) bytes of data.
64 bytes from 15.133.36.34.bc.googleusercontent.com (34.36.133.15): icmp_seq=1 ttl=112 time=1173 ms
64 bytes from 15.133.36.34.bc.googleusercontent.com (34.36.133.15): icmp_seq=2 ttl=112 time=1131 ms
--- ollama.com ping statistics ---
10 packets transmitted, 9 received, 10% packet loss, time 10559ms
rtt min/avg/max/mdev = 824.670/1191.051/1523.466/215.513 ms, pipe 2
geeka8:# nslookup ollama.com
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: ollama.com
Address: 34.36.133.15
geeka8:# dig ollama.com
; <<>> DiG 9.18.30-0ubuntu0.24.04.1-Ubuntu <<>> ollama.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47908
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;ollama.com. IN A
;; ANSWER SECTION:
ollama.com. 166 IN A 34.36.133.15
;; Query time: 0 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Fri Jan 24 16:55:50 CET 2025
;; MSG SIZE rcvd: 55
If I may ask: which packages are supposed to be installed before the
installation of Ollama?
And now, where and how do I move the "thing" to make it usable by my local
instance of Ollama? It is currently in my Downloads directory; where should
I move it, what name should I give it, etc.?
I cannot even pull "deepseek-r1:1.5b".
My location is France and I am not using a VPN.
I have the same problem with another PC, on another network, with another ISP...
any ideas ???
@rick-github commented on GitHub (Jan 24, 2025):
https://github.com/ollama/ollama/issues/8535#issuecomment-2613241807
@erix22 commented on GitHub (Jan 25, 2025):
Thank you @rick-github
I also noticed some strange access rights on the Ollama directory (GID)
thank you for your help indeed
@martinwozenilek commented on GitHub (Jan 25, 2025):
Same problem here with a Jetson Orin: I can't pull any model with ollama (TLS timeout). In the end I downloaded the models manually and made some adjustments to the filenames and access rights. I used this for the download:
https://github.com/amirrezaDev1378/ollama-model-direct-download
But of course the PowerShell script will work just as well.
After the download I had to correct the filenames from just "data" to "sha256-92348238...". The filenames need a dash after "sha256", not the ":" used in the manifest file.
After a first ollama run the manifest file gets rewritten and everything is good to go!
(Screenshots of the blob directory and of the rewritten manifest were attached to the original comment but are not reproduced in this mirror.)
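The rename step above can be scripted. A minimal sketch, assuming the downloaded blobs were saved under the manifest-style `sha256:<hex>` names; the path in the usage comment is the default for a Linux install and may differ on your setup:

```shell
# rename_blobs DIR: rename downloaded blobs from the manifest's
# "sha256:<hex>" form to the "sha256-<hex>" form expected on disk.
rename_blobs() {
  local dir="$1" f
  for f in "$dir"/sha256:*; do
    [ -e "$f" ] || continue          # glob matched nothing; skip
    mv "$f" "${f/sha256:/sha256-}"   # swap the ":" for a "-"
  done
}

# Usage (default Linux path; adjust for your install):
# rename_blobs /usr/share/ollama/.ollama/models/blobs
```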
@Mrahmani71 commented on GitHub (Jan 27, 2025):
Why is Ollama like this? I've been trying to pull a model for about 2 hours. It downloads about 1 GB and then drops back down to 400 MB. 🤦‍♂️😢
@rick-github commented on GitHub (Jan 27, 2025):
There are problems connecting to the Cloudflare CDN. Use one of the workarounds linked in this post to download the model.
@CasCard commented on GitHub (Jan 27, 2025):
Try this for Windows
From your ipconfig, your active interface is "Wi-Fi" (not Ethernet). Let's set Google DNS:
Command Prompt (Admin):
PowerShell (Admin):
Check if the Cloudflare domain resolves:
Success: You’ll see an IP address like 172.64.xxx.xxx.
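(The concrete commands were dropped from this mirror of the comment. A hedged reconstruction, assuming the interface really is named "Wi-Fi" as in the ipconfig output and that Google DNS 8.8.8.8/8.8.4.4 is the chosen resolver:)

```shell
# PowerShell (Admin). "Wi-Fi" is the interface alias from ipconfig;
# the DNS servers here (Google) are one possible choice.
Set-DnsClientServerAddress -InterfaceAlias "Wi-Fi" -ServerAddresses ("8.8.8.8","8.8.4.4")

# Flush the local DNS cache, then check that the Cloudflare domain resolves:
Clear-DnsClientCache
nslookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com
```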
@Mrahmani71 commented on GitHub (Jan 29, 2025):
I could download the models only when I used a VPN with an American server.
@heshi2019 commented on GitHub (Feb 3, 2025):
This doesn't seem to be an isolated issue. From about a month ago until now, many people have been experiencing the problem of not being able to pull models. In my experience, switching networks, flushing DNS, toggling a VPN on or off, restarting the Ollama service, or opening the corresponding URL in a browser before pulling again can occasionally help. Another symptom is that the speed drops from 6 MB/s to 100 KB/s in the final stage of the pull.
@5UFKEFU commented on GitHub (Feb 7, 2025):
I ran into it too. My internet isn't that bad, but the pull just restarts over and over and stays between 1% and 2%. I had to upload the model file manually.
@JiangZhigz5055 commented on GitHub (Feb 13, 2025):
Could it be a server problem? I have been hitting this error for a week. My laptop is a MacBook Pro M3 Max with 64 GB RAM, and I installed Ollama from ollama.com. Can anybody help? Thanks.
"xxxx@MacBook-Pro-M3max ~ % ollama run deepseek-r1:70b
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/deepseek-r1/manifests/70b": read tcp [2408:846a:10:608d:c564:516d:c33c:5ca8]:49232->[2606:4700:3036::6815:4be3]:443: read: connection reset by peer"
@CRASH-Tech commented on GitHub (Feb 13, 2025):
+1
@VanemKrAu commented on GitHub (Feb 22, 2025):
There is a file that cannot be downloaded for some reason, and below are the error codes:
PS C:\Users\Vanem> ollama run hf.co/bartowski/gemma-2-9b-it-abliterated-GGUF:Q4_K_M
pulling manifest
pulling 88d84ac97967... 100% ▕████████████████████████████████████████████████████████▏ 5.8 GB
pulling e0a42594d802... 0% ▕ ▏ 0 B/ 358 B
Error: max retries exceeded: Get "https://huggingface.co/v2/bartowski/gemma-2-9b-it-abliterated-GGUF/blobs/sha256:e0a42594d802e5d31cdc786deb4823edb8adff66094d49de8fffe976d753e348?__sign=eyJhbGciOiJFZERTQSJ9.eyJyZWFkIjp0cnVlLCJwZXJtaXNzaW9ucyI6eyJyZXBvLmNvbnRlbnQucmVhZCI6dHJ1ZX0sImlhdCI6MTc0MDIzMjcwMSwic3ViIjoiL2JhcnRvd3NraS9nZW1tYS0yLTliLWl0LWFibGl0ZXJhdGVkLUdHVUYiLCJleHAiOjE3NDAyMzMzMDEsImlzcyI6Imh0dHBzOi8vaHVnZ2luZ2ZhY2UuY28ifQ.EajXgKPztGGet2vc4EvabvsiDLdikeUaJZNqtavrkIt4rwNnX_Glhr8K7pZNwhF-DTDTygmyD5xOIaySf0HkAQ": dial tcp 157.240.20.18:443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
@VanemKrAu commented on GitHub (Feb 22, 2025):
I finally solved this damn problem: I routed Hugging Face through an acceleration service (the one built into the Steam accelerator), and now I can pull. This is so insane.
@kewlcode commented on GitHub (Mar 14, 2025):
Still getting a domain lookup error:
dial tcp: lookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com: no such host
@xiehongxin commented on GitHub (Mar 14, 2025):
Hello, I have received your email and will reply as soon as possible after reading it. Thank you, and best wishes!
@rick-github commented on GitHub (Mar 14, 2025):
Follow these instructions: https://github.com/ollama/ollama/issues/8605#issuecomment-2639100703
@kewlcode commented on GitHub (Mar 14, 2025):
Editing the hosts file resolved the issue, but I hope someone fixes this long-standing problem. If the server IP addresses change, I will have to edit the hosts file manually again. :-(
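For anyone repeating this workaround on Linux, a minimal sketch: resolve the CDN host through a public resolver (bypassing the failing local DNS), then pin the result in /etc/hosts. The IP below is a documentation placeholder, not a real address, and the entry must be removed or updated if the CDN's addresses change:

```shell
# Ask a public resolver directly for the CDN host:
nslookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com 8.8.8.8

# Pin one of the returned addresses. "203.0.113.10" is a placeholder --
# substitute whatever your resolver actually returned:
echo '203.0.113.10 dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com' \
  | sudo tee -a /etc/hosts
```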
@pragnesh-singh-rajput commented on GitHub (Mar 26, 2025):
This worked for me...
@adityanema2004 commented on GitHub (May 20, 2025):
It Worked , Thanks!!
@tianlichunhong commented on GitHub (Jun 9, 2025):
On Windows, if I set the system proxy, ollama pull doesn't work. If I set https_proxy/all_proxy in the system environment, the Ollama service on port 11434 also goes through the proxy. So I think there is a bug in Ollama: the service it provides must not go through the proxy. If a proxy is required, there should be a dedicated ollama_proxy setting rather than the system environment proxy; only pulls need to use the proxy.
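Until something like that exists, one workaround is to set the proxy only in the environment the Ollama server starts in, not system-wide. A PowerShell sketch, assuming a hypothetical local proxy at 127.0.0.1:7890; the model pull itself is performed by the server process, so that is where the variable has to live:

```shell
# PowerShell: $env: scopes the variable to this session only, so the
# proxy applies to the server started here and nothing else.
$env:HTTPS_PROXY = "http://127.0.0.1:7890"   # hypothetical proxy address
ollama serve
# In another terminal (no proxy variable needed there):
#   ollama pull deepseek-r1:1.5b
```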
@sameepvicky commented on GitHub (Jun 29, 2025):
@Potracheno commented on GitHub (Jul 15, 2025):
The same here. It seems the problem is with Cloudflare, which doesn't treat all countries equally. With a US VPN it works.