Mirror of https://github.com/ollama/ollama.git, synced 2026-05-07 00:22:43 -05:00
Closed · opened 2026-04-12 16:51:26 -05:00 by GiteaMirror · 90 comments
Originally created by @arjunivor on GitHub (Jan 28, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8632
Originally assigned to: @bmizerany on GitHub.
What is the issue?
While trying to run `ollama run deepseek-r1:7b`, it repeatedly fails at 6%. I tried to run Llama 3.2 and it downloaded flawlessly, but every time I try to run DeepSeek I get an error saying `Error: max retries exceeded: EOF`.
OS: WSL2
GPU: Nvidia
CPU: AMD
Ollama version: latest
@efe3535 commented on GitHub (Jan 28, 2025):
I have the same problem.
@rick-github commented on GitHub (Jan 28, 2025):
https://github.com/ollama/ollama/issues/8535#issuecomment-2613241807
@epicwhale commented on GitHub (Jan 28, 2025):
having the same issue!
@ajayjoshioutdosolutions commented on GitHub (Jan 28, 2025):
Issue with Ollama:
ollama pull deepseek-r1:8b
pulling manifest
pulling 6340dc3229b0... 0% ▕ ▏ 0 B/4.9 GB
Error: max retries exceeded: Get "
6340dc3229/data!F(MISSING)20250128%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20250128T135124Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=df1750b12731ec798303d375a9b75e4873a5ad7ea5c66aafc4e89cf29cd13cc7": dial tcp: lookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com on 127.0.0.53:53: server misbehaving
@rick-github commented on GitHub (Jan 28, 2025):
Not an issue with ollama. DNS server is acting up. What's the result of
@efe3535 commented on GitHub (Jan 28, 2025):
@rick-github commented on GitHub (Jan 28, 2025):
What's the result of
@epicwhale commented on GitHub (Jan 28, 2025):
@rick-github commented on GitHub (Jan 28, 2025):
@epicwhale And now output of
@epicwhale commented on GitHub (Jan 28, 2025):
@rick-github commented on GitHub (Jan 28, 2025):
OK, not the expected output. What happens when you run
@blackhaj commented on GitHub (Jan 28, 2025):
I am not @epicwhale but I am getting the same issues (and similar outputs). When I run `ollama pull deepseek-r1:7b` I get the same experience.
@rick-github commented on GitHub (Jan 28, 2025):
Something is broken at Cloudflare.
@aidanxyz commented on GitHub (Jan 28, 2025):
Strange that it fails on deepseek models and not on others.
@epicwhale commented on GitHub (Jan 28, 2025):
@aidanxyz commented on GitHub (Jan 28, 2025):
Are there an alternative sources where the model can be downloaded and plugged into ollama?
@rick-github commented on GitHub (Jan 28, 2025):
I think R2 in the Cloudflare CDN is distributed storage; some backend hosting a chunk of the GGUF file is acting up.
It's not quite the same in that the GGUF file has a different hash and the template has FIM processing, but if you can't wait for Cloudflare to get its act together, it's better than nothing.
@ChaosCom commented on GitHub (Jan 28, 2025):
Hey @rick-github, one of the affected here.
If this is an R2-related Cloudflare issue, is it possible to mitigate this by doing what CF writes?
AWS recently updated their SDKs to enable CRC32 checksums on multiple object operations by default. R2 does not currently support CRC32 checksums, and the default configurations will return header related errors such as Header 'x-amz-checksum-algorithm' with value 'CRC32' not implemented. Impacted users can either pin AWS SDKs to a prior version or modify the configuration to restore the prior default behavior of not checking checksums on upload.
@aidanxyz commented on GitHub (Jan 28, 2025):
8b version works:
ollama run deepseek-r1:8b
@ChaosCom commented on GitHub (Jan 28, 2025):
The 8b version is based on Llama, the 7b version is based on Qwen2 - imo completely different architectures specializing in different things (Qwen2 is code-focused).
What I also noticed: investigating the "blobs" download directory, because of the incomplete download there's a lot of state-tracking going on:
sha256-<...HASH>-partial-0
sha256-<...HASH>-partial-1
...
These track how much of each respective chunk has been downloaded. For me, the 0-th chunk is
{"N":0,"Offset":0,"Size":292692074,"Completed":0}
whereas the 6-th chunk is
{"N":6,"Offset":1756152444,"Size":292692074,"Completed":292692074}
Which means that the system ollama uses under the hood for downloads is not downloading things in sequence for deepseek-r1:7b. Is this normal behavior?
@rick-github commented on GitHub (Jan 28, 2025):
Yes. A download is split into a bunch of chunks, and the chunks complete asynchronously. The partials are deleted either when the download is complete or when the server is restarted.
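The per-chunk state described above can be summed to get the real progress. This is a minimal sketch, assuming the file naming (`sha256-<hash>-partial-N`) and JSON fields (`N`, `Offset`, `Size`, `Completed`) shown earlier in the thread; the helper itself is hypothetical, not part of ollama:

```python
import glob
import json
import os

def chunk_progress(blobs_dir):
    """Sum 'Completed' and 'Size' bytes across sha256-<hash>-partial-N files.

    Each partial file holds JSON like
    {"N": 6, "Offset": 1756152444, "Size": 292692074, "Completed": 292692074},
    matching the state files quoted in this thread.
    """
    done = total = 0
    for path in sorted(glob.glob(os.path.join(blobs_dir, "*-partial-*"))):
        with open(path) as f:
            state = json.load(f)
        done += state["Completed"]
        total += state["Size"]
    return done, total
```

Because chunks finish out of order, chunk 0 may show `Completed: 0` while chunk 6 is already done; the overall percentage is `done / total`, not the index of the last finished chunk.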
@rick-github commented on GitHub (Jan 28, 2025):
deepseek-r1:7b: magnet:?xt=urn:btih:2JDGTZ7JT7KM24GCEXQQGDK4S3HKN23B&dn=models&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce
@ChaosCom commented on GitHub (Jan 28, 2025):
Thanks a bunch, the model now works. I'll keep the seed running today too so people familiar with this sort of fiddling can get the model this way.
@AvtarSinghChundawat commented on GitHub (Jan 29, 2025):
How do I use this, sir?
@rick-github commented on GitHub (Jan 29, 2025):
Use a torrent client to download the model, then move the files to OLLAMA_MODELS.
@Paras1209 commented on GitHub (Jan 29, 2025):
Try changing your DNS server. To change your DNS settings:
Open the Control Panel and go to Network and Sharing Center.
Click on Change adapter settings.
Right-click on your active network connection and select Properties.
Select Internet Protocol Version 4 (TCP/IPv4) and click Properties.
Choose Use the following DNS server addresses and enter:
Preferred DNS server: 8.8.8.8 (Google DNS)
Alternate DNS server: 1.1.1.1 (Cloudflare DNS)
Click OK to save the changes.
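To confirm the DNS change took effect (the same check as the `nslookup` runs rick-github asked for), a small resolution probe can help; the `resolve` helper below is my own name, not an ollama utility:

```python
import socket

def resolve(host, port=443):
    """Return the sorted IP addresses a hostname resolves to, or [] on failure."""
    try:
        infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        return []
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    # The R2 host appearing in the error messages in this thread.
    print(resolve("dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com"))
```

If this prints an empty list, the pull will fail with `dial tcp: lookup ...: no such host` regardless of anything ollama does.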
@rabadur503 commented on GitHub (Jan 29, 2025):
so is there a solution to this problem or not?
@rick-github commented on GitHub (Jan 29, 2025):
The solution is for Cloudflare to fix their CDN. The workaround is to switch DNS servers or get the model from a different source.
@Paras1209 commented on GitHub (Jan 29, 2025):
For everyone who is confused, I would like to make clear that the download problem is due to the DNS server. To solve it, I already mentioned the steps in my comment above. For everyone's convenience, I will list the points again:
To change your DNS settings:
@Paras1209 commented on GitHub (Jan 29, 2025):
You can solve this problem by changing your DNS server from the Control Panel. The steps are in my comment above. For your convenience, here they are again:
To change your DNS settings:
a. Preferred DNS server: 8.8.8.8 (Google DNS)
b. Alternate DNS server: 1.1.1.1 (Cloudflare DNS)
@rick-github commented on GitHub (Jan 29, 2025):
I suspect that Cloudflare has given dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com a new IP address, but they have quite a long timeout value in the SOA, so switching the DNS server or flushing the local DNS cache is required to get the new one.
@pdevine commented on GitHub (Jan 29, 2025):
Sorry about this, guys. We are looking at some ways to get more reliability out of Cloudflare.
@bmizerany commented on GitHub (Jan 30, 2025):
This appears to be fixed now. Closing. Please open a new ticket if the issue comes back and we'll look into it.
@seanmavley commented on GitHub (Jan 30, 2025):
Still appears to be an issue somewhere and not just DNS
WSL
Ollama 0.5.7
Starts the download, gets to about 30%, then out of nowhere the percentage drops to about 10%.
Not sure what's going on, but that's just weird. It seems to go back and forth and does this repeatedly.
It downloads to a point, somehow rolls itself back, goes forward, rolls back, all the while consuming data.
These models are big, and for those of us not on unlimited Internet plans, this adds up quickly in costs.
@pdevine commented on GitHub (Jan 30, 2025):
@seanmavley What is your location and what kind of net connection are you using?
@seanmavley commented on GitHub (Jan 30, 2025):
@pdevine
In Ghana, on MTN, connected via cable to a modem on 4G.
By the way, I've never had issues on same network downloading models via command line.
Current DNS is scancom (MTN) with fast.com saying network is 10Mbps
@rick-github commented on GitHub (Jan 30, 2025):
This is not the DNS problem, this is the stalling problem.
@seanmavley commented on GitHub (Jan 30, 2025):
@rick-github aah I see now
https://github.com/ollama/ollama/issues/8484
Stalling issue in my case then. Will follow updates on the other issue. Thanks.
@pdevine commented on GitHub (Jan 30, 2025):
@seanmavley I think you're unfortunately getting corrupted packets and the download is checking the file and seeing that it's incorrect and throwing out that chunk.
@rick-github commented on GitHub (Jan 30, 2025):
Verify by checking server log.
@sabbirsam commented on GitHub (Jan 30, 2025):
It keeps falling back to 15–18% after reaching 20%.
ollama run deepseek-r1:8b
@rick-github commented on GitHub (Jan 30, 2025):
Server log would show whether it's a stall, a corrupt packet, or some other problem.
In the meantime, you can work around it by killing the download every 10 seconds with this script: https://github.com/ollama/ollama/issues/8484#issuecomment-2623757960
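The kill-and-restart workaround linked above can be sketched generically. This is a minimal version of the idea under my own naming, not the script from the linked comment, and the command, window, and retry count are placeholders:

```python
import subprocess

def pull_with_restarts(cmd, window=10, max_restarts=100):
    """Run `cmd`, killing and restarting it every `window` seconds.

    ollama resumes a pull from the already-completed chunks, so repeatedly
    restarting the client works around a stalled connection.
    Returns True once the command exits successfully.
    """
    for _ in range(max_restarts):
        proc = subprocess.Popen(cmd)
        try:
            if proc.wait(timeout=window) == 0:
                return True          # pull finished cleanly
        except subprocess.TimeoutExpired:
            proc.kill()              # stalled or still running: restart it
            proc.wait()
    return False

# Example (placeholder model name):
# pull_with_restarts(["ollama", "pull", "deepseek-r1:7b"])
```

Note this only helps with the stalling problem; if DNS cannot resolve the R2 host at all, restarting the client changes nothing.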
@ERICK-ZABALA commented on GitHub (Feb 1, 2025):
Error downloading deepseek-r1:
C:\>ollama run deepseek-r1
pulling manifest
pulling 96c415656d37... 0% ▕ ▏ 2.6 MB/4.7 GB
Error: max retries exceeded: Get "
96c415656d/data": net/http: TLS handshake timeout
@yashwanth2706 commented on GitHub (Feb 2, 2025):
I tried several times to download, but ollama keeps failing even though I have a good internet connection.
It just restarts the download after downloading more than 5%
https://github.com/user-attachments/assets/ad8cf855-f85b-4ec0-87a0-f1e99d037b12
@yashwanth2706 commented on GitHub (Feb 2, 2025):
Even if I use `ollama pull deepseek-r1:7b`, the issue still persists.
@rick-github commented on GitHub (Feb 2, 2025):
The Cloudflare CDN is having problems. The ollama team don't seem to be interested in fixing the problem. If you know how to use a torrent client, you can get deepseek-r1:7b from here:
magnet:?xt=urn:btih:a43e5b893b14f6c3dc78678e766101eeb7ca10c1&dn=models&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce
@jmorganca commented on GitHub (Feb 2, 2025):
Re-opening this to track the stalling issue. Sorry for all the problems, and thanks @rick-github for helping debug - we're definitely interested in fixing this, and are working with Cloudflare to resolve issues while we also make changes to Ollama's downloader for reliability.
@rama-bin commented on GitHub (Feb 2, 2025):
Wondering if this issue is related. Can't pull anything from ollama today (x509: negative serial number) -
○ → ollama pull nomic-embed-text
pulling manifest
pulling manifest
pulling 970aa74c0a90... 0% ▕ ▏ 0 B/274 MB
Error: max retries exceeded: Get "
970aa74c0a/data": tls: failed to parse certificate from server: x509: negative serial number
@rick-github commented on GitHub (Feb 2, 2025):
Another problem with the Cloudflare CDN. dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com has been an issue for weeks.
@seanmavley commented on GitHub (Feb 2, 2025):
I mean ollama via cloud flare has been working for months for every model, no issues.
Deepseek arrives and all of a sudden Cloudflare doesn't work as expected for Deepseek in particular. Hmmm 🤔
@rick-github commented on GitHub (Feb 2, 2025):
Not unless they have a time machine, dd20bb891979d25aebc8bec07b2b3bbc has been a problem on and off since 2023. It's just gotten really bad lately, maybe due to the increased interest in using ollama.
@pdevine commented on GitHub (Feb 3, 2025):
I think we're definitely putting a lot of pressure on CF right now. DeepSeek alone peaked at over 1 million pulls/day, and that's not including the rest of the models. As @jmorganca mentioned, we are looking at a bunch of improvements here; we're just trying to figure out what we can do short term vs. long term.
@gitdexgit commented on GitHub (Feb 3, 2025):
Thank you so much. I did nslookup and it gave me unknown/timeout, so yeah, my DNS server was the problem. Now `ollama pull deepseek-r1:7b` works, I believe. It also started from 27%, not from 0, so maybe the .ps1 script that guy gave actually was working and doing the job.
I'm not sure, but I think if you set the number of retries to something like 500 when you have bad internet, it should keep going: the download stops where it was, then continues until it's finished. OR change your DNS server - maybe try 1.1.1.1 (Cloudflare's primary) and 8.8.8.8 (Google's DNS). As I'm typing it's now at 56%, so it seems to be working. Try changing your DNS server, download, then change it back to what you used before.
If that doesn't work, here is the .ps1 script I was talking about. It's from this GitHub; you can download it, or read the .ps1 to make sure it's safe.
It's a simple PowerShell script: just set the model you want in `$ollamaCommand = "ollama pull deepseek-r1:7b"` and the retries in `$maxRetries = 100`. I don't know about `Start-Sleep -Seconds 60`; I just left it as is, and I think if you leave it, it should run until it says complete - even if it doesn't show a progress bar, it's downloading. So yeah, this is the second method if DNS doesn't work.
If you don't know PowerShell, just copy-paste it and ask an AI what it does.
Here is the .ps1 script:
@gitdexgit commented on GitHub (Feb 3, 2025):
Also, a third method is to just download the model via torrent, using BitComet or something, as shown above in the quote by the poster.
Download it, then go to C:\Users\<username>\.ollama\models\manifests\registry.ollama.ai\library\deepseek-r1,
enter the deepseek-r1 folder, and put it there; it should then show up in "ollama list" in your terminal.
(Create the folder if it's your first time using a deepseek-r1 model - I'm using 1.5b and that folder is what it created first.)
@gitdexgit commented on GitHub (Feb 3, 2025):
Change DNS and do the pull:
1.1.1.1
8.8.8.8
@rama-bin commented on GitHub (Feb 3, 2025):
I was able to fix it by rolling back to an old ollama version (v0.1.34). For some reason, v0.5.7 is throwing "x509: negative serial number" error.
@Rudxain commented on GitHub (Feb 3, 2025):
Termux, built `ad22ace439` from source:
@rick-github commented on GitHub (Feb 3, 2025):
If the server quit during the download it may be this problem: https://github.com/ollama/ollama/issues/8400. Server logs will confirm/deny.
When the ollama server starts, it does housekeeping which includes purging incomplete downloads. You can prevent this behaviour by setting OLLAMA_NOPRUNE=1.
@RohanSardar commented on GitHub (Feb 4, 2025):
I found out a solution, if you are on Windows:
This worked in my case
@m-petra-fn commented on GitHub (Feb 4, 2025):
The following script fixed it for me:
https://www.andreagrandi.it/posts/how-to-workaround-ollama-pull-issues/
@rick-github commented on GitHub (Feb 4, 2025):
This script only works for the stalling problem. If the client has problems connecting to dd20bb891979d25aebc8bec07b2b3bbc, it won't help. Anecdotally, changing DNS servers has helped in the latter case; see above.
@Maltz42 commented on GitHub (Feb 5, 2025):
Probably exacerbated by ollama using 16 simultaneous download connections when you pull a file. There was even a pull request to fix this back in August (https://github.com/ollama/ollama/pull/5683), but it was closed, even though the behaviour is still causing problems. An ollama pull drives my gigabit fiber connection into the 10-15% packet loss range when it's running. It's a very aggressive/unfriendly app when it comes to network traffic.
@MSR2201 commented on GitHub (Feb 6, 2025):
C:\Users\sanke>ollama pull mxbai-embed-large
pulling manifest
pulling 819c2adf5ce6... 0% ▕ ▏ 0 B/669 MB
Error: max retries exceeded: Get "
819c2adf5c/data": dial tcp: lookup dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com: no such host
It's not even starting to download the model for me.
@MSR2201 commented on GitHub (Feb 6, 2025):
I tried this, but it didn't work for me.
@meglio commented on GitHub (Feb 6, 2025):
I can no longer download any models. It stalls forever; the number of MB downloaded goes up, then drops down, and does so in circles. Also, sometime after 10 minutes of trying hard, it ends with a TLS handshake timeout.
@rick-github commented on GitHub (Feb 6, 2025):
The hacky way around this is to run the downloader for a few seconds and then restart: https://github.com/ollama/ollama/issues/8484#issuecomment-2627410336. If you are on Linux/macOS you can avoid the stall-restart cycle and download directly with https://github.com/ollama/ollama/issues/8535#issuecomment-2613241807.
@meglio commented on GitHub (Feb 6, 2025):
Is the bug reproducible and being fixed? It hasn't been working for more than a week. The app is just unusable atm.
@rick-github commented on GitHub (Feb 6, 2025):
https://github.com/ollama/ollama/pull/8831
@yashwanth2706 commented on GitHub (Feb 6, 2025):
This is being fixed: previously the download would restart after 5s if no packets were received, but the timeout has now been increased to 30s, and further optimization is being worked on.
@QinCai-rui commented on GitHub (Feb 7, 2025):
Same here. Pulling a model smaller than 1 GB is kind of OK for me, but anything larger just 'reverses' the download.
https://github.com/ollama/ollama/issues/8280
@ajayjoshioutdosolutions commented on GitHub (Feb 7, 2025):
This issue is no longer present after updating Google DNS. It worked for me. @rick-github
@patillacode commented on GitHub (Feb 9, 2025):
It is still happening. I have tried both the timeout solution and the DNS solution without success.
Also, the magnet link above to download the 7b image has only 1 peer (and we might be stressing that connection out). I am more interested in the 1.5B anyway, so I won't leech, just wanted to try. Is there a way we can P2P this?
any updates from CF?
If I may be of service I'm around.
@yashwanth2706 commented on GitHub (Feb 9, 2025):
@rick-github @jmorganca
Successfully Built Ollama from Source on a Virtual Machine & Ran DeepSeek-R1:7B
Description
I cloned the Ollama repository, built it from source, and it worked!
Environment Details
Model: deepseek-r1:7b
Ollama current pre-release version: v0.5.8
https://github.com/ollama/ollama/releases/tag/v0.5.8-rc12
Let me know if there is any system or version information that will help rectify the current issue, thanks!
@uripont commented on GitHub (Feb 9, 2025):
dial tcp XXX.XX.X.XX:XXX: i/o timeout, retrying
Same from a Mac running macOS Sequoia 15.2 (24C101); it won't even start pulling the model. No proxies, no VPN, not even a firewall. Neither with default Wi-Fi settings nor with a different DNS configuration (preferred DNS server: 8.8.8.8 (Google DNS), alternate DNS server: 1.1.1.1 (Cloudflare DNS)).
Have tried on 2 different home networks from same ISP, none work.
It gets stuck here via CLI:
Inspecting server logs using cat ~/.ollama/logs/server.log: lots of these, for different "parts", with different intervals until the next retry (seems like exponential backoff).
Tried pretty much everything, even a clean reinstall of Ollama, and still it can't pull any model.
Trying commands that @rick-github suggested:
When running:
I can get the manifest:
But the second request times out.
When running a ping as suggested on #8533:
So the issue seems to be the connection with Cloudflare R2, where the data that needs to be pulled is stored.
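For context, the two requests in that test look roughly like this (the registry paths are assumptions based on ollama's OCI-style registry layout, and the digest is a placeholder to be filled in from the manifest):

```shell
# 1. Fetch the model manifest from the ollama registry (this usually succeeds).
curl -s https://registry.ollama.ai/v2/library/mxbai-embed-large/manifests/latest

# 2. Fetch one of the layer blobs listed in that manifest. The registry
#    redirects this to Cloudflare R2, and it is this second request that
#    times out on affected networks.
curl -sL -o /dev/null -w '%{http_code}\n' \
  "https://registry.ollama.ai/v2/library/mxbai-embed-large/blobs/sha256:<layer-digest-from-manifest>"
```

If the first command works but the second hangs or times out, the problem is between you and R2 (DNS, ISP filtering, etc.), not the ollama registry itself.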
Ollama worked well for me a few weeks ago.
EDIT: It seems to work when using mobile hotspot, different ISP. The two curl end successfully, and pulling starts actually getting data. Will try on a different WiFi network somewhere else and report back.
EDIT 2: On another network it works well. It seems like the issue is with the networks of one specific ISP (Movistar), which may have blocked Cloudflare?
EDIT 3: Most likely it was point number 2. Everything works as expected on those previously failing networks. Everything back to normal 👍
EDIT 4: As confirmed in https://www.youtube.com/watch?v=pj66vftqZZM, there was a weekend-long, large-scale ban on Cloudflare IP ranges made by Movistar/O2 ISPs, as an attempt to combat football pirating.
@bishwayan-saha commented on GitHub (Mar 1, 2025):
worked for me.
@rick-github commented on GitHub (Mar 4, 2025):
The download stalls should be mitigated as of 0.5.8 by #8831 and #9294 provides an overhaul of model pulling, so closing but feel free to add updates if you are still having issues.
The connection failures to r2.cloudflarestorage.com are being tracked in #8605.
@jagarojgrdev commented on GitHub (Mar 16, 2025):
I had the same problem: I ran "docker exec -it ollama ollama pull llama3.1:8b" and got no response.
After watching the referenced video, I installed WARP (Cloudflare VPN), and the error was resolved.
@marcelb commented on GitHub (Mar 17, 2025):
Running Ollama 0.5.11 and it is still happening all the time. I can complete the download by pressing Ctrl-C after every 5 GB and restarting the pull until it is done, though.
Location: Germany
ISP: Vodafone 1gbit
OS: Windows 11
@rick-github commented on GitHub (Mar 17, 2025):
Server logs will aid in debugging.
@keithkmyers commented on GitHub (Mar 18, 2025):
I suspect it is DDoS protection run amok. I throttled my container down to 200 Mbit/s and it's staying connected. I was pulling at the full 1 Gbit/s of my connection, and that triggered the connection drops just as folks describe in this thread (and elsewhere).
Nice tip! It seems the resume feature built into the pull tool is faulty; it loses most of its progress upon a stall, resulting in an infinite loop during this outage. As a result, it just compounds the server load issue they're facing. Folks are pulling the same data blocks over and over, downloading well more than 100% of the total file size in an infinite loop. It probably looks like a DDoS to the server admins, but it's a bug in the ollama pull tool.
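The comment above doesn't say how the container was throttled, so as one hedged possibility: a tc token-bucket filter on the host side of the container's veth pair caps traffic flowing into the container (the interface name below is a placeholder; this needs root):

```shell
# On the host, the veth peer's egress is the container's ingress, so
# shaping it caps the container's download rate at ~200 Mbit/s.
# vethXXXX is a placeholder; find the real name via `ip link`.
tc qdisc add dev vethXXXX root tbf rate 200mbit burst 256kbit latency 400ms

# To remove the limit again:
# tc qdisc del dev vethXXXX root
```

Note that shaping the container's own eth0 root qdisc would only limit uploads; for download throttling the filter has to sit on the host-side interface (or use ingress policing).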
@keithkmyers commented on GitHub (Mar 18, 2025):
A few more findings that might be helpful for the admins:
While 200Mbit works to stay connected, I still notice it delivers data in bursts. Here's what I think is happening:
This is probably why you think you're being DDOS'd every time a big model drops and you get heavy server usage.
... Maybe. Maybe not. Anyways, best of luck chaps!
@suredanish commented on GitHub (Aug 26, 2025):
mine gets stuck here
@cknotz commented on GitHub (Aug 27, 2025):
I'm having the same issue as @suredanish. This started after a recent OS & Ollama update. Before, everything worked fine. I can still pull models without getting an error, but run gets stuck (I tried different Llama, DeepSeek, and gpt-oss versions; all of them stall).
@rick-github commented on GitHub (Aug 27, 2025):
If you can pull a model then it's not a download problem. Open a new issue and include logs.
@cknotz commented on GitHub (Aug 27, 2025):
Thanks, @rick-github. I actually did post a separate issue (totally fine if responses take a bit!). Perhaps a noob question, but what exact logs would you need to identify the issue? cat ~/.ollama/logs/server.log?
@rick-github commented on GitHub (Aug 27, 2025):
https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues
@dkayser commented on GitHub (Sep 7, 2025):
I solved this by adding OLLAMA_DEBUG=1:
docker exec ollama-server sh -c "OLLAMA_DEBUG=1 ollama pull mixtral" - pulling 26 GB at 119 MB/s with no crash.
Anything without OLLAMA_DEBUG=1 stalls. It even crashed my dedicated server, which is extremely weird. Both with Alma 9 and Fedora 24. Straight up froze it. No idea why.
I first thought the NIC was toast, but with this setting everything works.
@gitdexgit commented on GitHub (Sep 7, 2025):
Wow, nice.
Btw, are you on Linux? I see you are using Docker to run ollama-server? That's really nice. I don't have a strong server, but I would love to run tiny models with Ollama. How can I do that?
@rick-github commented on GitHub (Sep 7, 2025):
It's good that it's working for you, but unless ollama-server is a non-standard ollama container, OLLAMA_DEBUG=1 has literally no effect on the pull command.
@dkayser commented on GitHub (Sep 8, 2025):
Good to know, thanks. It did solve the problem for me, tested it with many models now. I have no idea what other side effects may have contributed.
I have an old dedicated server at a local hosting company with 256 GB ECC RAM and 2x10 Xeon cores. It is not fast at all, but it allows a lot of flexibility to run a couple of smaller models in parallel for vision, text extraction, and general tasks. The lack of VRAM is painfully obvious.