Mirror of https://github.com/ollama/ollama.git, synced 2026-05-07 00:22:43 -05:00
[GH-ISSUE #8484] Issue with Ollama Model Download: Progress Reverting During Download #67520
Closed · opened 2026-05-04 10:37:44 -05:00 by GiteaMirror · 62 comments
Originally created by @mdjamilkashemporosh on GitHub (Jan 19, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8484
Originally assigned to: @bmizerany on GitHub.
What is the issue?
While downloading models using ollama run <model_name>, the progress often reverts, sometimes after 10-12% or even after 60%. The total download size also decreases before continuing. I've tested different networks but faced the same issue. A few weeks ago, I downloaded models without any problems.
OS
macOS
GPU
Apple
CPU
Apple
Ollama version
0.5.7
@privacyfreak84 commented on GitHub (Jan 19, 2025):
Same issue on my side
@h3isenbug commented on GitHub (Jan 19, 2025):
Same here.
Version: 0.5.7-1
Model that I'm trying to pull: llava:7b
This message is repeatedly logged while this problem occurs:
@h3isenbug commented on GitHub (Jan 19, 2025):
This problem happens when a connection does not receive data for more than 5 seconds.
Unfortunately, the 5-second limit is hard-coded:
https://github.com/ollama/ollama/blob/main/server/download.go#L368
@rick-github commented on GitHub (Jan 19, 2025):
Might be an upstream issue; this has been reported several times over the last few weeks:
#8406, #8384, #8330, #8280
https://github.com/ollama/ollama/issues/8330#issuecomment-2574510267 contains an explanation for the observed behavior, but no resolution.
@ahmedtalaltwd7 commented on GitHub (Jan 20, 2025):
Same issue.
Version: 0.5.7
Model: granite3.1-dense:8b
Or any model, even with "llama3.2:1b-instruct-q2_K".
@jyomu commented on GitHub (Jan 21, 2025):
As a temporary workaround, you can continue the download by stopping the ollama pull with Ctrl+C within 5 seconds after the download speed drops, and then running it again. At least, this method works in my environment.
@ahmedtalaltwd7 commented on GitHub (Jan 22, 2025):
That's a tough thing when you have a slow internet connection! 😥
@cniebla commented on GitHub (Jan 22, 2025):
It can also stall when the SSD is busy performing a write that was first cached in memory, so Ctrl+C is not usable in this case. I wonder if the 5-second limit can be altered at run time.
@zeroducksleft commented on GitHub (Jan 23, 2025):
Expanding on this suggestion by @jyomu, here is a quick and dirty script that downloads a model unattended:
Change the timeout to a reasonable duration based on your internet speed. For example, my download progress usually reverts after 200MB at 20MB/s, so I set my timeout to 10 seconds. Hit Ctrl+C when the download is complete.
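The script itself did not survive the mirror. A minimal sketch of the approach described here, assuming GNU coreutils timeout is available and using deepseek-r1 as a stand-in model; with a short cap like this, the loop also exits on its own once the final pull completes within the window:
#!/bin/bash
# Re-run `ollama pull` until it exits successfully. `timeout` kills any pull
# that runs past the cap; ollama resumes from its last checkpoint, so each
# restart picks up where the previous attempt left off.
MODEL="deepseek-r1"   # placeholder; substitute the model you are pulling
until timeout 10 ollama pull "$MODEL"; do
    echo "Pull stalled or timed out, restarting..."
    sleep 1
done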
@mcapodici commented on GitHub (Jan 26, 2025):
This is the issue that caused that code to go in: https://github.com/ollama/ollama/pull/1916
Stall detection is probably a good feature, but 60 seconds might be better?
@FogoVoar commented on GitHub (Jan 27, 2025):
It is curious how developers can create such complex software and still fail at something so basic. Canceling an entire download after just 5 seconds without receiving packets? Is this done on purpose to annoy users? I just faced a situation where, at 95% on my fifth attempt, the speed started to drop. If it weren't for @jyomu's trick I would be crying right now.
@ctx2002 commented on GitHub (Jan 30, 2025):
Still the same problem. I have turned the above bash script into a PowerShell script.
I do not understand why Ollama needs this kind of script to continue downloading.
For smaller models (<1 GB), Ollama can download without any problem.
@bertoxic commented on GitHub (Jan 30, 2025):
I used this script and it worked perfectly: it downloaded a large model even on my very poor network and didn't restart.
@grav commented on GitHub (Jan 31, 2025):
On Mac:
@Skizzy-create commented on GitHub (Jan 31, 2025):
This will give you the exact script, just run it: GitHub Issue Comment.
For Windows:
For Linux/macOS:
@Unknownuserfrommars commented on GitHub (Feb 12, 2025):
Attempting to download model...
pulling manifest
pulling 6150cb382311... 1% ▕ ▏ 115 MB/ 19 GB 16 MB/s 20m4s
pulling manifest
pulling 6150cb382311... 1% ▕ ▏ 121 MB/ 19 GB 16 MB/s 20m4s
Attempting to download model...
pulling manifest
pulling 6150cb382311... 1% ▕ ▏ 235 MB/ 19 GB 16 MB/s 20m8s
Timeout occurred, restarting download...
Attempting to download model...
pulling manifest
pulling 6150cb382311... 2% ▕█ ▏ 439 MB/ 19 GB 25 MB/s 12m42s
Timeout occurred, restarting download...
Attempting to download model...
pulling manifest
pulling 6150cb382311... 2% ▕█ ▏ 495 MB/ 19 GB 11 MB/s 28m24s
Timeout occurred, restarting download...
@Skizzy-create Your PS script gave me this output while I tried to download deepseek-r1:32b. Somehow it keeps doing this. I'm kinda new to this, but why does this happen? Will the model download successfully if I just wait long enough for this to (eventually) get to 100%?
Many thanks!
@Skizzy-create commented on GitHub (Feb 12, 2025):
Yes, you just have to wait and the download will complete.
Just wait till it gets to 100% and that's it.
You can see that the model continues to download from the previous checkpoint.
@Unknownuserfrommars commented on GitHub (Feb 13, 2025):
Okay thank you very much!
@Forest-Person commented on GitHub (Feb 19, 2025):
#!/bin/bash
# Restart the pull every 10 seconds; ollama resumes from its last checkpoint,
# so killing a stalled pull and re-running still makes progress.
# Press Ctrl+C once the pull reports success.
while true; do
  echo "Attempting to download model..."
  ollama pull deepseek-r1 &
  process_pid=$!
  sleep 10
  kill "$process_pid" 2>/dev/null   # stop the previous attempt before retrying
  wait "$process_pid" 2>/dev/null
done
Hey guys, wow, this is turning out to be a doozy, eh?
I tried the same script and it didn't work. Stuck at 90%, goes to 91%, then goes backwards. I can usually download a 5 GB quantized model in about 2.5 hours even with my sad 1 MB/s average download speed. Best wishes and thanks for all your hard work.
@bertoxic commented on GitHub (Feb 20, 2025):
@SudoMds commented on GitHub (Feb 23, 2025):
For Windows you can try this. This script is an update to our friend's script above; it just retries the pull if the process exits.
@mcapodici commented on GitHub (Feb 24, 2025):
Why all the workarounds still? Someone has put in what looks like a fix https://github.com/ollama/ollama/pull/8831
@rick-github commented on GitHub (Feb 24, 2025):
There's confusion about what the actual issue is. The "reversing during download" issue has been mitigated with #8831. What I believe people are seeing now, and attributing to "reversing during download", is the slowdown of a download that occurs when some chunks are slow. ollama runs multiple concurrent download streams to fetch different chunks of the model. At the start, most of them are downloading quickly and finish early. As the proportion of slow downloaders increases, the reported speed of the download goes down. Restarting the download has the effect of creating new download streams, again some of which are faster than others. The real issue is why some streams are slower than others: is it an ollama issue, or throttling at the ISP, or throttling at the server, etc.? This issue has been around for a while and nobody's really looked into it, because it's simple to work around by just restarting the download.
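To illustrate the mechanism described above (not ollama's actual code, just a shell sketch of the same idea): fetch one blob as several byte ranges in parallel; the transfer only finishes when the slowest range does, which is why a few slow streams drag the reported speed down.
#!/bin/bash
# Download a URL in four concurrent byte-range chunks, then reassemble.
# Assumes the server supports HTTP Range requests; URL is a placeholder.
URL="$1"
SIZE=$(curl -sIL "$URL" | awk 'tolower($1)=="content-length:"{v=$2+0} END{print v}')
CHUNK=$(( (SIZE + 3) / 4 ))
for i in 0 1 2 3; do
    START=$(( i * CHUNK ))
    END=$(( START + CHUNK - 1 )); [ "$END" -ge "$SIZE" ] && END=$(( SIZE - 1 ))
    curl -s -r "$START-$END" -o "part.$i" "$URL" &
done
wait    # blocks until the slowest stream finishes
cat part.0 part.1 part.2 part.3 > blob && rm part.0 part.1 part.2 part.3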
@rick-github commented on GitHub (Mar 4, 2025):
This should be mitigated as of 0.5.8 by #8831, and #9294 provides an overhaul of model pulling, so I'm closing this; feel free to add updates if you are still having issues.
@AlbertoSinigaglia commented on GitHub (Mar 31, 2025):
@rick-github, sorry to bother you, but it's still happening... it looks like it happens when the download speed goes above 200 Mbps; maybe it's rate-limiting?
@rick-github commented on GitHub (Mar 31, 2025):
Which issue: stalling causing chunk restarts, or slow chunks causing a slowdown? Logs may help.
@AlbertoSinigaglia commented on GitHub (Mar 31, 2025):
@rick-github stalling and not downloading anything, pretty much; here are the logs:
It pretty much restarts the download a billion times.
Ctrl+C does not help; it just starts the download again from where it stopped and advances about 500 MB-1 GB each time.
@AlbertoSinigaglia commented on GitHub (Mar 31, 2025):
As a side note, I've also tried the following, with pretty much no luck (sorry for the extra OLLAMA_HOST="localhost:6000", but I have 2 ollama servers):
@rick-github commented on GitHub (Mar 31, 2025):
dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com has always been a problematic host. I've seen speculation that this also hosts content that occasionally falls afoul of copyright laws and that some ISPs block/limit connections to that server.
What does the following do:
@AlbertoSinigaglia commented on GitHub (Mar 31, 2025):
@rick-github I'm getting the following:
@rick-github commented on GitHub (Mar 31, 2025):
Whereas when I run the command, it downloads the model at 60 MB/s. What's the result of
@AlbertoSinigaglia commented on GitHub (Mar 31, 2025):
@rick-github output:
btw, I'd love to take a second to really thank you for the time you take to help everybody, let me know if you happen to have a buymeacoffee
I've read other issues about this saying it might also be the ISP, which probably causes Cloudflare to redirect to that host. Given that I have another ollama server on another connection, how hard would it be to download the model there and then transfer it using scp? I mean, do I just need to note down the SHA of the downloaded blobs and scp them, or is there a registry somewhere that has to be manually updated?
@rick-github commented on GitHub (Mar 31, 2025):
If you have access to another server, download the model there and then package into a single file with this script:
Then install it on the ollama server (adjust destination as necessary):
This version creates zip files and so should be a bit more platform independent:
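None of the scripts survived the mirror. A minimal sketch of the packaging idea, assuming a default Linux install, llama3.2 as a stand-in model, and tar in place of the original cpio/zip:
#!/bin/bash
# Bundle a model's manifest plus the blobs it references into one archive
# that can be scp'd to another machine and unpacked into ~/.ollama/models.
MODELS="$HOME/.ollama/models"
MANIFEST="manifests/registry.ollama.ai/library/llama3.2/latest"
# Blob files are named sha256-<hex>; the manifest refers to them as sha256:<hex>.
BLOBS=$(grep -o 'sha256:[a-f0-9]\{64\}' "$MODELS/$MANIFEST" | sort -u | sed 's/:/-/')
tar -C "$MODELS" -cf model.tar "$MANIFEST" $(for b in $BLOBS; do echo "blobs/$b"; done)
# On the destination: scp model.tar over, then:
#   tar -C ~/.ollama/models -xf model.tar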
@AlbertoSinigaglia commented on GitHub (Mar 31, 2025):
@rick-github I tried it between two servers where I can install commands, though the goal is to run it on an HPC server, which has the gigabit connection that causes problems, and where I can't "decompress" the cpio file because ollama runs in a Singularity container. However, I'm not sure if this is still a "general enough" issue for you to spend time on it.
Maybe uploading ollama models to Hugging Face would fix the whole thing (I can download models from Hugging Face with no problem, since they don't come from that nasty Cloudflare host).
@rick-github commented on GitHub (Mar 31, 2025):
Not sure what a Singularity container is; if it's like a docker/podman container, you can extract the cpio archive outside and copy it in:
If downloading from HF works for you, you can just pull the model from bartowski, and then you can rename it to look like an ollama library model:
I don't know how different it is from the ollama model but it's likely very close.
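The rename command did not survive the mirror; one way to do it is with ollama cp, which gives an existing local model an additional name (the target name below is illustrative):
# Pull the GGUF build from Hugging Face, then copy it under a library-style name.
ollama pull hf.co/bartowski/Qwen2.5-72B-Instruct-GGUF:Q4_K_M
ollama cp hf.co/bartowski/Qwen2.5-72B-Instruct-GGUF:Q4_K_M qwen2.5:72b-instruct-q4_K_M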
@AlbertoSinigaglia commented on GitHub (Mar 31, 2025):
It's a very bad copy of docker that is used on high-performance compute servers... I wish nobody ever had to know what Singularity is.
Anyway, I'll try, though his Qwen2.5 72B GGUF was split into multiple parts, and ollama does not natively support them, so I went with the ollama registry's version.
@rick-github commented on GitHub (Mar 31, 2025):
ollama pull hf.co/bartowski/Qwen2.5-72B-Instruct-GGUF:Q4_K_M should download a single GGUF file (plus assorted supplementary files).
@AlbertoSinigaglia commented on GitHub (Mar 31, 2025):
Heh, I'm moving to a 6x NVIDIA L40S server and wanted to spoil myself with a Q6 quantization, but too bad. I'll one day learn to merge the parts with llama.cpp; I saw one of your posts on how to do so.
@rick-github commented on GitHub (Mar 31, 2025):
@AlbertoSinigaglia commented on GitHub (Mar 31, 2025):
This is the "merging" I was referring to (#5245)
@rick-github commented on GitHub (Mar 31, 2025):
Ah, OK. The q4_K_M quant downloads a single file; I thought the q6_K would be the same.
@AlbertoSinigaglia commented on GitHub (Mar 31, 2025):
Yup, makes sense, but I'm seeing a trend in models over 40 GB of splitting the model across multiple files.
@Pablojosep commented on GitHub (Jul 30, 2025):
Same here, can't go any further than ~24%.
@Clemeaux commented on GitHub (Sep 13, 2025):
Even now, 3 months later, this flaw persists. I've been using Ollama for quite a while (1+ year) and in the early stages this never happened. But now it turns out not to be reliable any more; you can practically count on running into this problem when pulling a model. This isn't good. I wonder whether this is caused by changes in the Ollama software or by something on the server side where Ollama models are stored.
@laichiaheng commented on GitHub (Oct 17, 2025):
It still happens.😭
@rick-github commented on GitHub (Oct 17, 2025):
Try reducing the bandwidth by setting OLLAMA_EXPERIMENT=client2 and OLLAMA_REGISTRY_MAXSTREAMS=1.
@cogentcoder commented on GitHub (Oct 23, 2025):
Same here even today, so it seems it has not been fixed in 10 months. This download behavior is really frustrating: I tried the small model gemma3:1b, and after 745 MB out of 778 MB it dances back and forth between 650 MB and 745 MB, then moves forward by 1 MB per minute even though I have a very good connection. It is funny and annoying at the same time.
@monolith-jaehoon commented on GitHub (Oct 23, 2025):
@rick-github, in my case, it works! It also works without OLLAMA_REGISTRY_MAXSTREAMS=1, but in that case, error logs are written on the server.
@Johannett321 commented on GitHub (Oct 23, 2025):
Same here!
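For anyone unsure where these variables belong: they need to be set in the environment of the ollama server process, since the server performs the pull, not the client. A minimal example for a manually started server on Linux/macOS (a systemd install would use Environment= lines in a unit override instead):
export OLLAMA_EXPERIMENT=client2        # opt in to the newer download client
export OLLAMA_REGISTRY_MAXSTREAMS=1     # limit pulls to a single stream
ollama serve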
@yiwansk commented on GitHub (Nov 3, 2025):
This wasn't fixed and shouldn't be "closed"
@svantiniho41 commented on GitHub (Jan 10, 2026):
I can also confirm this has not been fixed; I am continuously having the same problem despite trying the suggestions in this thread. The download can go from 80 percent down to 30 percent in extreme cases. Is there any other alternative way to pull models, e.g. a browser download or something?
I don't mind it cutting out, but the whole download reverting is what I find puzzling; on a metered Wi-Fi or data connection this can quickly get frustrating.
@Wytoo commented on GitHub (Jan 22, 2026):
This should not be closed at all. I've got a stable 1-gigabit landline and I'm going crazy with all the interruptions/reverts.
@rick-github commented on GitHub (Jan 22, 2026):
What model are you trying to pull, what errors are you getting, what mitigations have you tried, and whereabouts (approximately) in the world are you?
@Wytoo commented on GitHub (Jan 22, 2026):
mistral-small3.1, and Western Europe. But I've fixed it with the two environment variables provided above:
OLLAMA_EXPERIMENT=client2
OLLAMA_REGISTRY_MAXSTREAMS=1
@ChrisXtractyl commented on GitHub (Jan 31, 2026):
Still reproducible as of today (Jan 2026).
Large model downloads repeatedly stall or reset near completion.
This issue has been open for over a year without a mitigation.
@rick-github commented on GitHub (Jan 31, 2026):
Mitigation shown here. Ways to help in investigating shown here.
@ChrisXtractyl commented on GitHub (Jan 31, 2026):
I can confirm the mitigation OLLAMA_EXPERIMENT=client2 makes the download complete for me.
Model: gemma3:12b
Region: Western Europe
Side effect: enabling client2 breaks my browser-based /api/pull flow. The CORS preflight (OPTIONS) to http://localhost:11434/api/pull returns 405 Method Not Allowed, and the frontend fails with "NetworkError when attempting to fetch resource". I can change my workflow, but this is not a trivial change for my setup, and it means the mitigation isn't compatible with browser usage as-is.
Also, until today I could download the same model reliably with the same setup without client2, so this looks like a recent change/regression (at least on my side).
Given the above, I don't understand why this issue was closed: the underlying reliability problem still exists, and the mitigation introduces a new blocker for browser-based clients.
@bjf5201 commented on GitHub (Feb 3, 2026):
These do not mitigate the issue for me, at least. I was struggling with the download starting at all. The only difference that setting OLLAMA_EXPERIMENT=client2 made was that the download was able to sort of get off the ground (above 100 MB), but then it showed the same behavior the others have described, where the download progress somehow reversed mid-download. Setting OLLAMA_REGISTRY_MAXSTREAMS=1 didn't help either; I was just back to slow downloads that couldn't get over 100 MB before failing.
Environment info:
I am using WSL2 on Windows 10 with Ubuntu 24.04. More details from running wsl --version below:
WSL version: 2.6.3.0
Kernel version: 6.6.87.2-1
WSLg version: 1.0.71
MSRDC version: 1.2.6353
Direct3D version: 1.611.1-81528511
DXCore version: 10.0.26100.1-240331-1435.ge-release
Windows version: 10.0.26220.7670
Ollama: version 0.15.4
I live in the mid-southeast of North America.
I got a few slightly different error messages whenever trying to run ollama pull qwen3-coder:
Or:
Or:
Could this be looked into further?
Let me know if there's any additional information that would be helpful!
@rick-github commented on GitHub (Feb 3, 2026):
The first and third log lines look like connectivity issues: dd20bb891979d25aebc8bec07b2b3bbc has historically been a bit flaky for some users because ISPs seem to like blocking the server. "connection reset by peer" could be a slow-connection timeout.
What happens if you run the following in WSL2:
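(The command block did not survive the mirror. Judging from the "Average Dload" and "Current Speed" wording below, it was presumably a plain curl fetch of one of the model's blobs, so curl's progress meter reports the raw transfer speed. A hypothetical reconstruction, with a placeholder digest that would need to come from the model's manifest:)
DIGEST="sha256:<layer-digest-from-the-manifest>"   # placeholder
curl -L -o /dev/null "https://registry.ollama.ai/v2/library/qwen3-coder/blobs/$DIGEST"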
What's the "Average Dload" speed while the download is running? Does the "Current Speed" fluctuate? When it's finished, how long did it take?
Is it different if you run it in native Windows?
@shafenbadar commented on GitHub (Apr 13, 2026):
Smarter version: restart if the file stops growing for 2 minutes.
It's for gemma4:e2b (~7 GB).
Tweak according to your needs.
$blobFile = "$env:USERPROFILE\.ollama\models\blobs\sha256-4e30e2665218745ef463f722c0bf86be0cab6ee676320f1cfadf91e989107448"
while ($true) {
    Write-Host "Starting ollama pull..."
    $process = Start-Process -FilePath "ollama" -ArgumentList "pull gemma4:e2b" -PassThru -NoNewWindow
    do {  # wait in 2-minute steps while the blob keeps growing
        $before = (Get-Item $blobFile -ErrorAction SilentlyContinue).Length
        Start-Sleep -Seconds 120
    } while (-not $process.HasExited -and (Get-Item $blobFile -ErrorAction SilentlyContinue).Length -gt $before)
    if ($process.HasExited) { break }      # pull completed on its own
    Stop-Process -Id $process.Id -Force    # stalled for 2 minutes: kill and retry
}
@shafenbadar commented on GitHub (Apr 13, 2026):
I found a simplified IDM workflow for downloading larger models:
Brief: download the big blob with IDM, place it in the models/blobs folder, then run the model with ollama; ollama will complete the prerequisites automatically and run the model.
Details:
Step 1: Get the manifest to find the big blob hash (the biggest file in the model):
Invoke-RestMethod "https://registry.ollama.ai/v2/library/<model>/manifests/<tag>" |
    Select-Object -ExpandProperty layers |
    Select-Object mediaType, digest,
        @{N='Size(MB)';E={[math]::Round($_.size/1MB,2)}},
        @{N='Size(GB)';E={[math]::Round($_.size/1GB,3)}}
Example (gemma4:e2b):
Invoke-RestMethod "https://registry.ollama.ai/v2/library/gemma4/manifests/e2b" |
    Select-Object -ExpandProperty layers |
    Select-Object mediaType, digest,
        @{N='Size(MB)';E={[math]::Round($_.size/1MB,2)}},
        @{N='Size(GB)';E={[math]::Round($_.size/1GB,3)}}
Step 2 - Download ONLY the big blob via IDM (or a browser):
URL: https://registry.ollama.ai/v2/library/gemma4/blobs/sha256:4e30e2665218745ef463f722c0bf86be0cab6ee676320f1cfadf91e989107448
Save folder: C:\Users\<YourName>\.ollama\models\blobs
Filename: sha256-4e30e2665218745ef463f722c0bf86be0cab6ee676320f1cfadf91e989107448
Expected size: 6.67 GB
Step 3 - Just run it:
ollama run <model>:<tag>
Example: ollama run gemma4:e2b "What is 2+2? Reply in one sentence."
Ollama will automatically fetch the remaining small layers (manifest, license, params, config).
Key insight discovered:
The only reason to use IDM is the big model blob; everything else (manifest, license, params, config) is tiny and ollama handles it in seconds.
Just IDM the big file → put it in the blobs folder → ollama run <model>. ✅
@ShivaMultiarmed commented on GitHub (Apr 30, 2026):
The issue occurred when I had no available space on the disk. It succeeded when I switched download folders.