Mirror of https://github.com/ollama/ollama.git, synced 2026-05-07 08:30:05 -05:00
[GH-ISSUE #15453] Ollama Cloud Pro: 95% failure rate across all cloud models — service is unusable #71938
Open
opened 2026-05-05 03:04:44 -05:00 by GiteaMirror · 40 comments
Labels: cloud
Originally created by @KUANKEI21 on GitHub (Apr 9, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15453
Originally assigned to: @jmorganca on GitHub.
Environment
Ollama installed via brew. Cloud models tested: glm-5.1:cloud, kimi-k2.5:cloud, qwen3.5:cloud, deepseek-v3.2:cloud.
Problem
Ollama Cloud is effectively unusable. Both the /api/chat and /api/generate endpoints return empty responses or time out for all cloud models. This is not model-specific: every single cloud model exhibits the same behavior.
Reproduction
Simple test — 5 sequential requests per model, 20-second timeout:
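A minimal sketch of such a test, assuming a local Ollama daemon on the default port and the non-streaming /api/chat endpoint (the model name is one of the four tested):

#!/usr/bin/env bash
# Five sequential non-streaming /api/chat requests with a 20-second timeout;
# count how many come back HTTP 200. Timed-out requests report code 000.
MODEL="glm-5.1:cloud"
ok=0
for i in 1 2 3 4 5; do
  code=$(curl -sS -o /dev/null -w '%{http_code}' --max-time 20 \
    http://localhost:11434/api/chat \
    -d '{"model": "'"$MODEL"'", "messages": [{"role": "user", "content": "hi"}], "stream": false}')
  echo "request $i -> HTTP $code"
  [ "$code" = "200" ] && ok=$((ok + 1))
done
echo "$ok/5 succeeded for $MODEL"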
Results (2026-04-09, ~21:00 UTC+8)
Per-model results were tabulated for glm-5.1:cloud, kimi-k2.5:cloud, qwen3.5:cloud, and deepseek-v3.2:cloud. Earlier in the day, glm-5.1:cloud worked intermittently (2/3 success), so this appears to be a degrading situation.
Both endpoints affected
Tested /api/generate as well: same 0/5 failure rate for glm-5.1:cloud. This rules out a /api/chat-specific bug.
Expected behavior
As a paying Pro subscriber ($20/month), I expect a reasonable success rate (>95%) for cloud model inference. A 5% success rate is not a degraded service — it is a broken service.
What I've ruled out
- Local setup problems (localhost:11434 responds, ollama list shows all cloud models)
- An endpoint-specific bug (/api/chat and /api/generate both fail)
- Prompt size (tested with just "hi"), so it is not a token limit issue
Related issues
This aligns with multiple existing reports.
Requests
- Return Retry-After headers on 503/502 responses so clients can implement proper backoff
@bartlomiejwolk commented on GitHub (Apr 9, 2026):
Same for me. I'm a new Ollama Pro user and I was starting to think this was normal. I'm glad it's not.
@dongluochen commented on GitHub (Apr 10, 2026):
@KUANKEI21 thanks for reporting the issue. Sorry about the experience. Can you re-run your requests and provide the times and request ids (like below) for us to investigate?
@jmorganca commented on GitHub (Apr 10, 2026):
Hi all, I'm sorry for the issues with Ollama's cloud this morning. We've been working hard to increase capacity. It should be improving now, and we'll continue to monitor it.
@KUANKEI21 commented on GitHub (Apr 10, 2026):
@dongluochen Thanks for the quick response and for looking into this.
We don't have a ref because these failures are not 500 Internal Server Error responses; they are 502 Bad Gateway with an empty response body, so no app-level error UUID is returned. Per your own API error documentation, a 502 indicates the request failed at the gateway rather than in the application.
This is a server-side cloud routing issue, not a client-side problem. The requests never reached the application layer that generates ref UUIDs; they failed at the gateway.
Representative failed samples
From our local ~/.ollama/logs/server.log (all times UTC+8; all requests were sequential, one at a time, no concurrency), the failures span /v1/chat/completions, /api/chat, and /api/generate. Full list: 48 total 502s (39 on 04-09, 9 on 04-10).
What the 502 responses look like vs. successful ones
Successful requests return rich tracing headers. The 502 failures return an empty body and no headers, so there is nothing client-side to provide beyond the timestamps above.
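For comparing the two cases, the response headers can be dumped with curl (local endpoint and model name are examples; -D - prints headers to stdout):

# Dump response headers (-D -) and discard the body; run once against a
# working model and once during an incident to compare.
curl -sS -D - -o /dev/null --max-time 60 http://localhost:11434/api/chat \
  -d '{"model": "glm-5.1:cloud", "messages": [{"role": "user", "content": "hi"}], "stream": false}'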
As of 2026-04-10 ~05:30 UTC, the issue is no longer reproducing after the capacity improvements @jmorganca mentioned. This is consistent with a transient capacity/routing incident on the cloud backend.
Could you investigate the 502s using these UTC+8 timestamps against your edge/gateway logs? Happy to provide our account email privately if that helps correlate.
@matholland618 commented on GitHub (Apr 11, 2026):
Same for me. I noticed this yesterday, maybe late Wednesday evening, when I changed my Hermes model to glm-5.1 cloud. I thought it was an issue with that model; it would just freeze up during tasks, or not respond at all. Then I went back to qwen 3.5, and it's doing the same thing.
@orrinwitt commented on GitHub (Apr 12, 2026):
I just switched my nanobot using glm-5.1 from Ollama Cloud to OpenRouter and still got the error. Maybe it's the upstream providers?
@stanleyma610 commented on GitHub (Apr 13, 2026):
Same for me; all Ollama Cloud models get 502s.
@coleman399 commented on GitHub (Apr 13, 2026):
same for me
@jackluo923 commented on GitHub (Apr 13, 2026):
same for me
@dongluochen commented on GitHub (Apr 13, 2026):
@KUANKEI21 thanks a lot for the detailed info. It's helpful! It looks like you use Ollama Cloud through local Ollama, which doesn't log request ids.
Looking at your table, I guess you may be using a client with timeouts. What models do you use? 5s and 20s are relatively short in the LLM world. Requests may take more than 20s to complete, especially for large models and large prompts. If you set a short timeout, some requests will fail even when the backend is responding.
If you continue to see failures, it'd be great if you could run some tests with curl and share the responses, e.g.:
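A test of this shape against the hosted OpenAI-compatible endpoint, assuming an ollama.com API key in OLLAMA_API_KEY (the model name is an example):

# -v prints the status line and response headers, including any ref id.
curl -v https://ollama.com/v1/chat/completions \
  -H "Authorization: Bearer $OLLAMA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "glm-5.1:cloud", "messages": [{"role": "user", "content": "hi"}]}'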
@GrigoriyNestsiarovich commented on GitHub (Apr 14, 2026):
Same for me
"Service Temporarily Unavailable (ref: e1a9b1fd-dd4b-4b03-83b6-e9daefce4b6b) (status code: 503)"
@dongluochen commented on GitHub (Apr 14, 2026):
@GrigoriyNestsiarovich thanks for providing the ref id. This request failed due to capacity constraints. At the top of each hour there is bursty traffic from cron jobs; the request around 2:02am PT failed because of that. Retrying a bit later might go through. Sorry about that. We are working to improve system performance.
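Until capacity improves, client-side backoff can absorb these bursts. A minimal sketch that retries 502/503 responses, honoring a Retry-After header if the server sends one (per the Requests section above, it currently may not) and falling back to exponential delays; endpoint, model, and retry count are examples:

# Retry on 502/503; sleep for Retry-After when present, else 2^attempt seconds.
for attempt in 1 2 3 4 5; do
  headers=$(mktemp)
  code=$(curl -sS -D "$headers" -o /dev/null -w '%{http_code}' \
    https://ollama.com/v1/chat/completions \
    -H "Authorization: Bearer $OLLAMA_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"model": "glm-5.1:cloud", "messages": [{"role": "user", "content": "hi"}]}')
  wait_s=$(awk -F': ' 'tolower($1) == "retry-after" {print $2 + 0}' "$headers")
  rm -f "$headers"
  [ "$code" != "502" ] && [ "$code" != "503" ] && break
  sleep "${wait_s:-$((2 ** attempt))}"
done
echo "final status: $code"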
@ghostmodel commented on GitHub (Apr 16, 2026):
I get an unauthorized 403. The first request goes through, but every attempt after that errors out.
@dongluochen commented on GitHub (Apr 16, 2026):
@ghostmodel can you post the response you get? I need the "ref" to understand the case. Thanks.
@harmssam commented on GitHub (Apr 17, 2026):
Qwen3.5 is completely unusable at the moment.
API call failed after 3 retries: HTTP 500: Error code: 500 - {'error': 'Internal Server Error (ref: cf66a179-44a1-45c9-ab6e-53058a47feef)'}
@ghostmodel commented on GitHub (Apr 17, 2026):
Using Hermes from the command line: the first request succeeds, the second fails.
In the .env file:
OLLAMA_API_KEY=(and my key)
model:
default: minimax-m2.7
provider: ollama-cloud
base_url: https://ollama.com/v1
● hi
Initializing agent...
Hey! How can I help you today?
● hi
⚠️ API call failed (attempt 1/3): APIConnectionError
🔌 Provider: ollama-cloud Model: minimax-m2.7
🌐 Endpoint: https://ollama.com/v1
📝 Error: Connection error.
⏳ Retrying in 2.977621885121035s (attempt 1/3)...
@orrinwitt commented on GitHub (Apr 18, 2026):
I realize this problem has been going on a lot longer, but a look at this uptime chart from OpenRouter does illustrate the bottleneck that the upstream providers run into when a model is in very high demand. This shows uptime for GLM-5.1 on April 17th, 2026.
I still want Ollama to up their cloud game, starting with getting a better handle on the abuse of the free tier, but some of this might otherwise be out of their control unless they're really running their own datacenters.
@ehnwebmaster commented on GitHub (Apr 18, 2026):
Same here, I'm using the free plan.
Ollama API Cloud
Internal Server Error (ref: feef02ad-98d4-4f0c-b3aa-e95604640135)
@PureBlissAK commented on GitHub (Apr 18, 2026):
🤖 Automated Triage & Analysis Report
Issue: #15453
Analyzed: 2026-04-18T18:21:28.393021
Analysis
Implementation Plan
This issue has been triaged and marked for implementation.
@HardStyleMoose commented on GitHub (Apr 21, 2026):
Please, and I mean it from the bottom of my heart ♥: just remove the free tier or dramatically lower it. There are obviously bots set up to create free accounts over VPNs and run multiple sessions, and when rate limited they just fall back to making a new account again; it is a pretty simple workflow to automate with self-trained models. I think that is the reason for the overload, since I was considering the same method myself until I realized how harmful the abuse can be.
@unicornboat commented on GitHub (Apr 22, 2026):
Same here:
{"error":"this model requires a subscription, upgrade for access: https://ollama.com/upgrade (ref: b45ce2fb-7e5e-4d4c-8ab4-5ec893930553)"}
@hasanur-rahman079 commented on GitHub (Apr 22, 2026):
I just upgraded to Pro and hit the same issue; it's now totally unusable. Is this issue still not solved?
@natera commented on GitHub (Apr 24, 2026):
Same problem here on Pro; none of the models work. Any update?
@michael-conrad commented on GitHub (Apr 25, 2026):
Is this related to infinite open socket hangs?
I'm using OpenCode Desktop with Ollama Cloud on the $100/month plan.
I'm repeatedly getting hangs where I have to interrupt the agent and then tell it to resume/continue.
I tried setting the chunk timeout, but that just causes an SSE response failure with no retry mechanism, so it's not effective, especially for autonomous work.
@KayJay89 commented on GitHub (Apr 25, 2026):
Unfortunately I seem to be in the same boat (Pro plan):
┊ 🔀 preparing delegate_task…
[subagent-1] ⚠️ No response from provider for 180s (model: kimi-k2.6, context: ~45,862 tokens). Reconnecting...
[subagent-1] ⚠️ API call failed (attempt 1/3): APIConnectionError
[subagent-1] 🔌 Provider: ollama-cloud Model: kimi-k2.6
[subagent-1] 🌐 Endpoint: https://ollama.com/v1
[subagent-1] 📝 Error: Connection error.
[subagent-1] ⏱️ Elapsed: 241.89s Context: 18 msgs, ~45,863 tokens
[subagent-1] ⏳ Retrying in 2.4s (attempt 1/3)...
[subagent-0] ⚠️ No response from provider for 180s (model: kimi-k2.6, context: ~47,474 tokens). Reconnecting...
[subagent-1] ⚠️ No response from provider for 180s (model: kimi-k2.6, context: ~45,862 tokens). Reconnecting...
[subagent-0] ⚠️ API call failed (attempt 1/3): APIConnectionError
[subagent-0] 🔌 Provider: ollama-cloud Model: kimi-k2.6
[subagent-0] 🌐 Endpoint: https://ollama.com/v1
[subagent-0] 📝 Error: Connection error.
[subagent-0] ⏱️ Elapsed: 241.79s Context: 15 msgs, ~47,475 tokens
[subagent-0] ⏳ Retrying in 3.0s (attempt 1/3)...
[subagent-1] ⚠️ API call failed (attempt 2/3): APIConnectionError
[subagent-1] 🔌 Provider: ollama-cloud Model: kimi-k2.6
[subagent-1] 🌐 Endpoint: https://ollama.com/v1
[subagent-1] 📝 Error: Connection error.
[subagent-1] ⏱️ Elapsed: 486.36s Context: 18 msgs, ~45,863 tokens
[subagent-1] ⏳ Retrying in 4.4s (attempt 2/3)...
✗ [1/3] Desk research: Find passive evidence tha (600.02s)
┊ 🔀 delegate 3 parallel tasks 600.6s [error]
[subagent-1] ⚡ Interrupted during API call.
[subagent-0] ⚡ Interrupted during API call.
✗ [3/3] Desk research: Verify who performs waste (600.02s)
✗ [2/3] Desk research: Find passive evidence tha (600.02s)
[subagent-2] ⚡ Interrupt: cancelling 1 pending concurrent tool(s)
@el-analista commented on GitHub (Apr 27, 2026):
same here this is really bad
@jgervais commented on GitHub (Apr 29, 2026):
It's absolutely unusable. I'm on the Max plan and can't even reliably use a fraction of the usage I've paid for. Even simple prompts to very basic models fail with 503. There's no outward communication on cloud status. There are no updates on capacity planning. I've paid $100 for maximum frustration.
@sektro801 commented on GitHub (Apr 29, 2026):
Same here; this happens 90% of the time, even after I bought the $100 plan.
@harmssam commented on GitHub (Apr 29, 2026):
It seems like regardless of your active plan, whether free, Pro, or Max, your usage priority remains the same; only your quota increases. Seems like a simple fix to me.
Also, it seems like they rush to support the latest model but don't verify that it actually works before publishing it.
@ehnwebmaster commented on GitHub (Apr 30, 2026):
Server overloaded, please retry shortly (ref: 8d2f48b3-c6ca-4e46-a3dc-a5f62c170faa)
Server overloaded, please retry shortly (ref: 2e4e45a7-4861-464d-a445-54b35e27a712)
Server overloaded, please retry shortly (ref: 2b1b087f-42e7-4c7e-8c16-7525088d8d81)
Server overloaded, please retry shortly (ref: 0b6320c9-db83-4bc1-999e-17397010c64b)
Server overloaded, please retry shortly (ref: 50b2d7f1-8d28-4dba-ae5e-7e2020e04b53)
Server overloaded, please retry shortly (ref: ab63c8e2-db11-42f6-bd53-2ddc0af7350e)
Server overloaded, please retry shortly (ref: afa8c463-8379-48d8-85c9-c9bc1c670649)
Server overloaded, please retry shortly (ref: 3dd0c5c1-1171-4ef5-b415-81e39029cc66)
Server overloaded, please retry shortly (ref: 6cf1ddad-b3df-4312-a4be-6bfd7226acc5)
Server overloaded, please retry shortly (ref: 60e38659-e956-4d21-b6d7-ace1138df1ed)
Server overloaded, please retry shortly (ref: a060f16b-3b97-4fbd-bd62-9fb0737f0a59)
Server overloaded, please retry shortly (ref: ad6f801c-77bd-48c4-ab4e-8ac55cc7c833)
Server overloaded, please retry shortly (ref: fb1a536a-eb5a-447d-b3fd-697bc28c98df)
Server overloaded, please retry shortly (ref: 28d18634-1fbc-4a48-9350-506f6706a797)
Server overloaded, please retry shortly (ref: cc25bfa5-70ac-4ee9-9610-2ace6478e519)
@etcircle commented on GitHub (Apr 30, 2026):
I’m seeing Ollama Cloud become unusable from the UK. I’ve tried multiple cloud models, including Kimi, NemoTron, DeepSeek, and Gemma-backed workloads, and requests are either hanging for many minutes or failing to return usable responses.
The same applications and workflows work with other providers, and our local queues / DB sessions are clean, so this does not appear to be a client-side app or context-size issue. It feels like Ollama Cloud capacity, throttling, or routing is currently preventing normal sends from completing.
@etcircle commented on GitHub (May 1, 2026):
There was some usability 3-4 hours ago, but now Ollama Cloud is completely useless again.
@AccidentalJedi commented on GitHub (May 1, 2026):
I'm in the same boat. Waited all month to upgrade to enterprise, and literally after I did it... endless 503 errors.
@AccidentalJedi commented on GitHub (May 1, 2026):
Honestly, if this doesn't improve, and fast, I'm going to be forced to look elsewhere for my daily work models...
and now knowing that ALL tiers are on the same priority levels regardless ... I'm not sure MAX is really worth the extra expense.
@shoehn commented on GitHub (May 1, 2026):
Yes, same for me. I can confirm that it is more or less unusable.
Please implement a status page to transparently inform your paying users about the issues. $100 a month is quite an amount if you are not a company!
@AccidentalJedi commented on GitHub (May 1, 2026):
How much attention is paid to these comments here vs. on Discord? Just curious.
@drdozer commented on GitHub (May 2, 2026):
Same problem, to the point that at times I have to walk away from the service for an hour until it starts working again. FYI, that's acceptable on the free tier, but not if I'm paying.
@almustaphasilvester commented on GitHub (May 2, 2026):
Having the same issue on the pro plan.
@jdudley commented on GitHub (May 2, 2026):
I'm a new user on the Pro plan. I'm having the same issue: Server overloaded, please retry shortly (status code: 503). Not a great first experience with Ollama Cloud Pro.
@nrebytes commented on GitHub (May 2, 2026):
If you have paid for a service, you shouldn't be posting on OSS forums like this one when issues arise. If a product is sold without proper customer support, you don't buy that crap, period. Ollama Cloud seems to be a hack bolted on by a bunch of amateurs, with consistent issues and failures; I would never use it for production projects, ever.