[GH-ISSUE #13399] qwen3-vl:235b-cloud errors out with 'Service Temporarily Unavailable' #8848

Open
opened 2026-04-12 21:38:54 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @Catnip24 on GitHub (Dec 9, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13399

Originally assigned to: @dongluochen on GitHub.

What is the issue?

I've been consistently using the cloud models, primarily larger ones like qwen3-vl:235b-cloud, without any issues until about a week ago. I started getting a flood of errors stating
[TURBO DEBUG] client.chat error: ResponseError('Service Temporarily Unavailable')
whenever I try to use the model. My cloud usage data in the Ollama settings shows that I'm well below the daily or weekly limit. Smaller cloud models thankfully work okay, but the main appeal of using the cloud version was to utilize models that I can't feasibly run on my local setup, like a 235B VLM.

Is this a known issue, a change to the subscription models, or something I can adjust on my end? I'd really appreciate any guidance or updates.
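For context, the failing request is just a plain chat call through the ollama Python client; a minimal sketch of what I'm running (the prompt is illustrative):

```python
# Minimal sketch of the failing call, assuming the ollama Python client
# and a local ollama server signed in to ollama.com for cloud models.
from ollama import Client, ResponseError

client = Client()  # default local server at http://127.0.0.1:11434

try:
    response = client.chat(
        model="qwen3-vl:235b-cloud",  # the cloud model that errors out
        messages=[{"role": "user", "content": "Describe this scene in one sentence."}],
    )
    print(response["message"]["content"])
except ResponseError as err:
    # Surfaces as: ResponseError('Service Temporarily Unavailable')
    print(f"client.chat error: {err.error} (status {err.status_code})")
```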

Relevant log output

Dec 09 12:50:41 catautonoma systemd[1]: Started Ollama Service.
Dec 09 12:50:41 catautonoma ollama[2943463]: time=2025-12-09T12:50:41.748-06:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Dec 09 12:50:41 catautonoma ollama[2943463]: time=2025-12-09T12:50:41.748-06:00 level=INFO source=images.go:522 msg="total blobs: 10"
Dec 09 12:50:41 catautonoma ollama[2943463]: time=2025-12-09T12:50:41.748-06:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
Dec 09 12:50:41 catautonoma ollama[2943463]: time=2025-12-09T12:50:41.748-06:00 level=INFO source=routes.go:1597 msg="Listening on 127.0.0.1:11434 (version 0.13.2)"
Dec 09 12:50:41 catautonoma ollama[2943463]: time=2025-12-09T12:50:41.749-06:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Dec 09 12:50:41 catautonoma ollama[2943463]: time=2025-12-09T12:50:41.749-06:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 40957"
Dec 09 12:50:41 catautonoma ollama[2943463]: time=2025-12-09T12:50:41.908-06:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 39781"
Dec 09 12:50:41 catautonoma ollama[2943463]: time=2025-12-09T12:50:41.962-06:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
Dec 09 12:50:41 catautonoma ollama[2943463]: time=2025-12-09T12:50:41.962-06:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 41259"
Dec 09 12:50:42 catautonoma ollama[2943463]: time=2025-12-09T12:50:42.117-06:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-949de09e-93d5-b538-8975-53a668225d2e filter_id="" library=CUDA compute=8.9 name=CUDA0 description="NVIDIA GeForce RTX 4060 Ti" libdirs=ollama,cuda_v12 driver=12.8 pci_id=0000:01:00.0 type=discrete total="8.0 GiB" available="7.1 GiB"
Dec 09 12:50:42 catautonoma ollama[2943463]: time=2025-12-09T12:50:42.117-06:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="8.0 GiB" threshold="20.0 GiB"
Dec 09 12:50:42 catautonoma ollama[2943463]: [GIN] 2025/12/09 - 12:50:42 | 200 |      33.764µs |       127.0.0.1 | HEAD     "/"
Dec 09 12:50:42 catautonoma ollama[2943463]: [GIN] 2025/12/09 - 12:50:42 | 200 |  195.948317ms |       127.0.0.1 | POST     "/api/pull"

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.13.2

GiteaMirror added the cloud and bug labels 2026-04-12 21:38:54 -05:00
Author
Owner

@dongluochen commented on GitHub (Dec 10, 2025):

@Catnip24 thanks for reporting this issue! We made an infrastructure change on Dec 2nd and it triggered a sensitive rate limiting setting. I added a mitigation 2 hours ago. Let us know if you continue to see this problem.
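In the meantime, if the 503s are transient rate-limit responses, a client-side retry with backoff may ride them out; a rough sketch with the ollama Python client (retry counts and delays are arbitrary, not an official recommendation):

```python
import time
from ollama import Client, ResponseError

def chat_with_retry(client, model, messages, attempts=4, base_delay=2.0):
    """Retry the chat call when the cloud backend returns a transient 503."""
    for attempt in range(attempts):
        try:
            return client.chat(model=model, messages=messages)
        except ResponseError as err:
            # 503 is assumed to map to 'Service Temporarily Unavailable'.
            if err.status_code != 503 or attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff

client = Client()
reply = chat_with_retry(client, "qwen3-vl:235b-cloud",
                        [{"role": "user", "content": "hello"}])
print(reply["message"]["content"])
```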

Author
Owner

@Catnip24 commented on GitHub (Dec 12, 2025):

Thank you for the quick response! Unfortunately I still see the [TURBO DEBUG] client.chat error: ResponseError('Service Temporarily Unavailable') error pop up when using qwen3-vl:235b. Once again, my cloud usage stats on the ollama website don't show that I'm going over the hourly or weekly limit. I also tried some other larger cloud models, like gpt-oss:120b and qwen3-coder:480b through the cloud service, and they both worked without issue. It seems to be just qwen3-vl.
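For reference, the comparison was essentially the same prompt run against each cloud model in a loop (the -cloud tags for the other two models are assumed from the names above):

```python
from ollama import Client, ResponseError

client = Client()
# qwen3-vl:235b-cloud is the one that fails; the other tags are assumed.
models = ["qwen3-vl:235b-cloud", "gpt-oss:120b-cloud", "qwen3-coder:480b-cloud"]

for model in models:
    try:
        client.chat(model=model, messages=[{"role": "user", "content": "ping"}])
        print(f"{model}: ok")
    except ResponseError as err:
        print(f"{model}: {err.error}")
```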

Author
Owner

@dongluochen commented on GitHub (Dec 13, 2025):

Yes, we noticed that some models' performance was impacted by the infrastructure change. We are working on it and will update you.


Reference: github-starred/ollama#8848