[GH-ISSUE #12799] ollama cloud models aren't working #34249

Open
opened 2026-04-22 17:41:16 -05:00 by GiteaMirror · 11 comments

Originally created by @PCUser2022 on GitHub (Oct 28, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12799

Originally assigned to: @gr4ceG on GitHub.

What is the issue?

Cloud models are broken for me.

```
Connecting to 'qwen3-vl:235b' on 'ollama.com' ⚡
>>> What's the best artifact set for Hu Tao?
Error: 500 Internal Server Error: unmarshal: invalid character 'I' looking for beginning of value
```

Ollama is failing to decode the server response.

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.12.3

GiteaMirror added the bug label 2026-04-22 17:41:16 -05:00

@theuncivilizedarchive commented on GitHub (Oct 28, 2025):

Are you able to update your Ollama version? That might be part of the issue.
But error 500 is an internal server error, so it could be caused by an issue or maintenance on Ollama's side, or possibly by a timeout or resource exhaustion.

Also, the model you are using, qwen3-vl:235b, is a very large model, and other people have had issues with it. Try other models such as llama3:8b.


@marabgol commented on GitHub (Oct 29, 2025):

I have the same issue on macOS (M2/M3). No matter how small or large the model is, I get the same 500 error. I even downloaded a GGUF of qwen3-VL that others reported as working; it still has the issue with the current Ollama version.


@pankaj89 commented on GitHub (Oct 29, 2025):

Same for me. I'm seeing this on an M4 Mac mini when trying to run the minimax-m2 cloud model:

```
ollama run minimax-m2:cloud
```

It failed with:

```
Error: 500 Internal Server Error: unmarshal: invalid character 'I' looking for beginning of value
```

Edited:
Fixed by logging in to Ollama, since it's a cloud model.


@jaeyoung0509 commented on GitHub (Oct 30, 2025):

I'm experiencing the same issue with the Qwen series models


@theDiverDK commented on GitHub (Oct 30, 2025):

Same issue with `minimax-m2:cloud` :(


@matthewoates commented on GitHub (Oct 30, 2025):

I had this issue when attempting to run `minimax-m2:cloud` for free during the short-term promotion. I had never logged in before.

The fix for me, as mentioned by @pankaj89, was to log in. A more accurate error message would be helpful here.

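The confusing failure mode above suggests a simple client-side improvement: attempt the JSON decode, and on failure surface the raw response body (which often explains the real problem) instead of the unmarshal error. A hypothetical sketch of such a fallback (`decodeOrRaw` is an illustrative name, not Ollama's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// decodeOrRaw tries to decode a response body as JSON. If the body is not
// JSON (e.g. a plain-text error page from the server or a proxy), it returns
// an error quoting the raw text so the user sees the actual server message.
func decodeOrRaw(body []byte, dst any) error {
	if err := json.Unmarshal(body, dst); err != nil {
		return fmt.Errorf("server returned non-JSON response: %q", body)
	}
	return nil
}

func main() {
	var v map[string]any
	fmt.Println(decodeOrRaw([]byte("Internal Server Error"), &v))
	// server returned non-JSON response: "Internal Server Error"
	fmt.Println(decodeOrRaw([]byte(`{"ok":true}`), &v))
	// <nil>
}
```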

@theDiverDK commented on GitHub (Oct 31, 2025):

This morning it worked, but `brew upgrade` installed a new Ollama version, so maybe the bug has been fixed?


@PCUser2022 commented on GitHub (Oct 31, 2025):

The issue happens randomly. It sometimes starts working, but after some time it breaks again.


@bosborne commented on GitHub (Nov 1, 2025):

Same issue with minimax-m2. Latest Ollama, 0.12.9, on macOS Tahoe 26.0.1.


@abulka commented on GitHub (Nov 8, 2025):

  • In terminal: `ollama run kimi-k2:1t-cloud` → Works correctly
  • In Ollama GUI: Select `kimi-k2:1t-cloud` from the model list → Error occurs

Expected behavior: the model should run in the GUI as it does via the terminal.


@electroheadfx commented on GitHub (Feb 26, 2026):

```bash
ollama --version
ollama version is 0.17.0

ollama run kimi-k2:1t-cloud
Error: ollama cloud is disabled: remote model details are unavailable
```

Reference: github-starred/ollama#34249