[GH-ISSUE #14724] [Question] Why is GLM-5 only available as a cloud model and not for local download? #35281

Open
opened 2026-04-22 19:40:39 -05:00 by GiteaMirror · 5 comments

Originally created by @gogo25171 on GitHub (Mar 8, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14724

Hi Ollama team,

I am a regular user of Ollama, and I have a technical question regarding how certain models are listed in your library.

While comparing different open-source models, I noticed a difference in availability. For instance, on the qwen3.5 page, users can choose between locally downloadable versions (e.g., 27b, 35b, and 122b, with their respective weights in GB) and a cloud version (qwen3.5:cloud). (As a side question: which size do you usually run for the cloud version? The largest one, e.g., 122b, or an even larger one that isn't publicly accessible?)

However, on the glm-5 page (https://ollama.com/library/glm-5), only the glm-5:cloud version is offered.
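
To make the distinction concrete, here is roughly what I mean in CLI terms (a rough sketch: the tags are the ones listed on the library pages, and I assume the :cloud tags require being signed in via `ollama signin`):

```shell
# Local tags: the weights are downloaded and inference runs on my machine
ollama pull qwen3.5:27b
ollama run qwen3.5:27b

# Cloud tag: nothing to download, inference runs on Ollama's servers
ollama run qwen3.5:cloud

# For GLM-5, only the cloud tag is listed in the library
ollama run glm-5:cloud
```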

Since GLM-5 is also an open-source model, I would love to understand why it isn't available for local download on Ollama.

Out of curiosity, what are the technical criteria or the decision-making process for determining which models can be executed locally versus those that are limited to your cloud integration?

Thank you very much for your time and the fantastic work you do!

Thank you in advance, best regards

GiteaMirror added the question label 2026-04-22 19:40:40 -05:00

@ryanmon1 commented on GitHub (Mar 9, 2026):

> Hi Ollama team,
>
> I am a regular user of Ollama, and I have a technical question regarding how certain models are listed in your library.
>
> While comparing different open-source models, I noticed a difference in availability. For instance, on the qwen3.5 page, users can choose between locally downloadable versions (e.g., 27b, 35b, and 122b, with their respective weights in GB) and a cloud version (qwen3.5:cloud). (As a side question: which size do you usually run for the cloud version? The largest one, e.g., 122b, or an even larger one that isn't publicly accessible?)
>
> However, on the glm-5 page (https://ollama.com/library/glm-5), only the glm-5:cloud version is offered.
>
> Since GLM-5 is also an open-source model, I would love to understand why it isn't available for local download on Ollama.
>
> Out of curiosity, what are the technical criteria or the decision-making process for determining which models can be executed locally versus those that are limited to your cloud integration?
>
> Thank you very much for your time and the fantastic work you do!
>
> Thank you in advance, best regards

Because they want to sell subscriptions.


@Igorgro commented on GitHub (Mar 9, 2026):

It is available here: https://ollama.com/frob/glm-5. Don't forget to read the README on that page.
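
For reference, community uploads in a user namespace are pulled with the same CLI as library models (a minimal sketch; per the README caveat above, this particular upload may not run yet):

```shell
# Pull the community upload from the frob user namespace
ollama pull frob/glm-5

# Then run it like any local model
ollama run frob/glm-5
```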


@gogo25171 commented on GitHub (Mar 9, 2026):

Hello, indeed I had misread the page. But from what I understand, in this case the upload is not officially supported and is not yet usable, as stated in its description (the issue to make it work with Ollama is still open).

But this still brings me back to my question: if they deploy it in the cloud, why not also make it available for local deployment with the Ollama application?


@neofob commented on GitHub (Mar 11, 2026):

Is it possible to import the GLM-5 model from Hugging Face (https://huggingface.co/zai-org/GLM-5) into our local Ollama?

Is there a script or a guide for how to do that?

@rick-github commented on GitHub (Mar 12, 2026):

https://github.com/ollama/ollama/blob/main/docs/import.mdx#Importing-a-GGUF-based-model-or-adapter
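
Following that guide, the import flow looks roughly like this (a minimal sketch: glm-5.gguf is a placeholder for a GGUF conversion of the weights, and whether current Ollama builds support GLM-5's architecture is exactly what the open issue mentioned above is about):

```shell
# Point a Modelfile at a local GGUF file (placeholder name)
echo 'FROM ./glm-5.gguf' > Modelfile

# Create a local model from the GGUF file, then run it
ollama create glm-5-local -f Modelfile
ollama run glm-5-local
```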