[GH-ISSUE #15824] Any way to run large model locally instead of cloud? GLM, Deepseek v4, minimax, etc. #56597

Open
opened 2026-04-29 11:04:40 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @berlin2123 on GitHub (Apr 26, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15824

I have a high-performance server equipped with 8x NVIDIA A100 (40GB) GPUs, totaling 320GB of VRAM. I want to run large models like GLM and DeepSeek V4 locally, utilizing my full VRAM capacity, rather than using the cloud mode.

I have downloaded the quantized versions of these models from Hugging Face (HF), which are specifically optimized for local inference.

However, when I try to run them using commands like:

```bash
ollama run hf.co/<username>/<model-name>
ollama run hf.co/unsloth/Qwen3.6-35B-A3B-GGUF:UD-Q8_K_XL
ollama run hf.co/0xSero/GLM-5-REAP-50pct-UD-IQ2_M-GGUF:UD-IQ2_M
```

It always results in an error, such as:

```
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35moe'
# or
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'glm-dsa'
```
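The architecture string in that message comes straight from the GGUF metadata, so one quick check is to dump general.architecture from the downloaded file. A minimal sketch, assuming the gguf Python package from the llama.cpp project is installed and that the downloaded layers sit in Ollama's default blob directory (the package, flag, and path are assumptions; adjust for your setup):

```bash
# install llama.cpp's GGUF inspection tools (assumption: Python/pip available)
pip install gguf

# dump only the metadata and read the declared architecture;
# by default Ollama keeps downloaded HF layers as blobs under ~/.ollama/models/blobs
gguf-dump --no-tensors ~/.ollama/models/blobs/sha256-<layer-digest> | grep general.architecture
```

If the printed value (e.g. 'qwen35moe' or 'glm-dsa') is not one the running Ollama build recognizes, the load fails regardless of how much VRAM is available.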

My Question:
Is there a way to force Ollama to load these models locally instead of attempting to use cloud mode?

Has anyone successfully run such large models locally using Ollama with Hugging Face checkpoints?

GiteaMirror added the feature request label 2026-04-29 11:04:40 -05:00
Author
Owner

@gotnochill815-web commented on GitHub (Apr 27, 2026):

I checked the current Ollama source.

There are two separate cases here:

  • qwen35moe is already present in the current source tree, so that error may indicate an older Ollama release or outdated local build.
  • glm-dsa does not appear in the current source, so it likely is not supported yet.

It may help to retry qwen35moe on the latest release/nightly, while glm-dsa would likely need new architecture support.
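To reproduce that check locally, something like the following could work: confirm which Ollama build is running and grep an Ollama source checkout for the architecture strings from the error messages (the directory names passed to grep are assumptions about the current repo layout):

```bash
# confirm which Ollama build is actually in use
ollama --version

# from a clone of https://github.com/ollama/ollama, search for the architecture
# strings reported by the loader (searched directories are an assumption)
git clone https://github.com/ollama/ollama.git && cd ollama
grep -rIn "qwen35moe\|glm-dsa" llama/ ml/ model/
```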

Author
Owner

@gotnochill815-web commented on GitHub (Apr 27, 2026):

I also opened a small PR to improve the current error wording from unknown model architecture to unsupported model architecture, which may make cases like this clearer for users: https://github.com/ollama/ollama/pull/15838

Author
Owner

@berlin2123 commented on GitHub (Apr 28, 2026):

Thanks
Indeed, qwen35moe is not supported in the main branch of this repository yet; QWEN35MOE cannot be found in https://github.com/ollama/ollama/blob/main/llama/llama.cpp/src/llama-model.cpp
But it is supported in the upstream llama.cpp: https://github.com/ggml-org/llama.cpp/blob/master/src/llama-model.cpp
And the same goes for glm-dsa.

So, is there any way to replace the llama.cpp inside Ollama manually? That is, build from the source of ggml-org/llama.cpp and swap it in for the one inside Ollama.
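One possible stop-gap, rather than swapping the vendored copy, is to serve the GGUF files with upstream llama.cpp directly, since it already recognizes both architectures. A rough sketch using standard llama-server options (the model path, context size, and port are placeholders for this setup):

```bash
# build upstream llama.cpp with CUDA enabled (see its README for prerequisites)
git clone https://github.com/ggml-org/llama.cpp.git && cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j

# serve the HF-downloaded GGUF locally, offloading all layers to the GPUs
./build/bin/llama-server -m /path/to/model.gguf -ngl 999 -c 8192 --host 0.0.0.0 --port 8080
```

This does not make Ollama itself load the model, but it exposes an OpenAI-compatible endpoint on the same machine until the vendored llama.cpp catches up.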

Author
Owner

@gotnochill815-web commented on GitHub (Apr 28, 2026):

I looked into this and it seems the issue is not related to cloud vs local execution, but to missing architecture support in Ollama’s current llama.cpp version.

For example:

  • qwen35moe and glm-dsa are supported in upstream llama.cpp (e.g. recent commits like 752584d5 for GLM DSA)
  • but Ollama vendors its own llama.cpp, which hasn’t integrated those changes yet

So even with sufficient GPU/VRAM, these models will fail locally because the runtime doesn’t recognize the architecture.

I’ve also opened a small PR to improve the error message so this is clearer for users:
#15859

Proper support would likely require syncing or porting upstream llama.cpp changes into Ollama rather than replacing the folder directly.
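For completeness, once such a sync lands on main, the usual way to pick it up before the next release is a from-source build. A rough sketch based on the repository's development docs; exact steps vary by release and platform, so docs/development.md in the repo is authoritative:

```bash
# build Ollama from the current main branch
# (assumes Go, CMake and the CUDA toolkit are already installed)
git clone https://github.com/ollama/ollama.git && cd ollama
cmake -B build && cmake --build build   # builds the GPU runner libraries
go build .                              # builds the ollama binary itself
./ollama serve                          # then run `./ollama run hf.co/...` against this build
```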

Reference: github-starred/ollama#56597