[GH-ISSUE #14547] Qwen3.5-27B-GGUF, Qwen3.5-35B-A3B-GGUF, Qwen-Coder-Nex-GGUF #55950

Closed
opened 2026-04-29 10:01:06 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @Eb7CAPJi on GitHub (Mar 2, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14547

What is the issue?

When attempting to load the models Qwen3.5-27B-GGUF, Qwen3.5-35B-A3B-GGUF, and Qwen-Coder-Nex-GGUF, I receive the error: `Error 500 Internal Server Error: unable to load model`. I am using the latest version of Ollama.
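One way to reproduce this outside the CLI is through Ollama's REST API: per the API docs, a `/api/generate` request with an empty prompt forces the model to load, so a 500 here isolates the load failure itself. This is a minimal sketch, not part of the original report; `describe_load_error` and `try_load` are hypothetical helper names, and the default port `11434` is Ollama's documented default.

```python
import json
import urllib.request
import urllib.error

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

def describe_load_error(status: int, body: str) -> str:
    """Turn an HTTP error from the Ollama server into a readable message."""
    if status == 500:
        return f"server could not load the model: {body.strip() or 'no detail'}"
    return f"unexpected HTTP {status}: {body.strip()}"

def try_load(model: str) -> str:
    """Send an empty-prompt generate request, which makes Ollama load the model."""
    payload = json.dumps({"model": model, "prompt": "", "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return "loaded OK: " + resp.read().decode()
    except urllib.error.HTTPError as err:
        # The server's error body usually contains the "unable to load model" text.
        return describe_load_error(err.code, err.read().decode())

if __name__ == "__main__":
    for name in ("Qwen3.5-27B-GGUF", "Qwen3.5-35B-A3B-GGUF", "Qwen-Coder-Nex-GGUF"):
        print(name, "->", try_load(name))
```

If the same 500 appears here, the failure is in the server's model-load path rather than in the CLI.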

Relevant log output


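Since no log output was attached: on Windows, Ollama's troubleshooting docs point to `%LOCALAPPDATA%\Ollama` as the log directory, so the relevant lines should be in `server.log` there (path may differ for non-default installs). A one-line fragment to pull it:

```shell
type "%LOCALAPPDATA%\Ollama\server.log"
```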
OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.17.4

GiteaMirror added the bug label 2026-04-29 10:01:06 -05:00

Reference: github-starred/ollama#55950