[GH-ISSUE #14586] Qwen3.5 GGUF #9457

Closed
opened 2026-04-12 22:23:00 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @Eb7CAPJi on GitHub (Mar 3, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14586

What is the issue?

When trying to download the model `hf.co/unsloth/Qwen3.5-4B-GGUF:Q8_0`, I receive the error message: `Error: 500 Internal Server Error: unable to load model`. This happens with all Qwen3.5 models, even though Hugging Face indicates that these models are supported by Ollama.
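A minimal reproduction sketch using the standard `ollama` CLI (the exact server-side cause is not shown in this report, and the log paths below are assumptions based on Ollama's default Windows install):

```shell
# Confirm the installed Ollama version (0.17.5 in this report)
ollama --version

# Pull the quantized model directly from Hugging Face;
# this is the step that fails with
# "Error: 500 Internal Server Error: unable to load model"
ollama pull hf.co/unsloth/Qwen3.5-4B-GGUF:Q8_0

# Running the model triggers the same load failure
ollama run hf.co/unsloth/Qwen3.5-4B-GGUF:Q8_0
```

On Windows, the server log (typically under `%LOCALAPPDATA%\Ollama`) usually contains the underlying load error and would be the relevant output for the empty log section below.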

Relevant log output


OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.17.5

GiteaMirror added the bug label 2026-04-12 22:23:00 -05:00

Reference: github-starred/ollama#9457