[GH-ISSUE #14811] Error with huggingface "use the model" link #56077

Closed
opened 2026-04-29 10:14:34 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @JoeYang0406 on GitHub (Mar 13, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14811

What is the issue?

I have verified this in both Windows and macOS environments. Direct execution using the links provided by Hugging Face (e.g. `ollama run hf.co/unsloth/Qwen3.5-4B-GGUF:Q4_K_M`) results in an error (`Error: 500 Internal Server Error: llama runner process has terminated: exit status 2`), but it works fine if I manually download the GGUF file.
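For context, the manual workaround the reporter alludes to is registering a downloaded GGUF with a Modelfile. A minimal sketch, assuming the quantized file has been downloaded from the repo's "Files" tab (the local filename and model name below are hypothetical, not taken from the issue):

```shell
# Assumption: the Q4_K_M GGUF was downloaded by hand from the
# unsloth/Qwen3.5-4B-GGUF repo into the current directory.
# Write a Modelfile whose FROM directive points at the local file:
printf 'FROM ./Qwen3.5-4B-Q4_K_M.gguf\n' > Modelfile
# Register the local GGUF under a (hypothetical) model name:
ollama create qwen3.5-4b-local -f Modelfile
# Running the locally created model avoids the hf.co pull path:
ollama run qwen3.5-4b-local
```

This isolates the failure to the `hf.co/...` pull path, since the same GGUF loads fine once imported locally.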

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-29 10:14:34 -05:00

Reference: github-starred/ollama#56077