[GH-ISSUE #7240] Pull Private Huggingface Model #66655

Open
opened 2026-05-04 07:44:03 -05:00 by GiteaMirror · 6 comments

Originally created by @DaddyCodesAlot on GitHub (Oct 17, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7240

Hi, so I believe it's now possible to pull huggingface models directly by prepending hf.co to the pull statement. I would just like to get clarity on how this works with private models? I have my huggingface token set as an environment variable, but I can't seem to pull a private model.
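For reference, the `hf.co` pull syntax for public models looks roughly like this (a sketch; the repository names and tag below are placeholders, not from this issue):

```shell
# Pull a GGUF repository directly from Hugging Face (placeholder names):
ollama pull hf.co/<username>/<repo>-GGUF

# A specific quantization can be selected with a tag:
ollama pull hf.co/<username>/<repo>-GGUF:Q4_K_M
```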

GiteaMirror added the feature request label 2026-05-04 07:44:03 -05:00

@pongib commented on GitHub (Nov 4, 2024):

same problem.


@Inigo-13 commented on GitHub (Nov 22, 2024):

Same problem here for private models!

```
Launching ollama model 'hf.co/.../...-GGUF'...
pulling manifest 
Error: pull model manifest: Get (...): unsupported protocol scheme ""
```

I also tried to set a different url using https directly with user/token auth, something like:

```
https://<user>:<token>@huggingface.co/<site>/<model>-GGUF
```

But with no luck!


@spolspol commented on GitHub (Feb 5, 2025):

+1


@Inigo-13 commented on GitHub (Feb 6, 2025):

Well, a few days after my comment, the feature was implemented in the huggingface-hub library!
Look here for more details: https://huggingface.co/docs/hub/en/ollama#run-private-ggufs-from-the-hugging-face-hub
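Per that Hugging Face doc, the flow for private GGUFs uses Ollama's SSH key rather than an environment variable. A hedged sketch (the key path is the default location and may differ per install; repo names are placeholders):

```shell
# Print Ollama's public SSH key (default location; may differ per install):
cat ~/.ollama/id_ed25519.pub

# Add that key under Hugging Face account settings -> SSH keys,
# then pull the private repo as usual (placeholder names):
ollama pull hf.co/<username>/<private-repo>-GGUF
```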


@joe-speedboat commented on GitHub (Jan 17, 2026):

@Inigo-13
I hit the rate limit even though I followed the guide you mentioned.
I assumed authenticated users are recognized by Hugging Face, but maybe it works differently.
Does anyone know more about this issue, or is anyone having the same problem?

Thanks
Chris

```
ollama@spark:# ollama run hf.co/unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF:Q4_K_M
pulling manifest 
Error: pull model manifest: 429: {"error":"We had to rate limit your IP (1.1.1.1). To continue using our service, create a HF account or login to your existing account, and make sure you pass a HF_TOKEN if you're using the API."

ollama@spark:~# ollama -v
ollama version is 0.14.2
```
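The 429 suggests the requests are arriving at Hugging Face unauthenticated. One way to sanity-check a token independently of ollama is Hugging Face's identity endpoint, `https://huggingface.co/api/whoami-v2` (a sketch; assumes `HF_TOKEN` holds your own token):

```shell
# Verify the token is valid and identifies your account:
curl -s -H "Authorization: Bearer $HF_TOKEN" https://huggingface.co/api/whoami-v2
```

If this returns your account details, the token itself is fine and the problem is on the pull path.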


@rlutsch18 commented on GitHub (Jan 18, 2026):

Same problem here.
A few weeks ago I had no problems pulling Hugging Face models.

```
ollama pull hf.co/bartowski/nvidia_NVIDIA-Nemotron-Nano-12B-v2-GGUF:Q8_0
pulling manifest
Error: pull model manifest: 429: {"error":"We had to rate limit your IP (10.0.182.203). To continue using our service, create a HF account or login to your existing account, and make sure you pass a HF_TOKEN if you're using the API."}

ollama -v
ollama version is 0.14.2
```

Thanks!
Richi


Reference: github-starred/ollama#66655