[GH-ISSUE #8045] Ollama run hf.co - Error 401: Invalid username or password #51655

Closed
opened 2026-04-28 20:42:27 -05:00 by GiteaMirror · 9 comments

Originally created by @bengrau on GitHub (Dec 11, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/8045

What is the issue?

I am using a private model on Hugging Face and trying to run it like this:

```
huggingface-cli login --token hf_xxx
ollama run hf.co/BGR/Llama-3.2-1B-I-p:latest
```

However, I get this error from Ollama:

```
pulling manifest
Error: pull model manifest: 401: {"error":"Invalid username or password."}
```

Has anyone experienced this issue or knows how to solve it?

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

ollama version is 0.5.1

GiteaMirror added the bug label 2026-04-28 20:42:27 -05:00

@zaidalyafeai commented on GitHub (Dec 20, 2024):

Hey @bengrau, have you solved this? I want to access a gated model on HuggingFace.

<!-- gh-comment-id:2556750583 -->

@pdevine commented on GitHub (Dec 20, 2024):

I'm going to close this since there's not really any solution on the Ollama side for this. I'm not sure how HuggingFace implemented that; Ollama has a particular way that it does access control (with ed25519 public/private keys) which I'm guessing they haven't implemented.

As a workaround, you can pull the gguf file directly and create a Modelfile with the line:

```
FROM Llama-3.2-1B-I-p.gguf # or whatever the file is called
```

and then running:

```
ollama create my-model
```
<!-- gh-comment-id:2557784338 -->
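Put together, the workaround above looks something like this (a sketch; the repo and file names are the ones from this issue, so substitute your own — the download and `ollama create` steps are shown as comments because they need network access and a running Ollama):

```shell
# 1) Download the GGUF directly (requires a logged-in huggingface-cli):
#    huggingface-cli download BGR/Llama-3.2-1B-I-p Llama-3.2-1B-I-p.gguf --local-dir .
# 2) Write a minimal Modelfile that points at the local file:
printf 'FROM ./Llama-3.2-1B-I-p.gguf\n' > Modelfile
cat Modelfile
# 3) Build a local model from it:
#    ollama create my-model -f Modelfile
```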

@bengrau commented on GitHub (Dec 21, 2024):

Couldn't find a solution and did it the way @pdevine described it.

<!-- gh-comment-id:2558000625 -->

@michetonu commented on GitHub (Jan 2, 2025):

Had the same problem; it turns out you need to add your Ollama SSH key to your Hugging Face account. You can follow the instructions on [this page](https://huggingface.co/docs/hub/en/ollama#run-private-ggufs-from-the-hugging-face-hub):

> You can run private GGUFs from your personal account or from an associated organisation account in two simple steps:
>
> - Copy your Ollama SSH key, you can do so via: `cat ~/.ollama/id_ed25519.pub | pbcopy`
> - Add the corresponding key to your Hugging Face account by going to [your account settings](https://huggingface.co/settings/keys) and clicking on Add new SSH key.
> - That’s it! You can now run private GGUFs from the Hugging Face Hub: `ollama run hf.co/{username}/{repository}`.
<!-- gh-comment-id:2567852366 -->

@Master-Pr0grammer commented on GitHub (Apr 4, 2025):

I did what @michetonu suggested but I'm still getting the same error. I'm trying to download the new quantization-aware-trained Gemma 3 4B model Google released, but I keep getting this error.

Edit: as a temporary solution, someone downloaded the Gemma 3 QAT models and re-uploaded them here: https://www.reddit.com/r/LocalLLaMA/comments/1jqyfs9/ollama_fix_gemma312bitqatq4_0gguf/

<!-- gh-comment-id:2778555198 -->

@kkishore9891 commented on GitHub (Apr 5, 2025):

> I did what @michetonu suggested but I'm still getting the same error. I'm trying to download the new quantization-aware-trained Gemma 3 4B model Google released, but I keep getting this error.

Same. The `pbcopy` command is apparently a macOS command. We Linux users are ffed.

<!-- gh-comment-id:2780866805 -->

@ed7coyne commented on GitHub (Apr 19, 2025):

Just copy the output of that `cat` and paste it into the box in the Hugging Face UI. This is similar to the GitHub workflow for SSH keys.

<!-- gh-comment-id:2816782514 -->

@JamesClarke7283 commented on GitHub (Apr 23, 2025):

> I'm going to close this since there's not really any solution on the Ollama side for this. I'm not sure how HuggingFace implemented that; Ollama has a particular way that it does access control (with ed25519 public/private keys) which I'm guessing they haven't implemented.
>
> As a workaround, you can pull the gguf file directly and create a Modelfile with the line:
>
> ```
> FROM Llama-3.2-1B-I-p.gguf # or whatever the file is called
> ```
>
> and then running:
>
> ```
> ollama create my-model
> ```

It won't work; it doesn't convert the chat-template metadata. I tried with glm4-Z1-9b, and it just gave the following template in the Modelfile:

```
TEMPLATE {{ .Prompt }}
```

That model was a reasoning one, so even if it worked somewhat, it wouldn't have reasoning, as it needed something in the template to activate it.
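One way around that default template (a sketch; the tags below are placeholders, not GLM-4's real ones) is to set the template yourself in the Modelfile, translating the chat template from the model's `tokenizer_config.json` on Hugging Face into Ollama's Go-template syntax:

```
FROM ./glm4-Z1-9b.gguf

# Placeholder template -- replace the <|...|> tags with the ones from the
# model's actual chat template (see its tokenizer_config.json).
TEMPLATE """{{ if .System }}<|system|>{{ .System }}{{ end }}<|user|>{{ .Prompt }}<|assistant|>"""
```

After `ollama create`, the applied template can be inspected with `ollama show`.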

<!-- gh-comment-id:2825428063 -->

@JamesClarke7283 commented on GitHub (Apr 23, 2025):

> > I did what @michetonu suggested but I'm still getting the same error. I'm trying to download the new quantization-aware-trained Gemma 3 4B model Google released, but I keep getting this error.
>
> Same. The pbcopy command is apparently a macOS command. We Linux users are ffed.

Lol, good one.

Regardless, it raises a good point, so I will adapt the command for GNU/Linux and, while I am at it, some other platforms; see below for the guide.

## Most free/libre POSIX Systems (Linux, BSD, Hurd (: )

## xclip (X11)

```sh
cat ~/.ollama/id_ed25519.pub | xclip -sel c
```

## wl-clipboard (Wayland)

```sh
cat ~/.ollama/id_ed25519.pub | wl-copy
```

## Windows

### Windows Terminal

```batch
type %USERPROFILE%\.ollama\id_ed25519.pub | clip
```

### Powershell

```powershell
Get-Content $env:USERPROFILE\.ollama\id_ed25519.pub | Set-Clipboard
```

### For Copilot+PC/Recall

Recall users have a shortcut.

If you haven't disabled Recall, it might have a copy stored (if you ever opened it). You could ask Copilot to retrieve the pub-key and put it in your clipboard, or anything else ever displayed on-screen... and that's not frightening at all!


Hope someone finds this guide useful.

Good day ;)


<img src="https://github.com/user-attachments/assets/bdb019e1-678f-4ca1-b06b-b0376fc778a1" alt="'This is fine' by KC Green" width="512" height="288">

### Cropped Version of **_['This is fine'](https://gunshowcomic.com/648) by [KC Green](https://kcgreendotcom.com/) of [GunShowComic](https://gunshowcomic.com/)_**

<!-- gh-comment-id:2825516129 -->

Reference: github-starred/ollama#51655