[GH-ISSUE #3748] Import from a HF model directly? #64349

Closed
opened 2026-05-03 17:14:19 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @wennycooper on GitHub (Apr 19, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3748

Is it possible to import from a Hugging Face model (given a Hugging Face model card ID) directly?
I don't want to convert it to GGUF.

GiteaMirror added the feature request label 2026-05-03 17:14:19 -05:00

@adrienbrault commented on GitHub (Apr 23, 2024):

Would be nice if the Modelfile supported remote GGUFs:

FROM https://huggingface.co/MaziyarPanahi/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct.Q2_K.gguf
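Until remote URLs are supported there, one workaround is to download the GGUF manually and point FROM at the local file. A minimal sketch, assuming the /resolve/ URL serves the raw file; the output filename and model name are illustrative:

# Fetch the GGUF manually (Hugging Face serves the raw file under /resolve/ rather than /blob/)
curl -L -o Meta-Llama-3-8B-Instruct.Q2_K.gguf \
  https://huggingface.co/MaziyarPanahi/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.Q2_K.gguf

# Point a Modelfile at the downloaded file and create the model
echo 'FROM ./Meta-Llama-3-8B-Instruct.Q2_K.gguf' > Modelfile
ollama create llama3-8b-q2 -f Modelfile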

@ericcurtin commented on GitHub (May 2, 2024):

Agreed, @adrienbrault. If you could just put a .gguf URL in the FROM statement and it got pulled automatically, that would be ideal.


@ericcurtin commented on GitHub (May 4, 2024):

It's implemented here now; see the bottom of the README for usage:

https://github.com/ericcurtin/podman-ollama


@jmorganca commented on GitHub (May 10, 2024):

This is now possible for a small set of architectures:

  • MistralForCausalLM
  • MixtralForCausalLM
  • GemmaForCausalLM

With support for more models (including Llama 3: https://github.com/ollama/ollama/pull/4268) coming soon!


@jmorganca commented on GitHub (May 10, 2024):

To do this, use FROM <model directory> in the Modelfile.
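A minimal sketch of that import; the directory path and model name below are placeholders:

# Modelfile pointing at a local safetensors checkout of a supported architecture
echo 'FROM /path/to/Mistral-7B-Instruct-v0.2' > Modelfile
ollama create my-mistral -f Modelfile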


@mr-gorjan commented on GitHub (Jun 9, 2024):

Update

I downloaded the tokenizer.model file from the base model (Mistral 7B) and moved it into the fine-tuned model's directory. It seems to be converting; I will test it.


I am running Ollama in Docker and trying to import an HF model with the MistralForCausalLM architecture. The Modelfile contains FROM <model directory>.

It throws:

transferring model data
unpacking model metadata
processing tensors
Error: open /root/.ollama/models/blobs/411433471/tokenizer.model: no such file or directory

The model I pulled doesn't have a tokenizer.model file.


@rplescia commented on GitHub (Jul 30, 2024):

> Update
>
> I downloaded the tokenizer.model file from the base model (Mistral 7B) and moved it into the fine-tuned model's directory. It seems to be converting; I will test it.
>
> I am running Ollama in Docker and trying to import an HF model with the MistralForCausalLM architecture. The Modelfile contains FROM <model directory>.
>
> It throws:
>
> transferring model data
> unpacking model metadata
> processing tensors
> Error: open /root/.ollama/models/blobs/411433471/tokenizer.model: no such file or directory
>
> The model I pulled doesn't have a tokenizer.model file.

I've got the same issue.


@mr-gorjan commented on GitHub (Jul 30, 2024):

The above solution worked for me. You need the original model's tokenizer.model file.

  1. Copy it directly into the downloaded finetuned model's directory.
  2. Then try the import again.

Make sure the architecture of the finetuned model matches the original model's; they should be the same. You can find it in the config.json file of each model.
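A sketch of those steps, assuming the fine-tune was downloaded to ./my-finetune and the base model to ./Mistral-7B-v0.1 (both paths and the model name are placeholders):

# Confirm both models declare the same architecture (e.g. MistralForCausalLM)
grep -A 2 '"architectures"' ./Mistral-7B-v0.1/config.json
grep -A 2 '"architectures"' ./my-finetune/config.json

# Copy the base model's tokenizer.model into the fine-tuned model's directory
cp ./Mistral-7B-v0.1/tokenizer.model ./my-finetune/

# Re-run the import
echo 'FROM ./my-finetune' > Modelfile
ollama create my-finetune -f Modelfile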

Reference: github-starred/ollama#64349