[GH-ISSUE #1097] How to create a model from a Modelfile when the model is split into multiple .bin files? #546

Closed
opened 2026-04-12 10:14:02 -05:00 by GiteaMirror · 2 comments

Originally created by @kasperjunge on GitHub (Nov 12, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1097

I have a pretty basic question. I want to run [this model](https://huggingface.co/mhenrichsen/hestenettetLM) with Ollama.

I download it from Hugging Face Hub using this script:

````python
from huggingface_hub import snapshot_download

model_id = "mhenrichsen/hestenettetLM"
snapshot_download(
    repo_id=model_id,
    local_dir="hestenettetLM",
    local_dir_use_symlinks=False,
    revision="main",
)
````

Then I get a dir that looks like this:

````
.
├── README.md
├── added_tokens.json
├── config.json
├── generation_config.json
├── pytorch_model-00001-of-00002.bin
├── pytorch_model-00002-of-00002.bin
├── pytorch_model.bin.index.json
├── special_tokens_map.json
├── tokenizer.model
└── tokenizer_config.json
````
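For context on what those two `.bin` files are: they are shards of a single checkpoint, and `pytorch_model.bin.index.json` maps every tensor name to the shard that stores it. A minimal sketch of that layout (the dictionary below is a hypothetical miniature, not the real index file):

````python
import json

# Hypothetical miniature of pytorch_model.bin.index.json: the real file
# lists every tensor in the model under "weight_map".
index = {
    "metadata": {"total_size": 14401446912},
    "weight_map": {
        "model.embed_tokens.weight": "pytorch_model-00001-of-00002.bin",
        "lm_head.weight": "pytorch_model-00002-of-00002.bin",
    },
}

# The distinct shard files together form one logical checkpoint.
shards = sorted(set(index["weight_map"].values()))
print(shards)
````

So a tool that consumes this format has to read the index and merge the shards; the shards are not independently loadable models.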

I now want to make a Modelfile so I can run it with Ollama.

[This guide](https://github.com/jmorganca/ollama/blob/main/docs/import.md) instructs me to create a Modelfile with the following content:

````
FROM ./q4_0.bin
````

And then run:

````bash
ollama create example -f Modelfile
````

However, my model is split into two .bin files. I put my Modelfile in the downloaded model dir and I have tried:

1.

````
# Modelfile
FROM ./pytorch_model-00001-of-00002.bin ./pytorch_model-00002-of-00002.bin

# Terminal output
pulling manifest  Error: pull model manifest: Get "https://./v2/pytorch_model-00001-of-00002.bin%!/(MISSING)pytorch_model-00002-of-00002.bin/manifests/latest": dial tcp: lookup .: no such host
````

2.

````
# Modelfile
FROM ./pytorch_model-00001-of-00002.bin
FROM ./pytorch_model-00002-of-00002.bin

# Terminal output
parsing modelfile
looking for model
⠋ creating model layer  Error: invalid file magic
````

3.

````
# Modelfile
FROM ./pytorch_model-00001-of-00002.bin

# Terminal output
parsing modelfile
looking for model
⠋ creating model layer  Error: invalid file magic
````

4.

````
# Modelfile
FROM .

# Terminal output
parsing modelfile
looking for model
⠋ creating model layer  Error: invalid file magic
````

My question:

How can I create a model from a Modelfile when the model is split into multiple .bin files?


@jmorganca commented on GitHub (Nov 13, 2023):

Hi there, `ollama create` doesn't yet support loading PyTorch models directly, but you can import using the `quantize` helper tool: https://github.com/jmorganca/ollama/blob/main/docs/import.md

It will support loading PyTorch models directly in the future, so keep an eye on this issue: https://github.com/jmorganca/ollama/issues/1112
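[Editor's note] A rough sketch of the conversion route the import guide describes, using the llama.cpp project to merge the sharded PyTorch checkpoint into a single GGUF file. Script and binary names (`convert.py`, `quantize`) reflect llama.cpp's late-2023 layout and may have changed since; the output filenames below are arbitrary choices:

````shell
# Assumes a local clone of llama.cpp next to the hestenettetLM directory.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# convert.py reads pytorch_model.bin.index.json and merges all
# .bin shards into one GGUF file, so no manual merging is needed.
python convert.py ../hestenettetLM --outfile ../hestenettetLM/f16.gguf

# Optional: quantize to 4-bit to shrink the model.
make quantize
./quantize ../hestenettetLM/f16.gguf ../hestenettetLM/q4_0.gguf q4_0
````

The Modelfile then points at the single converted file, e.g. `FROM ./q4_0.gguf`, followed by `ollama create hestenettetLM -f Modelfile`.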


@kasperjunge commented on GitHub (Nov 14, 2023):

@jmorganca thanks!

Reference: github-starred/ollama#546