[GH-ISSUE #9580] Getting gathering model components Error: invalid model name when trying to create model from local GGUF #6249

Closed
opened 2026-04-12 17:40:59 -05:00 by GiteaMirror · 4 comments

Originally created by @phiwi on GitHub (Mar 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9580

What is the issue?

Ollama version: 0.5.13

I have a Modelfile located at ~/model_cards/Modelfile that points to a model finetuned with Hugging Face and Unsloth. Inside the Modelfile:

FROM /xxx/outputs/finetunedmodels/unsloth/Meta-Llama-3.1-8B.GGUF/unsloth.Q8_0.gguf

The model directory /prj/LINDA_LLM/outputs/finetunedmodels/unsloth/Meta-Llama-3.1-8B.GGUF/ looks like the following (there is also a Modelfile in it, but it makes no difference whether I use that one or the self-created one above):

├── config.json
├── generation_config.json
├── model-00001-of-00004.safetensors
├── model-00002-of-00004.safetensors
├── model-00003-of-00004.safetensors
├── model-00004-of-00004.safetensors
├── Modelfile
├── model.safetensors.index.json
├── special_tokens_map.json
├── tokenizer_config.json
├── tokenizer.json
└── unsloth.Q8_0.gguf

Now, when running

ollama create -f ./Modelfile  llama3.1-128k-regu:8b

or

ollama create -f Modelfile  llama3.1-128k-regu:8b

inside ~/model_cards/

I get

gathering model components 
Error: invalid model name

I also tried with the pre-made Modelfile from unsloth - same result.

I changed the FROM ... line to

  • /xxx/outputs/finetunedmodels/unsloth/Meta-Llama-3.1-8B.GGUF/ (just the folder)
  • .../outputs/finetunedmodels/unsloth/Meta-Llama-3.1-8B.GGUF/ (relative pathing)
  • /xxx/outputs/finetunedmodels/unsloth/Meta-Llama-3.1-8B_merged_16_bit/ (folder with just safetensors)

and same result.
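
As a quick sanity check, the path named in FROM can be inspected with standard shell tools; a valid GGUF file begins with the ASCII magic bytes GGUF:

ls -l /xxx/outputs/finetunedmodels/unsloth/Meta-Llama-3.1-8B.GGUF/unsloth.Q8_0.gguf
head -c 4 /xxx/outputs/finetunedmodels/unsloth/Meta-Llama-3.1-8B.GGUF/unsloth.Q8_0.gguf   # should print: GGUF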

Relevant log output

gathering model components 
Error: invalid model name

OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-12 17:40:59 -05:00

@rick-github commented on GitHub (Mar 7, 2025):

You can try tracing the program execution to see if it's actually trying to access the files:

# -f follows child processes; --trace=newfstatat limits output to file-stat calls; --signal='!SIGURG' hides the Go runtime's preemption signals
strace -f --trace=newfstatat --signal='!SIGURG' ollama create -f ~/model_cards/Modelfile llama3.1-128k-regu:8b

The other common reason for the invalid model name error is that the name of the new model fails a validation check, e.g. it contains too many ':' characters (llama3.1-128k:regu:8b). The model name you've chosen looks fine, though, unless there are embedded UTF-8 characters that are not being rendered in the post.

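A quick way to rule that out is to dump the raw bytes of the name exactly as it was typed; anything outside plain ASCII will show up in the dump (standard shell tools, using the name from this issue):

printf '%s' 'llama3.1-128k-regu:8b' | od -c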

@phiwi commented on GitHub (Mar 10, 2025):

I could solve it, although the error message was pretty misleading. In my case, since I was running Ollama via Singularity, I had forgotten to bind-mount the paths of both the Modelfile and the actual model into the container.

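For reference, a minimal sketch of such a bind, assuming an image named ollama.sif (a placeholder) and the paths mentioned in this issue; --bind with a single path mounts it at the same location inside the container:

# make both the Modelfile location and the model location visible inside the container
singularity exec \
    --bind ~/model_cards \
    --bind /prj/LINDA_LLM/outputs \
    ollama.sif \
    ollama create -f ~/model_cards/Modelfile llama3.1-128k-regu:8b

Whichever process actually reads the GGUF (the ollama client, or the server if it runs in its own container instance) needs the same paths bound.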

@divinehui commented on GitHub (Mar 27, 2025):

I also ran into this problem:
root@49fb19dc80a4:~/.ollama/models/qwq32b# ollama create -f /root/.ollama/models/qwq32b/Modelfile qwq:32b
gathering model components
copying file sha256:e242184c48437802c44fee7defc33dc26f5d121c72698bce2baf4c706d791c47 100%
Error: path or modelfile are required

The Modelfile is as below:
FROM /root/.ollama/models/qwq32b/qwq-32b-q5_0.gguf

TEMPLATE """{{- if .System }}
<|im_start|>system {{ .System }}<|im_end|>
{{- end }}
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""

SYSTEM """"""

PARAMETER stop <|im_start|>
PARAMETER stop <|im_end|>

How can I solve this problem? Can anyone help?


@SamuelLarkin commented on GitHub (May 8, 2025):

I faced the same issue.
It looks like ollama create -f wants a file that is in the current directory.

ollama create --file Unbabel/TowerInstruct-7B-v0.1/sft/lora/en2fr/123581/merged/ollama.Modelfile Unbabel--TowerInstruct-7B-v0.1-f16.ollama
gathering model components
Error: invalid model name

Moved the Modelfile into the current directory.

cp Unbabel/TowerInstruct-7B-v0.1/sft/lora/en2fr/123581/merged/ollama.Modelfile .

Now it works:

ollama create --file ollama.Modelfile Unbabel--TowerInstruct-7B-v0.1-f16
gathering model components
copying file sha256:0d8a2dbde1f27637b336424090db1ac5854233d63888fe46e3ebe5113192e26b 100%
parsing GGUF
using existing layer sha256:0d8a2dbde1f27637b336424090db1ac5854233d63888fe46e3ebe5113192e26b
using existing layer sha256:a602670d1b49da353a389dffff4b771933a46be7f2162267404808095b9fb9a1
writing manifest
success

ollama --version
ollama version is 0.5.7

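An alternative to copying the Modelfile, assuming the FROM path inside ollama.Modelfile is absolute, is to run the command from the directory that already contains it:

# the subshell keeps the directory change local to this one command
(cd Unbabel/TowerInstruct-7B-v0.1/sft/lora/en2fr/123581/merged && \
    ollama create --file ollama.Modelfile Unbabel--TowerInstruct-7B-v0.1-f16)
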
Reference: github-starred/ollama#6249