[GH-ISSUE #13314] can't install adapter, says 'Error: no Modelfile or safetensors files found' even though the safetensors are there #8794

Open
opened 2026-04-12 21:34:02 -05:00 by GiteaMirror · 10 comments
Owner

Originally created by @nendraharyo on GitHub (Dec 3, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13314

What is the issue?

I already generated the safetensors adapter inside the final_model directory in my project, but when I tried to install it using ollama create mymodel -f Modelfile with a Modelfile that looks like this:

FROM llama3.2:1b
ADAPTER C:\Users\me\my-project\final_model

It throws the error:
Error: no Modelfile or safetensors files found

Interestingly, when I changed the Modelfile ADAPTER line to point directly at the safetensors file, like this:

FROM llama3.2:1b
ADAPTER C:\Users\me\my-project\final_model\adapter_model.safetensors

it detected the safetensors file, but it still asks for the config file, which exists in the same folder but is not detected:

gathering model components
copying file sha256:807531c5d5f8643e910b44dc2e7f4f89e11c46013b9de72bebaa8f00724d1eec  100%
converting adapter
Error: open adapter_config.json: The system cannot find the file specified.

Relevant log output


OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.13.1

GiteaMirror added the bug label 2026-04-12 21:34:03 -05:00
Author
Owner

@molbal commented on GitHub (Dec 3, 2025):

Try putting the adapter path in double quotes:

FROM llama3.2:1b
ADAPTER "C:\Users\me\my-project\final_model\adapter_model.safetensors"

Author
Owner

@nendraharyo commented on GitHub (Dec 3, 2025):

Try to put the adapter line in double quotes:

FROM llama3.2:1b
ADAPTER "C:\Users\me\my-project\final_model\adapter_model.safetensors"

thanks, I already tried this but the behavior is still the same

Author
Owner

@molbal commented on GitHub (Dec 3, 2025):

Oh wait, I just read the message - it looks for adapter_config.json, not just the adapter file. What files are in the output directory of the adapter you trained? Some files need to be in the same directory as the adapter.

What may be easier for you is to convert the adapter itself to a GGUF, as that generally does not require the adapter config to be present.

The other option is to save the merged weights and just run inference on the merged GGUF file.

You can do both with llama.cpp helpers.

Author
Owner

@nendraharyo commented on GitHub (Dec 3, 2025):

oh yeah, sorry, I forgot to mention that I trained the adapter using LoRA, and the output directory does have adapter_config.json in it alongside adapter_model.safetensors. I'll try reinstalling llama and if that doesn't fix it then I'll try converting it to GGUF.

Author
Owner

@artemavrin commented on GitHub (Dec 6, 2025):

Same issue

  • Ollama version: 0.13.1
  • Base model: gpt-oss:20b (pulled via ollama pull gpt-oss:20b)
  • Environment: macOS

I created a Modelfile in the working directory:

FROM gpt-oss:20b
ADAPTER ./adapters

SYSTEM "test agent"

In the same directory I have:

  • Modelfile
  • adapters/adapters.safetensors
  • adapters/adapter_config.json

When I run:

ollama create kaskad:20b -f Modelfile

I consistently get:

Error: no Modelfile or safetensors files found

The file is named exactly Modelfile (no extension), and the relative path ./adapters is correct. From the docs, MLX safetensors adapters should be supported, but they are not detected in this setup.

Author
Owner

@artemavrin commented on GitHub (Dec 7, 2025):

The adapter file name needs to match model*.safetensors
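
The rename workaround described above can be scripted. Below is a minimal sketch (the fix_adapter_name helper is hypothetical, not part of Ollama or any library in this thread) that copies the adapter weights to a name matching the model*.safetensors pattern the converter appears to expect, while keeping the original file for other tooling:

```python
import shutil
from pathlib import Path

def fix_adapter_name(adapter_dir: str) -> Path:
    """Copy adapter_model.safetensors to model.safetensors in the same
    directory, so `ollama create` can detect the adapter weights.
    Returns the path to the new file."""
    src = Path(adapter_dir) / "adapter_model.safetensors"
    dst = Path(adapter_dir) / "model.safetensors"
    if not src.exists():
        raise FileNotFoundError(f"no adapter weights at {src}")
    shutil.copy2(src, dst)  # copy2 preserves timestamps/metadata
    return dst
```

After running this against the adapter directory, ADAPTER ./my_adapter/ in the Modelfile should pick up the weights (adapter_config.json must still be present in the same directory).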

Author
Owner

@nendraharyo commented on GitHub (Dec 7, 2025):

that works, thank you so much. I don't have to convert my adapter into GGUF after all. But is there any source for this? If not, they should add it to the docs

Author
Owner

@KiranEswaran commented on GitHub (Dec 13, 2025):

Thanks @artemavrin, worked for me too!

Author
Owner

@fellipgomes commented on GitHub (Dec 30, 2025):

Many thanks @artemavrin.

I'm using Windows 11 with Ollama; I changed the name from adapter_model.safetensors to model.safetensors and it worked.

Author
Owner

@DaGeRe commented on GitHub (Mar 16, 2026):

The same happens on Linux.

All files of the adapter are present:

ls my_adapter
adapter_config.json        chat_template.jinja       processor_config.json  special_tokens_map.json  tokenizer.json
adapter_model.safetensors  preprocessor_config.json  README.md              tokenizer_config.json    tokenizer.model

Using

FROM gemma3:4b
ADAPTER ./my_adapter/

yields

gathering model components
Error: no Modelfile or safetensors files found

And using

FROM gemma3:4b
ADAPTER ./my_adapter/adapter_model.safetensors

yields

gathering model components
copying file sha256:e676fd418bf596931082ce59e27ae09aee5aff1700ace3f3e93a6891605d9d71 100%
converting adapter
Error: open adapter_config.json: no such file or directory


Reference: github-starred/ollama#8794