[GH-ISSUE #6858] Unable to load adapter_model.safetensors for Phi3-Medium-128k #30088

Closed
opened 2026-04-22 09:33:09 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @AAndersn on GitHub (Sep 18, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6858

What is the issue?

Trying to load a safetensors adapter file for phi3-medium-128k using a .modelfile. I generated adapter_config.json and adapter_model.safetensors files via LoRA training and copied them into the Ollama Docker container.

I generated the modelfile with `ollama show phi3:medium --modelfile > phi3_med_cim.modelfile` and edited it to add:

# Modelfile generated by "ollama show"
FROM phi3:medium
ADAPTER /home/medium_128k
TEMPLATE "{{ if .System }}<|system|>
...

When trying to create the model, I am calling `ollama create phi3_med_cim --file /home/phi3_med_cim.modelfile`, which throws the following error:

transferring model data 100%
converting model
Error: unsupported architecture

These steps work fine for phi3-mini-4k and llama3.1, but not for phi3-medium.

OS

WSL2

GPU

Nvidia

CPU

Intel

Ollama version

0.3.10 (docker)

GiteaMirror added the bug label 2026-04-22 09:33:09 -05:00
Author
Owner

@rick-github commented on GitHub (Sep 19, 2024):

The model conversion in Ollama supports a limited set of architectures. You may make more progress by using llama.cpp to [convert the base model](https://github.com/ollama/ollama/issues/6628#issuecomment-2329341148) to GGUF and using that in your `FROM` line.
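
A minimal sketch of that workaround, assuming the base weights have been downloaded locally and that llama.cpp's `convert_hf_to_gguf.py` script is used (all paths here are hypothetical examples, not from the original issue):

```
# Convert the downloaded base model to GGUF with llama.cpp
# (source directory and output path are illustrative)
python llama.cpp/convert_hf_to_gguf.py /path/to/Phi-3-medium-128k-instruct \
    --outfile /home/phi3-medium-128k.gguf --outtype f16
```

The modelfile's `FROM` line would then point at the resulting GGUF file instead of `phi3:medium`, so `ollama create` can skip its own architecture conversion:

```
FROM /home/phi3-medium-128k.gguf
ADAPTER /home/medium_128k
```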


Reference: github-starred/ollama#30088