[GH-ISSUE #4217] how to load adapter #64665

Closed
opened 2026-05-03 18:28:00 -05:00 by GiteaMirror · 2 comments

Originally created by @taozhiyuai on GitHub (May 7, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4217

What is the issue?

How do I load an adapter?

The Modelfile is the following:

FROM ./sha256-b6f248eff2d0c4f85d2f6369a27d99fc75686d67314a0b5d35a93c5aee5dcb14
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""
PARAMETER num_keep 24
PARAMETER num_ctx 1040000
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
ADAPTER ./adapter_model.safetensors
The error output is:

taozhiyu@603e5f4a42f1 Llama-3-70B-Gradient-1048k-adapter % ollama create llama3:70b-instruct-1mb-q8_0 -f modelfile
transferring model data
creating model layer
creating template layer
creating adapter layer
Error: invalid file magic
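
As a quick sanity check (a sketch, assuming the adapter file sits in the current directory), the magic bytes show what the error refers to: Ollama expects a GGUF adapter, and a safetensors file starts with a JSON header length rather than the ASCII magic "GGUF":

```sh
# A GGUF file starts with the 4-byte magic "GGUF"; a safetensors file starts
# with an 8-byte little-endian header size followed by a JSON header, so a
# GGUF magic check rejects it.
head -c 16 ./adapter_model.safetensors | xxd
```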

The adapter is from https://hf-mirror.com/cognitivecomputations/Llama-3-70B-Gradient-1048k-adapter

Can anyone help with this issue?

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.1.32

GiteaMirror added the bug label 2026-05-03 18:28:00 -05:00

@jmorganca commented on GitHub (May 7, 2024):

Hi @taozhiyuai, adapters must be GGUF files at the moment; this is something we'll improve in the future!

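A sketch of that workflow, assuming a local llama.cpp checkout; the script name and flags vary by llama.cpp version (recent versions ship convert_lora_to_gguf.py, older ones had convert-lora-to-ggml.py, which emits the older GGML format), so treat the invocation below as illustrative:

```sh
# Convert the Hugging Face LoRA adapter directory (safetensors weights plus
# adapter_config.json) into a single GGUF file that Ollama's ADAPTER
# directive can read. Check the script's --help in your llama.cpp checkout
# for the exact options it expects.
python convert_lora_to_gguf.py ./Llama-3-70B-Gradient-1048k-adapter \
    --outfile ./adapter_model.gguf
```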

@taozhiyuai commented on GitHub (May 10, 2024):

> Hi @taozhiyuai, adapters must be GGUF files at the moment; this is something we'll improve in the future!

So if I just convert a LoRA from HF to GGUF with llama.cpp, I can then attach it in the Ollama Modelfile? @jmorganca

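For reference, the steps that question implies (a sketch, assuming the conversion above produced ./adapter_model.gguf):

```sh
# In the Modelfile, point ADAPTER at the converted GGUF instead of the
# safetensors file:
#   ADAPTER ./adapter_model.gguf
# then re-run the same create command:
ollama create llama3:70b-instruct-1mb-q8_0 -f modelfile
```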

Reference: github-starred/ollama#64665