[GH-ISSUE #3246] Error: invalid file magic when importing Safetensors models #48513

Open
opened 2026-04-28 08:46:30 -05:00 by GiteaMirror · 11 comments
Owner

Originally created by @amnweb on GitHub (Mar 19, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3246

Originally assigned to: @pdevine on GitHub.

What is the issue?

```
ollama create test -f Modelfile
transferring model data
creating model layer
Error: invalid file magic
```

This happens for all the Safetensors models I try to import.

Modelfile content
`FROM ./model.safetensors`

![Screenshot 2024-03-19 141058](https://github.com/ollama/ollama/assets/16545063/a807e4f9-dfee-4ff4-bc10-cec44167bf9f)

What did you expect to see?

I expected it to work :)

Steps to reproduce

No response

Are there any recent changes that introduced the issue?

No response

OS

Windows

Architecture

amd64

Platform

No response

Ollama version

0.1.29

GPU

Nvidia

GPU info

No response

CPU

Intel

Other software

No response

GiteaMirror added the bug label 2026-04-28 08:46:30 -05:00

@pdevine commented on GitHub (Mar 19, 2024):

What safetensors model were you trying to import? Right now only Mistral and Mistral fine-tunes are supported. More are coming soon, though!


@amnweb commented on GitHub (Mar 19, 2024):

Oh, okay, I didn't know that. I was just trying random models to see how they work :D

I tried three or four text models, but none of them worked; none of them were Mistral, though.


@pdevine commented on GitHub (Mar 19, 2024):

Sorry about that! I have Gemma now working, but haven't yet sent out the PR. I'll add an error message saying that the other models aren't yet supported.


@pdevine commented on GitHub (Mar 19, 2024):

@amnweb can you list which models you tried? I just realized there should be code to catch that.


@amnweb commented on GitHub (Mar 19, 2024):

I think this is the last one I tried:
https://huggingface.co/google-bert/bert-base-uncased/tree/main
or this one:
https://huggingface.co/pysentimiento/robertuito-sentiment-analysis/tree/main

I already deleted them from disk and can't remember everything I downloaded.

Edit: Btw, this one too; I just found it in my history:
https://huggingface.co/stabilityai/stable-diffusion-2-1/tree/main


@pdevine commented on GitHub (Mar 19, 2024):

Can you post one of the Modelfiles? I'm trying to figure out whether you converted/quantized these yourself or got Ollama to convert the safetensors files.


@amnweb commented on GitHub (Mar 19, 2024):

I downloaded model.safetensors from https://huggingface.co/google-bert/bert-base-uncased/resolve/main/model.safetensors,
created a Modelfile with the content `FROM ./model.safetensors`, and ran the command from the terminal. If I understand correctly, Ollama should convert the safetensors file, or am I thinking wrong?
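As an aside, a `.safetensors` file has no magic string at all: it begins with an 8-byte little-endian header length followed by JSON metadata, which is presumably why a loader expecting a known magic rejects it. A minimal sketch of reading that header (the function name and the file path in the usage example are illustrative, not part of Ollama):

```python
import json
import struct

def read_safetensors_header(path):
    """Return the JSON header of a .safetensors file.

    The safetensors format starts with an 8-byte little-endian
    unsigned integer giving the length of a JSON header, followed
    by the header itself and then the raw tensor data. There is no
    magic string, so a loader that checks for one (e.g. GGUF's)
    will fail with something like "invalid file magic".
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))
```

If this parses without error, the file is at least a structurally valid safetensors file, and the import failure is a format-support issue rather than a corrupt download.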


@mroark1m commented on GitHub (Apr 23, 2024):

I'm having this same issue with Ollama in Docker, using the codellama-7b-instruct.Q4_0.gguf file from https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF. Here is my Modelfile:


```
FROM /models/CodeLlama-7B-Instruct-GGUF/codellama-7b-instruct.Q4_0.gguf
TEMPLATE """[INST] <<SYS>>{{ .System }}<</SYS>>

{{ .Prompt }} [/INST]"""
PARAMETER rope_frequency_base 1000000
PARAMETER stop [INST]
PARAMETER stop [/INST]
PARAMETER stop <<SYS>>
PARAMETER stop <</SYS>>
```
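Since the commenter later found the file was corrupt: GGUF files begin with the 4-byte ASCII magic `GGUF`, so a quick header check distinguishes a bad download from an unsupported format. A minimal sketch (the function name is illustrative, not an Ollama API):

```python
def looks_like_gguf(path):
    """Check whether a file starts with the GGUF magic bytes.

    GGUF files begin with the 4-byte ASCII magic "GGUF". A
    truncated or corrupted download (or an HTML error page saved
    in place of the model) fails this check, which matches the
    "invalid file magic" error reported above.
    """
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"
```

Comparing the file's checksum against the one listed on the Hugging Face model page is another way to rule out a corrupt download.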

@mroark1m commented on GitHub (Apr 23, 2024):

> I'm having this same issue with ollama in docker and using https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF codellama-7b-instruct.Q4_0.gguf file.. here is my modelfile

I had a corrupt file.


@pdevine commented on GitHub (Apr 23, 2024):

@amnweb sorry for the slow response! I somehow lost track of this. I don't believe any of the models you converted will work inside of the llama.cpp runner unfortunately.


@ninghairong commented on GitHub (Jun 14, 2024):

I'm having this same issue with Ollama. I'm using it on Linux 20.04.

Reference: github-starred/ollama#48513