[GH-ISSUE #5939] Error: invalid file magic when trying to import gte-Qwen2-7B-instruct gguf model to ollama instance #3705

Closed
opened 2026-04-12 14:31:08 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @CHNVigny on GitHub (Jul 25, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5939

What is the issue?

I got this error:
root@bccf6f1eb00f:/data/models# ollama create gte_qwen2:7b -f Modelfile
transferring model data
Error: invalid file magic
This is my Modelfile:
FROM gte_qwen2.gguf
TEMPLATE "{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ .Response }}<|im_end|>
"
PARAMETER stop <|im_start|>
PARAMETER stop <|im_end|>

I converted and quantized it with the latest llama.cpp.
How can I import this model to ollama?

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.48

GiteaMirror added the bug label 2026-04-12 14:31:08 -05:00
Author
Owner

@rick-github commented on GitHub (Jul 25, 2024):

What commands did you run to quantize the model?

Author
Owner

@rick-github commented on GitHub (Jul 26, 2024):

I can confirm that the quantized model fails to load.

I downloaded the model from [Alibaba-NLP/gte-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct), converted it to GGUF with ghcr.io/ggerganov/llama.cpp:full-cuda--b1-de28008, and then quantized it to Q4_K_M. When trying to create the ollama model from the quantized model, ollama spends some time in 'transferring model data' and then fails with 'invalid file magic'. Creating an ollama model from the unquantized model succeeds, although the quality of the responses is not great:

$ ollama show gte_qwen2:7b 
  Model                   
  	arch            	qwen2 	  
  	parameters      	7.6B  	  
  	quantization    	F16   	  
  	context length  	131072	  
  	embedding length	3584  	  
  	                        
  Parameters              
  	stop	"<|im_start|>"	      
  	stop	"<|im_end|>"  	      
  	                        
$ ollama run gte_qwen2:7b "why is the sky blue?"
 Sky blue color sky blue sky blue sky blue sky blue sky blue sky blue sky blue sky blue sky blue sky blue sky blue sky blue sky blue sky blue sky blue sky blue sky blue sky blue sky blue sky 
blue sky blue sky blue sky blue sky blue sky blue sky blue sky blue sky blue sky blue sky blue sky blue sky blue sky blue sky blue sky^C
$ ollama ps
NAME        	ID          	SIZE 	PROCESSOR      	UNTIL   
gte_qwen2:7b	d0099597f2b1	15 GB	30%/70% CPU/GPU	Forever
$ ollama -v
ollama version is 0.2.8

The 'invalid file magic' error comes from a header check in llm/ggml.go: https://github.com/ollama/ollama/blob/ae27d9dcfd32b7fbaa0d5a1fb0126106873332bf/llm/ggml.go#L311

A brief reading of this function indicates that the error occurs when the first four bytes of the file don't match any of the expected magic numbers, yet the source files do begin with the GGUF magic (FILE_MAGIC_GGUF_LE when the header is read as a little-endian uint32):

$ hd Models-7.6B-F16.gguf | head -1
00000000  47 47 55 46 03 00 00 00  53 01 00 00 00 00 00 00  |GGUF....S.......|
$ hd ggml-model-Q4_K_M.gguf | head -1
00000000  47 47 55 46 03 00 00 00  53 01 00 00 00 00 00 00  |GGUF....S.......|

So it seems there's a transform happening during the 'transferring model data' phase which results in the file not being recognized.
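For illustration, here is a minimal sketch of that kind of header check in Go. The constants match ollama's FILE_MAGIC_GGUF_LE/FILE_MAGIC_GGUF_BE from llm/ggml.go; the real DecodeGGML goes on to parse the whole GGUF container, so this is only the first step of what ollama does:

```go
// magiccheck.go: a stripped-down sketch of the header check in ollama's
// llm/ggml.go. It only verifies the first four bytes of the file.
package main

import (
	"encoding/binary"
	"fmt"
	"log"
	"os"
)

const (
	fileMagicGGUFLE = 0x46554747 // bytes "GGUF" read as a little-endian uint32
	fileMagicGGUFBE = 0x47475546 // bytes "GGUF" read as a big-endian uint32
)

func main() {
	f, err := os.Open(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var magic uint32
	if err := binary.Read(f, binary.LittleEndian, &magic); err != nil {
		log.Fatal(err)
	}

	switch magic {
	case fileMagicGGUFLE, fileMagicGGUFBE:
		fmt.Println("GGUF magic found")
	default:
		log.Fatal("invalid file magic") // the error reported in this issue
	}
}
```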

Author
Owner

@CHNVigny commented on GitHub (Jul 26, 2024):

> What commands did you run to quantize the model?

I ran make in llama.cpp and used this command:

llm/llama.cpp/llama-quantize converted.bin gte_qwen2.gguf q4_0

converted.bin was produced by this command:

python llm/llama.cpp/convert_hf_to_gguf.py ./model --outtype f16 --outfile converted.bin

where ./model is a clone of gte_qwen2_7b_instruct.

Author
Owner

@arkohut commented on GitHub (Oct 21, 2024):

I got similar error here: https://github.com/OpenBMB/MiniCPM-V/issues/634

Author
Owner

@rick-github commented on GitHub (Nov 11, 2024):

The problem here is that the llama.cpp quantizer pads the output with null bytes until its length is a multiple of 32 bytes. The llama.cpp inference engine ignores the trailing bytes, but ollama does not, because it wants to be able to process concatenated GGUF files, and the pad bytes confuse it.
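As a quick sketch of that alignment rule (an illustration of the claim above, not llama.cpp's actual writer code):

```go
package main

import "fmt"

// pad returns how many zero bytes are needed to round size up to the
// next 32-byte boundary (0 if the size is already aligned).
func pad(size int64) int64 {
	return (32 - size%32) % 32
}

func main() {
	fmt.Println(pad(100)) // 28: a 100-byte file would be padded to 128 bytes
}
```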

The solution is to remove null bytes from the end of the file until the import succeeds. In the case of this model, I had to remove 24 bytes. I did this in three steps of 8 bytes each, on the assumption that 64 bits is a natural underlying data size on a 64-bit machine:

# try importing immediately after converting with llama.cpp
$ ollama create gte_qwen2:7b-q4_k_m
transferring model data 100% 
Error: invalid file magic
# check trailing bytes
$ xxd -s -32 ggml-model-Q4_K_M.gguf
117023ce0: a9a5 a552 9eac 6980 0000 0000 0000 0000  ...R..i.........
117023cf0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
# remove some bytes and check
$ truncate -s -8 ggml-model-Q4_K_M.gguf
$ ollama create gte_qwen2:7b-q4_k_m
transferring model data 100% 
Error: invalid file magic
# didn't work, remove more bytes and check again
$ truncate -s -8 ggml-model-Q4_K_M.gguf
$ ollama create gte_qwen2:7b-q4_k_m
transferring model data 100% 
Error: invalid file magic
# still no luck, try again
$ truncate -s -8 ggml-model-Q4_K_M.gguf
$ ollama create gte_qwen2:7b-q4_k_m
transferring model data 100% 
using existing layer sha256:b403e0ff4ee7bfd31e8ca8a15fb175905be4765a52d6b5eb507016e0010c9f5d 
using existing layer sha256:1d7b30221ae85347af6055c4b0a783cc6e3de68ffb6e665eb29eeaa6a332d8cd 
using existing layer sha256:f02dd72bb2423204352eabc5637b44d79d17f109fdb510a7c51455892aa2d216 
creating new layer sha256:5d60404947d0451b8a2150ebf065bd247eb15564f1a16b3b827bc09c7f9c8378 
writing manifest 
success 
# see if the model works
$ curl -s localhost:11434/api/embed -d '{"model":"gte_qwen2:7b-q4_k_m","input":"Great success!"}' | jq '.embeddings=[.embeddings[]|length]'
{
  "model": "gte_qwen2:7b-q4_k_m",
  "embeddings": [
    3584
  ],
  "total_duration": 247745703,
  "load_duration": 2109291,
  "prompt_eval_count": 3
}

On Windows, you can use [`fsutil`](https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/fsutil-file) to remove bytes, where <size> is the new (smaller) file length in bytes:

FSUTIL file seteof ggml-model-Q4_K_M.gguf <size>
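
For cases that need more than a few iterations, here is a rough Go sketch (a hypothetical helper, not part of ollama or llama.cpp) that automates the same 8-byte trimming loop. It assumes the pad is all zeros and could over-trim if real tensor data happens to end in zero bytes, so run it on a copy of the file and verify with `ollama create` afterwards:

```go
// trimpad.go: strip trailing zero bytes from a GGUF file in 8-byte steps,
// automating the manual `truncate -s -8` loop shown above.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
)

func main() {
	f, err := os.OpenFile(os.Args[1], os.O_RDWR, 0)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	info, err := f.Stat()
	if err != nil {
		log.Fatal(err)
	}
	size := info.Size()

	zeros := make([]byte, 8)
	buf := make([]byte, 8)
	for size >= 8 {
		// Look at the last 8 bytes; stop as soon as they contain data.
		if _, err := f.ReadAt(buf, size-8); err != nil {
			log.Fatal(err)
		}
		if !bytes.Equal(buf, zeros) {
			break
		}
		size -= 8
		if err := f.Truncate(size); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("truncated to %d bytes\n", size)
	}
}
```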
Author
Owner

@rick-github commented on GitHub (May 21, 2025):

Fixed via #10722
