[GH-ISSUE #4075] invalid file magic while importing llama3 70b into ollama #28290

Closed
opened 2026-04-22 06:17:20 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @SakuraEntropia on GitHub (May 1, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4075

What is the issue?

The model I used is from https://hf-mirror.com/mradermacher/llama-3-70B-instruct-uncensored-i1-GGUF. The error looks like this:

```
PS D:\Ollama> ollama create llama3:70b -f Modelfile
transferring model data
creating model layer
Error: invalid file magic
```

The model could not be imported into Ollama. Is llama3 supported for importing?
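As a quick sanity check (a sketch, not part of Ollama's tooling): a valid GGUF file begins with the four ASCII bytes `GGUF`, followed by a little-endian uint32 format version. The "invalid file magic" error is raised when those first bytes don't match, which usually indicates a corrupt or incomplete download. Something like this can confirm it locally:

```python
import struct

GGUF_MAGIC = b"GGUF"  # every valid GGUF file starts with these four bytes


def check_gguf(path):
    """Return (is_gguf, version) for the file at `path`.

    If the magic doesn't match, the file is not a (complete) GGUF model
    and `ollama create` will reject it with "invalid file magic".
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != GGUF_MAGIC:
            return False, None
        # The GGUF version follows the magic as a little-endian uint32.
        (version,) = struct.unpack("<I", f.read(4))
        return True, version
```

If this reports `False`, re-downloading the file (and comparing its size against the one listed on the model page) is the first thing to try.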

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.1.32


@pdevine commented on GitHub (May 1, 2024):

Hey @David20080125, usually this happens because the GGUF file is corrupt, or you didn't actually download the file. Can you verify the checksum on the file is correct?
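Verifying the checksum as suggested can be done with a short script; the expected hash comes from the model page or its README on Hugging Face. A minimal sketch (the function name is illustrative, not from Ollama):

```python
import hashlib


def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 of a file, streaming in 1 MiB chunks.

    Streaming matters here: a 70B GGUF can be tens of gigabytes,
    so reading it into memory at once is not an option.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Compare the printed digest against the value published alongside the download; a mismatch means the transfer was truncated or corrupted.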


@SakuraEntropia commented on GitHub (May 1, 2024):

let me check


@oldgithubman commented on GitHub (May 21, 2024):

I am getting the same error with an IQ3_XXS quant I just made. All my other quants work fine. I can try requanting it...

Edit - Requant doesn't work either. IQ3_XXS support seems to be broken


@xwbxxx commented on GitHub (May 26, 2024):

I have the same problem. My ollama version is 0.1.31, and my model is codeqwen-1_5-7b-chat-q2_k.gguf.
I have verified the sha256, and it matches the one in the official readme file.


@joshyan1 commented on GitHub (Jun 25, 2024):

Hey, I-quants have been supported since https://github.com/ollama/ollama/pull/4322. I also cannot seem to reproduce this issue locally. Please re-open this issue if it appears again. Thanks!


Reference: github-starred/ollama#28290