[GH-ISSUE #2935] Ollama returns "Error: error loading model" when importing a fine-tuned, converted, and quantized model #63837

Closed
opened 2026-05-03 15:08:17 -05:00 by GiteaMirror · 20 comments

Originally created by @FotieMConstant on GitHub (Mar 5, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2935

Hi everyone, I am having an issue running a fine-tuned, quantized version of Llama 2 on Ollama. I followed all the steps at https://github.com/ollama/ollama/blob/main/docs/import.md.

However, after quantizing the model and creating it in Ollama, I can see my model in the list, but when I run it I get this error:

```bash
Error: error loading model /Users/🤓.ollama/models/blobs/sha256:1c75cbd55211b7505be15c897b3ca1766708e5808558139e1531e182
```

Can someone help with this? I am not sure what is going on; technically it should work.

**ollama version is 0.1.27**
**OS: macOS Sonoma, version 14.3.1, on an Apple M1 chip**


@V4G4X commented on GitHub (Mar 6, 2024):

Same here.

Mine were running a couple of days ago. Not anymore.


@mxyng commented on GitHub (Mar 6, 2024):

Can you share the model? Without knowing the specifics, there's not much we can do to help troubleshoot.


@bmizerany commented on GitHub (Mar 6, 2024):

@FotieMConstant Can you please share a step-by-step minimal reproduction (Modelfile included) by chance?


@FotieMConstant commented on GitHub (Mar 7, 2024):

Sure. I have fine-tuned a version of Llama 2 7B [here](https://huggingface.co/fotiecodes/Llama-2-7b-chat-jarvis). Following the instructions [here](https://github.com/ollama/ollama/blob/main/docs/import.md), I cloned, converted, and quantized the model and got a `quantized.bin` of about 3.8 GB. I then created a Modelfile, imported the quantized file from there, and used it to create my new model, which does not run.

A few things to note: while converting the model, I encountered an error related to vocab size:

```bash
vocab size mismatch (model has 32000, but jarvis-hf/tokenizer.model has 32001).
```

I then followed the fix here -> https://github.com/ggerganov/llama.cpp/issues/3900, went into my Hugging Face model folder, and edited the `config.json` file.

from:

<img width="385" alt="image" src="https://github.com/ollama/ollama/assets/42372656/11318671-f2de-4f3e-bc42-5dec79617be6">

to:

<img width="472" alt="image" src="https://github.com/ollama/ollama/assets/42372656/6f958ba3-3bde-4a54-9611-268879b1fcae">

After this I was able to convert and quantize, but the model won't run on Ollama. By the way, I am able to create and list the model in Ollama; it just doesn't run. Also, all other models downloaded with the `ollama` command work perfectly.

Please let me know if there is anything else you want me to add.
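The manual `config.json` edit described above (bumping `vocab_size` from 32000 to 32001 to match the tokenizer) can also be scripted. A minimal sketch, assuming only the standard library; the helper name `bump_vocab_size` is hypothetical:

```python
import json
from pathlib import Path

def bump_vocab_size(config_path: str, new_size: int) -> int:
    """Rewrite vocab_size in a Hugging Face config.json; return the old value."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    old = config.get("vocab_size")
    config["vocab_size"] = new_size
    path.write_text(json.dumps(config, indent=2))
    return old

# Demo against a throwaway config mirroring the values in this issue.
Path("config.json").write_text(json.dumps({"vocab_size": 32000}))
print(bump_vocab_size("config.json", 32001))                        # → 32000
print(json.loads(Path("config.json").read_text())["vocab_size"])    # → 32001
```

Note that this only papers over the mismatch on the config side; whether the resulting GGUF actually loads still depends on the tokenizer and tensor shapes agreeing.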


@V4G4X commented on GitHub (Mar 7, 2024):

Okay weird behaviour.

It's working now. Hmmm.
I don't remember changing anything.


@bmizerany commented on GitHub (Mar 7, 2024):

@FotieMConstant Do you mind sharing your Modelfile?


@FotieMConstant commented on GitHub (Mar 7, 2024):

Sure, here it is: `FROM ./ollama/quantized.bin`
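As an aside, a Modelfile for a Llama 2 chat fine-tune usually carries more than the `FROM` line. A hedged sketch; the `TEMPLATE` and `PARAMETER` values below are the generic Llama 2 chat format, not taken from this issue:

```
FROM ./ollama/quantized.bin

TEMPLATE """[INST] <<SYS>>{{ .System }}<</SYS>>

{{ .Prompt }} [/INST]"""

PARAMETER stop "[INST]"
PARAMETER stop "[/INST]"
```

Without a matching `TEMPLATE`, a chat fine-tune can load but produce poor output, which is a separate failure mode from the load error reported here.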


@FotieMConstant commented on GitHub (Mar 7, 2024):

> Okay weird behaviour.
>
> It's working now. Hmmm. I don't remember changing anything.

The thing is, when I try to convert the model with llama.cpp I get this error:

```bash
Writing converted.bin, format 1
Traceback (most recent call last):
  File "/Users/🤓/opensource/jarvis/ollama/llm/llama.cpp/convert.py", line 1483, in <module>
    main()
  File "/Users/🤓/jarvis/ollama/llm/llama.cpp/convert.py", line 1477, in main
    OutputFile.write_all(outfile, ftype, params, model, vocab, special_vocab,
  File "/Users/🤓/jarvis/ollama/llm/llama.cpp/convert.py", line 1117, in write_all
    check_vocab_size(params, vocab, pad_vocab=pad_vocab)
  File "/Users/🤓/jarvis/ollama/llm/llama.cpp/convert.py", line 963, in check_vocab_size
    raise Exception(msg)
Exception: Vocab size mismatch (model has 32000, but ../jarvis-hf/tokenizer.model has 32001).
```

That is why I changed it, but now I am starting to think I shouldn't have, as this might be what prevents the model from running in Ollama.
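The check in that traceback can be sketched as follows. This is a simplified simulation, not llama.cpp's actual code; the signature and the `pad_vocab` semantics are assumptions based on the call visible in the traceback:

```python
def check_vocab_size(declared: int, actual: int, pad_vocab: bool = False) -> int:
    """Compare config.json's declared vocab_size against the tokenizer's
    actual token count; return the size to write, or raise on a mismatch."""
    if declared == actual:
        return declared
    if pad_vocab and actual < declared:
        # Padding fills the gap with dummy tokens so tensor shapes line up.
        return declared
    raise ValueError(
        f"Vocab size mismatch (model has {declared}, but tokenizer has {actual})."
    )

# A fine-tune that registered one extra special token (e.g. a pad token)
# ends up with 32001 tokenizer entries against a declared 32000:
try:
    check_vocab_size(declared=32000, actual=32001)
except ValueError as e:
    print(e)  # → Vocab size mismatch (model has 32000, but tokenizer has 32001).
```

The key point: the mismatch comes from the fine-tuning step adding a token, so patching `config.json` afterwards changes what the converter writes without changing what the model's embedding tensors actually contain.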


@FotieMConstant commented on GitHub (Mar 7, 2024):

@bmizerany I noticed this too: the hash has something appended at the end. Is that normal?

<img width="1089" alt="image" src="https://github.com/ollama/ollama/assets/42372656/a8e9386f-54da-435d-aef8-e2cacad78567">

@FotieMConstant commented on GitHub (Mar 12, 2024):

Hey, any update on this issue? Anyone?


@FotieMConstant commented on GitHub (Mar 15, 2024):

Hi there @FYYHU, thanks for your comment. I used an open Jupyter notebook for the fine-tuning; here you can see what I did. Any thoughts?

https://colab.research.google.com/drive/1FTt_Z1eGOsl2VgPVb8pnM4yUTczhSutM?usp=sharing


@FotieMConstant commented on GitHub (Mar 15, 2024):

I did the same here, but it doesn't seem to work. Here is also a notebook that downloads and runs my custom model, which works fine in a Jupyter notebook:

https://colab.research.google.com/drive/19ZuropXXc2_jMC_qxqa8MO4mHHxOqxxe?usp=sharing


@FYYHU commented on GitHub (Mar 15, 2024):

Ok, I'm not sure if this is the bug, but look at the model you pull from NousResearch/Llama-2-7b-chat-hf:

![image](https://github.com/ollama/ollama/assets/44072346/48c76818-8ac3-4af7-9b72-dfef0090d26a)

You can see in the config.json that it has a vocab_size of 32000. What's weird is that when you load the tokenizer, it reports a length of 32001:

![image](https://github.com/ollama/ollama/assets/44072346/dc0a11a3-ade4-49d8-bd8d-99fb5293389a)

I don't know exactly where the bug is, but you could maybe try a different pre-trained Llama chat model?


@FotieMConstant commented on GitHub (Mar 15, 2024):

You're right @V4G4X, I thought as much. Any recommendations for another 7B pre-trained Llama chat model I can use? I'm pretty new to this, and it's been giving me headaches for a while now.


@FotieMConstant commented on GitHub (Mar 15, 2024):

The model from the link you shared above is actually the same: https://huggingface.co/georgesung/llama2_7b_openorca_35k/blob/main/config.json


@V4G4X commented on GitHub (Mar 17, 2024):

@FotieMConstant I'm new to local OS models as well.
My laptop only has 8GB RAM, so I use the 1B tier of models. xD

But if you're taking suggestions, I use starcoder and deepseek-coder for code autocomplete.
And found Gemma the best for chatting.


@saul-jb commented on GitHub (Mar 17, 2024):

I'm getting the same issue with one of the hosted models:

```bash
$ ollama pull yarn-llama2:13b-128k-q5_K_M
pulling manifest 
pulling 6768c57cf9ca... 100% ▕████████████████▏ 9.2 GB                         
pulling 1639d5c1f004... 100% ▕████████████████▏   18 B                         
pulling 4d4cf0639ed3... 100% ▕████████████████▏  310 B                         
verifying sha256 digest 
writing manifest 
removing any unused layers 
success
$ ollama run yarn-llama2:13b-128k-q5_K_M
Error: error loading model /usr/share/ollama/.ollama/models/blobs/sha256:6768c57cf9ca3415c5ba91c6483fe5b2938d660520f1fb8dddeb74e5bae91
```

@FotieMConstant commented on GitHub (Mar 18, 2024):

Update: I was finally able to fix my issue. It wasn't me or the way I trained my model, but an issue with the base model I used for fine-tuning. After a couple of tweaks here and there, and with some help from the llama.cpp community, I was able to fix it.

PS: it might not be a guaranteed solution, but it works :)


@FotieMConstant commented on GitHub (Mar 18, 2024):

> I'm getting the same issue with one of the hosted models:
>
> ```bash
> $ ollama pull yarn-llama2:13b-128k-q5_K_M
> pulling manifest 
> pulling 6768c57cf9ca... 100% ▕████████████████▏ 9.2 GB
> pulling 1639d5c1f004... 100% ▕████████████████▏   18 B
> pulling 4d4cf0639ed3... 100% ▕████████████████▏  310 B
> verifying sha256 digest 
> writing manifest 
> removing any unused layers 
> success
> $ ollama run yarn-llama2:13b-128k-q5_K_M
> Error: error loading model /usr/share/ollama/.ollama/models/blobs/sha256:6768c57cf9ca3415c5ba91c6483fe5b2938d660520f1fb8dddeb74e5bae91
> ```

This could occur for various reasons; you can check the Ollama logs for more information. Beyond that, I'd suggest contacting the developer who pushed the model.


@pdevine commented on GitHub (May 10, 2024):

@FotieMConstant thanks for being persistent here, and sorry for not updating the issue. There have been a number of changes to the `create` command which should make conversions somewhat easier, but it's definitely far from perfect at this point.

Since you were able to resolve the issue, I'll go ahead and close it.

Reference: github-starred/ollama#63837