[GH-ISSUE #6357] Error: unknown data type: U8 #66027

Closed
opened 2026-05-03 23:39:56 -05:00 by GiteaMirror · 28 comments

Originally created by @YaBoyBigPat on GitHub (Aug 14, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6357

I'm having trouble converting my llama 3.1 model to ollama; here's the error I get:
PS C:\Users\ljjx> ollama create -q Q4_K_M llama3.1q4 -f "C:\Users\ljjx\HFModels\Modelfile"
transferring model data
converting model
Error: unknown data type: U8

here's how I set up the Modelfile:

FROM C:\Users\ljjx\HFModels\Meta-Llama-3.1-8B-Instruct-4bit
TEMPLATE """{{ if .Messages }}
{{- if or .System .Tools }}<|start_header_id|>system<|end_header_id|>
{{- if .System }}

{{ .System }}
{{- end }}
{{- if .Tools }}

You are a helpful assistant with tool calling capabilities. When you receive a tool call response, use the output to format an answer to the original user question.
{{- end }}<|eot_id|>
{{- end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}<|start_header_id|>user<|end_header_id|>
{{- if and $.Tools $last }}

Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.

Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}. Do not use variables.

{{ $.Tools }}
{{- end }}

{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}
{{- else if eq .Role "assistant" }}<|start_header_id|>assistant<|end_header_id|>
{{- if .ToolCalls }}

{{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "parameters": {{ .Function.Arguments }}}{{ end }}
{{- else }}

{{ .Content }}{{ if not $last }}<|eot_id|>{{ end }}
{{- end }}
{{- else if eq .Role "tool" }}<|start_header_id|>ipython<|end_header_id|>

{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}
{{- end }}
{{- end }}
{{- else }}
{{- if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}{{ .Response }}{{ if .Response }}<|eot_id|>{{ end }}"""
PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
PARAMETER stop <|eot_id|>

I don't know what I'm doing wrong here. I also tried it with a mostly blank template and without quantization, but got the same error.


@rick-github commented on GitHub (Aug 14, 2024):

Where did you download Meta-Llama-3.1-8B-Instruct-4bit from?


@YaBoyBigPat commented on GitHub (Aug 14, 2024):

Oh sorry, I should have clarified that part. It's the original Meta-Llama-3.1-8B-Instruct from Meta on Hugging Face; I added the "4bit" part to the name because I quantized it down to test it out. https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct


@rick-github commented on GitHub (Aug 14, 2024):

Just to clarify: is the model in C:\Users\ljjx\HFModels\Meta-Llama-3.1-8B-Instruct-4bit already quantized? If that's the case, you don't need -q Q4_K_M in your ollama create command.


@YaBoyBigPat commented on GitHub (Aug 14, 2024):

Yes, that model is the quantized one, and it's where the safetensors files are. But even if I don't quantize it, I get an error:

ollama create llama3.1personal -f "C:\Users\ljjx\HFModels\Modelfile"
transferring model data
converting model
Error: unknown data type: U8

Here are some screenshots.

![image](https://github.com/user-attachments/assets/6d2c1c00-3120-40b3-8135-1c500449dbb1)

![image](https://github.com/user-attachments/assets/52d344bd-d316-486c-8fa4-1b44e33283d4)


@rick-github commented on GitHub (Aug 14, 2024):

I'm curious, how did you quantize the 4 safetensor files from the original model? Most quantization methods end up with a GGUF format file.

I think what's happening here is that ollama is expecting unquantized safetensors, so if you change the FROM line in your modelfile to point to the original safetensors, the import may work. Alternatively, generate a GGUF with convert_hf_to_gguf.py from llama.cpp and use that instead.


@YaBoyBigPat commented on GitHub (Aug 15, 2024):

You can use Hugging Face to quantize models too, but they won't be as fast as the GGUF format. I couldn't get llama.cpp to work, so I quantized it using bitsandbytes from Hugging Face and just did `quantize_config = BitsAndBytesConfig(load_in_4bit=True)`. Then I saved the tensors and tokenizer to make the safetensors.

![image](https://github.com/user-attachments/assets/0f89d0df-32ea-4201-a858-9ffdd41dc5f8)
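
For reference, a minimal sketch of the flow described above, assuming a recent transformers/bitsandbytes install; the model id and output directory are illustrative rather than the poster's exact script:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"

# Loading with load_in_4bit stores the weights in bitsandbytes' packed format.
quantize_config = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantize_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# save_pretrained serializes the packed 4-bit weights as uint8 ("U8") tensors,
# which is what `ollama create` and convert_hf_to_gguf.py later reject.
model.save_pretrained("Meta-Llama-3.1-8B-Instruct-4bit")
tokenizer.save_pretrained("Meta-Llama-3.1-8B-Instruct-4bit")
```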

I may give llama.cpp another try, but this was the problem I ran into there; I could be doing it wrong, though.

![image](https://github.com/user-attachments/assets/0653017f-59b5-4019-b018-defbf5f6b54d)


@rick-github commented on GitHub (Aug 15, 2024):

It looks like you are running convert_hf_to_gguf.py on the quantized safetensors. You need to run it on the original tensors, which will create an f16 GGUF file, then quantize that file to q4_K_M with ollama. If you have the original tensors, you can just point the FROM line in your Modelfile at that directory and do `ollama create -q q4_K_M`: ollama will build the f16 GGUF file and then quantize it in one step.
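
Concretely, the two routes look roughly like this (a sketch, assuming the original unquantized safetensors are in C:\Users\ljjx\HFModels\Meta-Llama-3.1-8B-Instruct, the path from earlier in the thread, and that llama.cpp is checked out locally; output names are illustrative):

```
# Route 1: convert to an f16 GGUF with llama.cpp, then quantize with ollama
python llama.cpp\convert_hf_to_gguf.py "C:\Users\ljjx\HFModels\Meta-Llama-3.1-8B-Instruct" --outtype f16 --outfile llama3.1-f16.gguf
# Modelfile contains: FROM .\llama3.1-f16.gguf
ollama create -q Q4_K_M llama3.1q4 -f Modelfile

# Route 2: point the Modelfile straight at the original safetensors directory
# Modelfile contains: FROM C:\Users\ljjx\HFModels\Meta-Llama-3.1-8B-Instruct
ollama create -q Q4_K_M llama3.1q4 -f Modelfile
```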


@YaBoyBigPat commented on GitHub (Aug 15, 2024):

Nice, thank you. I'll try what you suggested and let you know how it goes.


@YaBoyBigPat commented on GitHub (Aug 16, 2024):

Okay, so I tried it with the original model without any quantization and got an error: "AttributeError: 'GGUFWriter' object has no attribute 'get_total_parameter_count'". I know I'm doing something wrong, I just don't know what it is. I cloned llama.cpp from GitHub, if that helps.

![image](https://github.com/user-attachments/assets/54d3c1eb-949d-435d-8e07-aacaa1c061a5)

![image](https://github.com/user-attachments/assets/8f43f191-b503-489a-8dbd-9ae7d240be48)


@YaBoyBigPat commented on GitHub (Aug 17, 2024):

I also just tried it with the updated convert_hf_to_gguf.py files and I'm getting this error instead:

"PS C:\Users\ljjx\HFModels\llama-cpp-python\llama_cpp> python convert_hf_to_gguf.py "C:\Users\ljjx\HFModels\Meta-Llama-3.1-8B-Instruct" --outfile freemodel.gguf
Traceback (most recent call last):
File "C:\Users\ljjx\HFModels\llama-cpp-python\llama_cpp\convert_hf_to_gguf.py", line 3337, in
class T5EncoderModel(Model):
File "C:\Users\ljjx\HFModels\llama-cpp-python\llama_cpp\convert_hf_to_gguf.py", line 3338, in T5EncoderModel
model_arch = gguf.MODEL_ARCH.T5ENCODER
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ljjx\AppData\Local\Programs\Python\Python311\Lib\enum.py", line 786, in getattr
raise AttributeError(name) from None
AttributeError: T5ENCODER"


@YaBoyBigPat commented on GitHub (Aug 17, 2024):

I was able to transfer the original llama 3.1 to ollama, not through llama.cpp, but just through the ollama Modelfile and "ollama create". I'll still try to figure out llama.cpp though, since the GGUF format is way more flexible.


@pdevine commented on GitHub (Aug 23, 2024):

@YaBoyBigPat the `-q Q4_K_M` quantize flag is for quantizing a non-quantized (i.e. fp16 or fp32) model down to that particular quantization level. You don't need to specify it when loading in an already quantized model.

Ollama is capable of converting models without using the `convert_hf_to_gguf.py` script (including llama3.1) directly from safetensors. It actually will also work with pytorch models, but that may be removed in the future. There's a problem in the Meta-Llama-3.1-8B-Instruct repo from HF though, which includes both copies of the safetensors and pytorch files (the pytorch files are in a directory called `original`) and confuses `ollama create`, so it'll spit out a confusing warning. Just remove the `original/` directory and it should work.

To make this work though, create a Modelfile which looks like:

FROM \path\to\the\original\Meta-Llama-3.1-8B-Instruct

I think we should be able to autodetect the correct template, so you don't technically need to include that, but you can include whichever other parameters you want. Then run:

ollama create -q Q4_K_M my-llama3.1

That should make your own quantized version of the llama3.1 model (just make certain you remove the original directory as I mentioned above).


@YaBoyBigPat commented on GitHub (Aug 24, 2024):

> @YaBoyBigPat the -q Q4_K_M quantize variable is to quantize a non-quantized model to that particular quantization level (i.e. fp16 or fp32). You don't need to specify it when loading in an already quantized model.
>
> Ollama is capable of converting models without using the convert_hf_to_gguf.py script (including llama3.1) directly from safetensors. It actually will also work w/ pytorch models, but that may be removed in the future. There's a problem in the Meta-Llama-3.1-8B-Instruct repo from HF though which includes both copies of the safetensors and pytorch files (the pytorch files are in a directory called original) which confuses ollama create and it'll spit out a confusing warning. Just remove original/ directory and it should work.
>
> To make this work though, create a Modelfile which looks like:
>
> FROM \path\to\the\original\Meta-Llama-3.1-8B-Instruct
>
> I think we should be able to autodetect the correct template, so you don't technically need to include that, but you can include whichever other parameters you want. Then run:
>
> ollama create -q Q4_K_M my-llama3.1
>
> That should make your own quantized version of the llama3.1 model (just make certain you remove the original directory as I mentioned above).

Yes, thank you. I was able to quantize and convert the model without loading it in 4-bit or 8-bit with the bitsandbytes library, just using the regular model. I guess I'm slow: I was using the llama-cpp-python repo I cloned from GitHub and not the llama.cpp clone. After I switched, it converted the OG model to a GGUF file. I'll also try it with other tensor types. Thanks for helping me through this.


@Timelessprod commented on GitHub (Aug 27, 2024):

I'm getting the same `unknown data type: U8` error from ollama when creating an assistant from Llama 3.1 after LoRA quantization and fine-tuning with custom data. If I try to convert the exported directory to GGUF with llama.cpp, I also get the same `ValueError: Can not map tensor 'model.layers.0.mlp.down_proj.weight.absmax'` error. What's weird is that I loaded the model from HF with 4-bit quantization, not 8-bit, so I don't know where this U8 is coming from.


@pdevine commented on GitHub (Aug 27, 2024):

@Timelessprod what framework did you use to create the model? Can you provide the ollama create line and the Modelfile, and is it possible to get access to the weights you're using?


@YaBoyBigPat commented on GitHub (Aug 27, 2024):

> I'm getting the same unknown data type: U8 errors from ollama when creating an assistant from Llama-3.1 after LoRA quantization and fine-tuning with custom data. If I try to convert the exported directory to GGUF with Llama.cpp I also get the same ValueError: Can not map tensor 'model.layers.0.mlp.down_proj.weight.absmax' error. What's weird is that I loaded the model from HF with a 4 bits quantization and not 8 so I don't know where this U8 is coming from.

Yeah, something's wrong where it won't recognize the weights after you've loaded a model in any quantization and try to convert it. Did you use the bitsandbytes package to convert your quantized LoRA?


@pdevine commented on GitHub (Aug 28, 2024):

This will throw an error now which will say `unsupported safetensors model`. You can just use the unquantized model directly in ollama and specify the `--quantize` flag to quantize it to 8 bits.

There are no plans to support the bitsandbytes quantization format.


@YaBoyBigPat commented on GitHub (Aug 28, 2024):

> This will throw an error now which will say unsupported safetensors model. You can just use the unquantized model directly in ollama specify the --quantize flag to quantize it to 8 bits.
>
> There are no plans to support the bitsandbytes quantization format.

Sad, would've been cool to transfer an already quantized and more accurate fine-tuned model.


@pdevine commented on GitHub (Aug 28, 2024):

@YaBoyBigPat that's a lot of work to support Yet Another Quantization Format.


@YaBoyBigPat commented on GitHub (Aug 28, 2024):

@pdevine Yeah I get it, I was just sayin. Appreciate the information.


@Timelessprod commented on GitHub (Aug 28, 2024):

> @Timelessprod what framework did you use to create the model? Can you provide the ollama create line and the Modelfile, and is it possible to get access to the weights you're using?

I used Hugging Face's SFTTrainer to fine-tune Meta Llama 3.1 8B with the functions below:

from typing import Tuple

import torch
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, GenerationConfig, \
    LlamaForCausalLM, PreTrainedTokenizerFast, TrainingArguments
from trl import SFTConfig, SFTTrainer

# OUTPUT_MODEL_NAME, EPOCHS and prepare_dataset() are defined elsewhere in the script.

def get_model_and_tokenizer(model_id: str) -> Tuple[LlamaForCausalLM, PreTrainedTokenizerFast]:
    """
    Args:
        model_id: The model id to load from the Hugging Face model hub
    Returns:
        A tuple containing the model and tokenizer
    """
    tokenizer: PreTrainedTokenizerFast = AutoTokenizer.from_pretrained(model_id)
    tokenizer.pad_token = tokenizer.eos_token

    bnb_config: BitsAndBytesConfig = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype="float16",
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_storage="float16"
    )

    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=bnb_config,
        device_map="auto"
    )

    model.config.use_cache = False
    model.config.pretraining_tp = 1

    return model, tokenizer
    
def fine_tune(model: LlamaForCausalLM, tokenizer: PreTrainedTokenizerFast) -> None:
    train_dataset, validation_dataset = prepare_dataset(
        path="data/all.jsonl",
        test_ratio=0.1
    )

    peft_config: LoraConfig = LoraConfig(
        r=8,
        lora_alpha=16,
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM",
    )

    training_arguments: TrainingArguments = TrainingArguments(
        output_dir=OUTPUT_MODEL_NAME,
        overwrite_output_dir=True,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=16,
        optim="paged_adamw_32bit",
        learning_rate=5e-5,
        lr_scheduler_type="cosine",
        save_strategy="epoch",
        logging_steps=10,
        num_train_epochs=EPOCHS,
        max_steps=250,
        fp16=True
    )

    sft_config: SFTConfig = SFTConfig(**training_arguments.to_dict())

    trainer: SFTTrainer = SFTTrainer(
        model=model,
        train_dataset=train_dataset,
        eval_dataset=validation_dataset,
        peft_config=peft_config,
        dataset_text_field="text",
        args=sft_config,
        tokenizer=tokenizer,
        max_seq_length=1024
    )

    trainer.train()

    # Convert the model to float16  <=  I tried this but it doesn't have any effect
    print(trainer.model.parameters())
    for param in trainer.model.parameters():
        if param.dtype == torch.uint8 or param.dtype == torch.int8:
            param.data = param.data.to(torch.float16)

    # Merge the fine-tuned model with the base model
    merged_model = trainer.model.merge_and_unload()

    # Save the merged model
    merged_model.save_pretrained(f'output_models/{OUTPUT_MODEL_NAME}')
    tokenizer.save_pretrained(f'output_models/{OUTPUT_MODEL_NAME}')
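
A rough sketch of an alternative export that avoids the U8 tensors: since bitsandbytes keeps the 4-bit weights packed as uint8 alongside quant-state tensors (the `.absmax` tensor in the llama.cpp error above), casting parameter dtypes in place doesn't dequantize anything. Saving only the LoRA adapter and merging it into an unquantized fp16 base produces F16 safetensors instead. This assumes the adapter was saved separately, e.g. with `trainer.model.save_pretrained("adapter")` (not in the snippet above), and the paths are illustrative:

```py
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"

# Load the base model unquantized (fp16), with no BitsAndBytesConfig.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

# Apply the saved LoRA adapter and merge it into the base weights.
model = PeftModel.from_pretrained(base, "adapter")
merged = model.merge_and_unload()

# The exported safetensors now contain F16 tensors that `ollama create` can read.
merged.save_pretrained("output_models/meta-llama-3.1-8b-finetuned-fp16")
AutoTokenizer.from_pretrained(base_id).save_pretrained("output_models/meta-llama-3.1-8b-finetuned-fp16")
```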

To create the ollama instance, I ran `ollama create <name> -f Modelfile` with the Modelfile below:

FROM ./output_models/meta-llama-3.1-8b-finetuned

TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
"""

PARAMETER stop "<|im_end|>"
PARAMETER stop "<|im_start|>"

PARAMETER temperature 0
PARAMETER top_k 20
PARAMETER top_p 0.7

SYSTEM <Some context>

@Timelessprod commented on GitHub (Aug 28, 2024):

@pdevine In my case I'm limited by resources, which is why I need to rely on quantization, and BitsAndBytes seemed fairly standard (it's the default quantization used when loading models with HF, for example), so that's why I went with it. What quantization algorithm should we use instead? I tried to look at the repo and docs, but the only info I could find is that tensors should be F16, F32 or BF16, which I tried to accomplish with my code above but still ended up with U8. Thank you!


@pdevine commented on GitHub (Aug 28, 2024):

@Timelessprod take a look at the [import docs](https://github.com/ollama/ollama/blob/main/docs/import.md#quantizing-a-model), which explain how to quantize a model. To get an 8-bit quantized model, create a Modelfile which looks like:

FROM /path/to/the/fp16/model

(you can make it FROM . if you just put the Modelfile in the same directory as your model). Then use the command:

ollama create --quantize q8_0 my-model

which will quantize the model into 8 bits. Hopefully that helps. LMK if it works.


@Timelessprod commented on GitHub (Aug 29, 2024):

@pdevine What I meant is that BitsAndBytes is encoding my fine-tuned model with U8 tensors, which are not supported by Ollama. Ollama only handles the F16, BF16 and F32 dtypes, cf. [convert/reader_safetensors.go#L109](https://github.com/ollama/ollama/blob/8e4e509fa4e8e1c49cedfc2754e9a0c9ed0f2fae/convert/reader_safetensors.go#L109). I tried to change every setting possible in the BitsAndBytesConfig of my fine-tuning script but always end up with U8 tensors, and thus the `ollama create` command fails to read the safetensors of the model.

Since for the BF16 dtype the code is the following:

        case "BF16":
		u8s := make([]uint8, st.size)
		if err = binary.Read(f, binary.LittleEndian, u8s); err != nil {
			return 0, err
		}

		f32s = bfloat16.DecodeFloat32(u8s)

Adding the following to the switch could work to read U8:

        case "U8":
		u8s := make([]uint8, st.size)
		if err = binary.Read(f, binary.LittleEndian, u8s); err != nil {
			return 0, err
		}

		f32s = # Encode the uint8 to float32

But I'm not familiar with Go and prefer not to make a custom version of Ollama with a hotfix like this.

So I will try to fine-tune my model with Unsloth instead of Hugging Face to see if I can manage to get rid of that U8 dtype.


@pdevine commented on GitHub (Aug 29, 2024):

@Timelessprod Thanks for the clarification. The problem isn't reading in unsigned 8-bit ints (it would be pretty easy to convert the uint8 values into whatever); it's more that, if those values are quantized a certain way, whether the backend (in this case llama.cpp) can "interpret" the numbers correctly. I'm just not sure how the bitsandbytes quantization works.

That said, Unsloth should work great. I've added some [documentation](https://github.com/ollama/ollama/blob/main/docs/import.md#importing-a-fine-tuned-adapter-from-safetensors-weights) around importing LoRAs. Just make certain (for now) that everything is unquantized.
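
Per those docs, the adapter route looks roughly like this (a sketch, not a verbatim excerpt: it assumes an unquantized safetensors LoRA adapter directory, and the names are illustrative):

```
FROM llama3.1
ADAPTER /path/to/the/safetensors/adapter/directory
```

Then build it with `ollama create my-finetune -f Modelfile`.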


@Timelessprod commented on GitHub (Aug 30, 2024):

Okay I understand thank you. Indeed with Unsloth I have no problem with the dtype used for tensors.


@Huziyou commented on GitHub (Nov 28, 2024):

> Okay I understand thank you. Indeed with Unsloth I have no problem with the dtype used for tensors.

I'm having the same issue. I just directly used a model from unsloth/Llama-3.2-1B-Instruct-bnb-4bit, and it still says Error: unknown data type: U8. Do you have any clue how this works?


@rick-github commented on GitHub (Nov 28, 2024):

Ollama doesn't support the U8 data type from bitsandbytes; try unsloth/Llama-3.2-1B-Instruct instead.
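
For reference, a sketch of that route (assuming huggingface-cli is installed; the local directory, model name and quantization level are illustrative):

```
huggingface-cli download unsloth/Llama-3.2-1B-Instruct --local-dir Llama-3.2-1B-Instruct
# Modelfile contains: FROM ./Llama-3.2-1B-Instruct
ollama create --quantize q4_K_M llama3.2-1b -f Modelfile
```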

Reference: github-starred/ollama#66027