[GH-ISSUE #7167] Fine-tuned Llama 3.2 1B safe_serialized: Error: json: cannot unmarshal array into Go struct field .model.merges of type string #4549

Open
opened 2026-04-12 15:29:19 -05:00 by GiteaMirror · 22 comments
Owner

Originally created by @brunopistone on GitHub (Oct 10, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7167

What is the issue?

Modelfile:

FROM ./model

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 0.2
PARAMETER top_p 0.9
PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
PARAMETER stop <|eot_id|>

./model:

model-00001-of-00003.safetensors
config.json
generation_config.json
model-00002-of-00003.safetensors
model-00003-of-00003.safetensors
model.safetensors.index.json
special_tokens_map.json
tokenizer_config.json
tokenizer.json

Command:

ollama create llama3.2-1B -f ./Modelfile

Error:

transferring model data 100% 
converting model 
Error: json: cannot unmarshal array into Go struct field .model.merges of type string

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.3.12

GiteaMirror added the createbug labels 2026-04-12 15:29:20 -05:00
Author
Owner

@ShanLing-cool commented on GitHub (Oct 11, 2024):

I also encountered this problem, and I used the latest version of LLaMA-Factory to fine-tune the model.

Author
Owner

@brunopistone commented on GitHub (Oct 11, 2024):

I also encountered this problem, and I used the latest version of LLaMA-Factory to fine-tune the model.

Thanks for the heads up. Fine-tuning with Hugging Face, PEFT, and safetensors is widely used, so it's quite important that Ollama can work with binaries generated by these libraries. LLaMA-Factory is another option, but not the solution.

Author
Owner

@rick-github commented on GitHub (Oct 11, 2024):

What did you use to create the model?

Author
Owner

@farlistener commented on GitHub (Oct 14, 2024):

What did you use to create the model?

I've got the same issue after fine-tuning with LLaMA-Factory:

  • Llama 3.2 (1B or 3B, same issue)
  • SFT
  • any size of training data set

The problem is in the resulting tokenizer.json.

As far as I can see:

in llama 3.2 tokenizer :

   "merges": [
      "Ġ Ġ",
      "Ġ ĠĠĠ",
      "ĠĠ ĠĠ",
      "ĠĠĠ Ġ",
      "i n",
      "Ġ t",
      "Ġ ĠĠĠĠĠĠĠ",

in fine-tuned tokenizer :

    "merges": [
      [
        "Ġ",
        "Ġ"
      ],
      [
        "Ġ",
        "ĠĠĠ"
      ],
      [
        "ĠĠ",
        "ĠĠ"
      ],

The space-separated strings are now two-value arrays.

(I'll try to create a script to repair MY tokenizer to go further, but it now seems that the solution is nearer.)
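A minimal sketch of that repair idea in Python (the "model.merges" field layout is assumed from the tokenizer.json excerpts above; this is a starting point, not a drop-in fix):

import json

# Load the fine-tuned tokenizer (new-style merges: two-element arrays).
with open("tokenizer.json", "r", encoding="utf-8") as f:
    tok = json.load(f)

# Join each ["left", "right"] pair back into the old "left right" string form;
# entries that are already strings are left untouched.
tok["model"]["merges"] = [
    m if isinstance(m, str) else " ".join(m)
    for m in tok["model"]["merges"]
]

with open("repaired_tokenizer.json", "w", encoding="utf-8") as f:
    json.dump(tok, f, ensure_ascii=False, indent=2)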

Author
Owner

@brunopistone commented on GitHub (Oct 14, 2024):

I performed fine-tuning of a LoRA adapter:

  • Distribution: FSDP
  • No mixed precision: params loaded with float32
  • No quantization

This is my requirements.txt:

transformers==4.45.2
peft==0.13.1
accelerate==0.34.2
datasets==2.20.0
evaluate==0.4.1
safetensors>=0.4.3
sentencepiece==0.2.0
scikit-learn==1.5.1
tokenizers>=0.19.1
py7zr

Quick update: the error still exists. I was able to run the fine-tuned model with Ollama ONLY by creating the GGUF version, using https://github.com/ggerganov/llama.cpp.

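For reference, the conversion step would look roughly like this (the script name and flags below are assumed from recent llama.cpp checkouts, not quoted from this thread; verify against the repo you clone):

python convert_hf_to_gguf.py ./model --outfile llama32-1B.gguf --outtype f16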
Then I changed my Modelfile as follows:

FROM ./llama32-1B.gguf

PARAMETER temperature 0.2
PARAMETER top_p 0.9
PARAMETER stop <|end_header_id|>
PARAMETER stop <|eot_id|>
Author
Owner

@farlistener commented on GitHub (Oct 14, 2024):

(I'll try to create a script to repair MY tokenizer to go further, but it now seems that the solution is nearer.)

Here is a little PHP script to "repair" the tokenizer.json file:

<?php

// Load the fine-tuned tokenizer.json produced by the newer tokenizers version.
$content = file_get_contents("tokenizer.json");
$json = json_decode($content, true);

// Join each two-element merge array back into the old space-separated string form.
foreach ($json["model"]["merges"] as $index => $values) {
        $json["model"]["merges"][$index] = implode(" ", $values);
}

// Write the result to a new file so the original tokenizer.json is kept intact.
$content = json_encode($json, JSON_PRETTY_PRINT);
file_put_contents("repared_tokenizer.json", $content);

?>

The repaired tokenizer file is ... repared_tokenizer.json.

Ollama is now happy to create my model, but for the moment the generation of content doesn't stop. I don't know whether it's my fine-tuning that breaks the model, a bad tokenizer configuration (LLaMA-Factory), or Ollama. But a few days ago I trained this same model "by hand" and the fine-tuning worked fine (so it seems to be LLaMA-Factory's fault).

Author
Owner

@prideout commented on GitHub (Oct 15, 2024):

I performed fine-tuning using mlx_lm and I'm running into this issue as well. The fine-tuned model works fine when I test it with mlx_generate.

I wrote a Python script (https://gist.github.com/prideout/292c62334d59875cb3507782bc28c122) that "repairs" the tokenizer JSON, similar to @farlistener, but it doesn't really work because the model spews nonsense.

Author
Owner

@240db commented on GitHub (Oct 23, 2024):

I performed fine-tuning using mlx_lm and I'm running into this issue as well. The fine-tuned model works fine when I test it with mlx_generate.

I wrote a Python script (https://gist.github.com/prideout/292c62334d59875cb3507782bc28c122) that "repairs" the tokenizer JSON, similar to @farlistener, but it doesn't really work because the model spews nonsense.

Sorry, do you mean your script does not work? I was looking for exactly a solution like that.

Author
Owner

@240db commented on GitHub (Oct 23, 2024):

I also encountered this problem, and I used the latest version of LLaMA-Factory to fine-tune the model.

Yes, same here. I am trying to load fine-tuned models I made with LLaMA-Factory... it would be awesome if we could open them in Ollama! I managed to transfer 100% of the model data after fixing some issues, but now Error: json: cannot unmarshal array into Go struct field .model.merges of type string seems to be the new impediment.

Author
Owner

@prideout commented on GitHub (Oct 23, 2024):

My Python script seemed to work at first, but the model started to spew gibberish, so it's not quite right.

Author
Owner

@240db commented on GitHub (Oct 23, 2024):

My Python script seemed to work at first, but the model started to spew gibberish, so it's not quite right.

I see. I tried it, but it didn't solve the problem. However, I trained the model with LLaMA-Factory.

Author
Owner

@240db commented on GitHub (Oct 23, 2024):

I was able to run the fine-tuned model with Ollama ONLY by creating the GGUF version, using https://github.com/ggerganov/llama.cpp.

Other users at LLaMA-Factory suggested the same:

ShanLing-cool commented 2 weeks ago
@yuehua-s I directly imported the .gguf file into ollama, and it is OK not to import the model folder.
For example: FROM /root/.gpt_train/gguf/Megred-Model-Path-8.0B-F16.gguf
https://github.com/hiyouga/LLaMA-Factory/issues/5610

Author
Owner

@hschaeufler commented on GitHub (Oct 28, 2024):

Got the same issue. Two months ago, I could import the model from MLX with Ollama. Now I get the same error:

Error: json: cannot unmarshal array into Go struct field .model.merges of type string

My model is a fine-tuned version of meta-llama/Meta-Llama-3.1-8B-Instruct.

Author
Owner

@hschaeufler commented on GitHub (Oct 28, 2024):

My Python script seemed to work at first, but the model started to spew gibberish, so it's not quite right.

I think I have found a temporary workaround for MLX. It worked for me by not fusing the model with MLX, but instead loading the base model together with the adapter in Ollama.

For example:
ollama create hschaeufler/dartgen-llama-3.1-8B-Instruct:8b-instruct-bf16-v10 -f Modelfile

FROM /Users/admin/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B-Instruct/snapshots/0e9e39f249a16976918f6564b8830bc894c89659
ADAPTER /Volumes/Extreme SSD/dartgen/results/llama3_1_8B_instruct_lora/tuning_10/adapters

Instead of

FROM /Volumes/Extreme SSD/dartgen/results/llama3_1_8B_instruct_lora/tuning_10/lora_fused_model
Author
Owner

@hschaeufler commented on GitHub (Oct 28, 2024):

My Python script seemed to work at first, but the model started to spew gibberish, so it's not quite right.

I think I have found a temporary workaround for MLX. It worked for me by not fusing the model with MLX, but instead loading the base model together with the adapter in Ollama.

For example: ollama create hschaeufler/dartgen-llama-3.1-8B-Instruct:8b-instruct-bf16-v10 -f Modelfile

FROM /Users/admin/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B-Instruct/snapshots/0e9e39f249a16976918f6564b8830bc894c89659
ADAPTER /Volumes/Extreme SSD/dartgen/results/llama3_1_8B_instruct_lora/tuning_10/adapters

Instead of

FROM /Volumes/Extreme SSD/dartgen/results/llama3_1_8B_instruct_lora/tuning_10/lora_fused_model

I was able to narrow down the problem further. MLX and co. seem to use the transformers library for fusing the model. If I downgrade the transformers library to version 4.44.2 (pipenv install transformers==4.44.2) before I fuse the model, it can be imported into Ollama again as described.
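In rough terms, the sequence would be (the mlx_lm flag names and paths below are assumed, not quoted from this thread; check mlx_lm.fuse --help):

pipenv install transformers==4.44.2
python -m mlx_lm.fuse --model meta-llama/Meta-Llama-3.1-8B-Instruct --adapter-path ./adapters --save-path ./lora_fused_model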

Author
Owner

@sd3ntato commented on GitHub (Oct 29, 2024):

My Python script seemed to work at first, but the model started to spew gibberish, so it's not quite right.

I think I have found a temporary workaround for MLX. It worked for me by not fusing the model with MLX, but instead loading the base model together with the adapter in Ollama.
For example: ollama create hschaeufler/dartgen-llama-3.1-8B-Instruct:8b-instruct-bf16-v10 -f Modelfile

FROM /Users/admin/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B-Instruct/snapshots/0e9e39f249a16976918f6564b8830bc894c89659
ADAPTER /Volumes/Extreme SSD/dartgen/results/llama3_1_8B_instruct_lora/tuning_10/adapters

Instead of

FROM /Volumes/Extreme SSD/dartgen/results/llama3_1_8B_instruct_lora/tuning_10/lora_fused_model

I was able to narrow down the problem further. MLX and co. seem to use the transformers library for fusing the model. If I downgrade the transformers library to version 4.44.2 (pipenv install transformers==4.44.2) before I fuse the model, it can be imported into Ollama again as described.

Sorry, I installed Ollama from the training script; how do I downgrade its transformers package? Or are you referring to downgrading transformers before running mlx_lm.lora?
I'm really struggling to understand your solution.
Thanks in advance!

Author
Owner

@sd3ntato commented on GitHub (Oct 29, 2024):

My Python script seemed to work at first, but the model started to spew gibberish, so it's not quite right.

I think I have found a temporary workaround for MLX. It worked for me by not fusing the model with MLX, but instead loading the base model together with the adapter in Ollama.
For example: ollama create hschaeufler/dartgen-llama-3.1-8B-Instruct:8b-instruct-bf16-v10 -f Modelfile

FROM /Users/admin/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B-Instruct/snapshots/0e9e39f249a16976918f6564b8830bc894c89659
ADAPTER /Volumes/Extreme SSD/dartgen/results/llama3_1_8B_instruct_lora/tuning_10/adapters

Instead of

FROM /Volumes/Extreme SSD/dartgen/results/llama3_1_8B_instruct_lora/tuning_10/lora_fused_model

I was able to narrow down the problem further. MLX and co. seem to use the transformers library for fusing the model. If I downgrade the transformers library to version 4.44.2 (pipenv install transformers==4.44.2) before I fuse the model, it can be imported into Ollama again as described.

This isn't clear to me at all either.

Author
Owner

@hschaeufler commented on GitHub (Oct 29, 2024):

Error: llama runner process has terminated: GGML_ASSERT(src1t == GGML_TYPE_F32) failed ml-explore/mlx-examples#1043

You need to downgrade the transformers library before running mlx_lm.fuse. I hope this comment of mine describes it better: https://github.com/ml-explore/mlx-examples/issues/1043#issuecomment-2442305327

Author
Owner

@hschaeufler commented on GitHub (Oct 29, 2024):

My Python script seemed to work at first, but the model started to spew gibberish, so it's not quite right.

I think I have found a temporary workaround for MLX. It worked for me by not fusing the model with MLX, but instead loading the base model together with the adapter in Ollama.
For example: ollama create hschaeufler/dartgen-llama-3.1-8B-Instruct:8b-instruct-bf16-v10 -f Modelfile

FROM /Users/admin/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B-Instruct/snapshots/0e9e39f249a16976918f6564b8830bc894c89659
ADAPTER /Volumes/Extreme SSD/dartgen/results/llama3_1_8B_instruct_lora/tuning_10/adapters

Instead of

FROM /Volumes/Extreme SSD/dartgen/results/llama3_1_8B_instruct_lora/tuning_10/lora_fused_model

I was able to narrow down the problem further. MLX and co. seem to use the transformers library for fusing the model. If I downgrade the transformers library to version 4.44.2 (pipenv install transformers==4.44.2) before I fuse the model, it can be imported into Ollama again as described.

This isn't clear to me at all either.

What exactly is not clear to you? When you fine-tune a model in MLX, you get an adapter. To import the model into Ollama, you don't necessarily have to fuse it with mlx_lm.fuse; you can instead specify the base model and the adapter folder directly in the Ollama Modelfile before importing. To do this, you need to find out exactly where your base model is located, which is usually somewhere under ~/.cache/huggingface/hub/.
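One way to locate (or fetch) that local snapshot path is via huggingface_hub, along these lines (a hypothetical helper, not part of the workaround above; gated Llama repos also require an authenticated token):

from huggingface_hub import snapshot_download

# Returns the local directory of the cached snapshot, downloading it if missing.
path = snapshot_download("meta-llama/Meta-Llama-3.1-8B-Instruct")
print(path)  # use this path in the Modelfile's FROM line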

Author
Owner

@240db commented on GitHub (Nov 3, 2024):

Hey, just an update, specifically for Llama-3.1-8B fine-tuned using LLaMA-Factory:

https://github.com/hiyouga/LLaMA-Factory/issues/5834

Instead of opening the trained model in Ollama, LLaMA-Factory provides a way to use the OpenAI client with your local model. The template is different for 8B and these smaller models, I assume; the author of the project gave some technical explanation:

llama3 base model's <|im_end|> token is not correctly initialized in llama3 template, so we just use the <|endoftext|> token in default template

So to all of you who had tokenizer issues, but then the model would not stop and would just output gibberish: the problem might be with the <|im_end|> token, as it is not correctly initialized. A lot of people on Hugging Face also reported the issues described in this thread. It seems that Meta's Llama 3.x 8B models, or even the smaller models, might have a different tokenizer structure. When using the llamafactory-cli api you need to use the template default instead of llama3; then it will work as expected.

Author
Owner

@240db commented on GitHub (Nov 3, 2024):

Hey, just an update, specifically for Llama-3.1-8B fine-tuned using LLaMA-Factory:

hiyouga/LLaMA-Factory#5834

Instead of opening the trained model in Ollama, LLaMA-Factory provides a way to use the OpenAI client with your local model. The template is different for 8B and these smaller models, I assume; the author of the project gave some technical explanation:

llama3 base model's <|im_end|> token is not correctly initialized in llama3 template, so we just use the <|endoftext|> token in default template

So to all of you who had tokenizer issues, but then the model would not stop and would just output gibberish: the problem might be with the <|im_end|> token, as it is not correctly initialized. A lot of people on Hugging Face also reported the issues described in this thread. It seems that Meta's Llama 3.x 8B models, or even the smaller models, might have a different tokenizer structure. When using the llamafactory-cli api you need to use the template default instead of llama3; then it will work as expected.

This might provide a hint as to how one should modify the tokenizer to be able to import it into Ollama.

For my use case, since I only trained classifiers, it's not worth running them on Ollama and using them with, say, open-webui... But others might be building fine-tuned models for text-summary-like tasks, so a more integrated solution might be desired.

Author
Owner

@AuroraLHL commented on GitHub (Nov 9, 2024):

Oh, I also have this problem with a Llama 3.1 model fine-tuned with Unsloth:

Error: json: cannot unmarshal array into Go struct field .model.merges of type string


Reference: github-starred/ollama#4549