[GH-ISSUE #2096] How is Tinyllama on Ollama trained? #1201

Closed
opened 2026-04-12 10:58:44 -05:00 by GiteaMirror · 9 comments

Originally created by @oliverbob on GitHub (Jan 19, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2096

Hi everyone, as always, thank you for the great work you have done with this project for the good of humanity. I have tried importing a GGUF file of tinyllama from Hugging Face, but when I chat with it using Ollama, it returns gibberish. But when I download the one from Ollama with ollama pull/run tinyllama, it works great!

Question:

Can I possibly request access to how the training data is fed into this tinyllama Ollama model, since it is open source? One of the reasons I'm interested is research on function calling.

Also, there have been a lot of tests and tutorials out there about finetuning this model, but your model at https://ollama.ai/library/tinyllama/tags outperforms all the examples I can find on the internet about tinyllama.

If the source is closed, I want to at least have an idea of how to train it on a custom dataset. I guess, in layman's terms, I want to understand how the Ollama team is able to train this model into the kind of model that is currently available to Ollama users, and I want to know why it's very different from, and outperforms, the original GGUF model found at https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6.

I'd like to be able to use this as a sample for my students, as well as to practically teach my own children how a powerful language model such as tinyllama works. I'm also working on a curriculum thesis in collaboration with teachers and school owners, testing whether it's practical to integrate AI training and data science into the field of education, so your input will be of great benefit to this little community in advancing our research in the field.

I want to highlight that importing the raw GGUF produces a model of a slightly different size, which could explain why the Ollama version is smarter. In the following screenshot, I called this GGUF from HF "baby." This is an indication to me that someone has done a better job of finetuning it, and I want to know how to do it, if someone would be kind enough to give us some guidance.

![image](https://github.com/jmorganca/ollama/assets/23272429/36d33715-95c3-496d-bd3e-0a9b7da6bfea)

Thank you very much.


@easp commented on GitHub (Jan 19, 2024):

The version of tinyllama you linked to on Hugging Face is two months old and v0.6. The version in the Ollama library is labelled v1, which should correspond to this on Hugging Face: https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0/tree/main.

To my knowledge the Ollama team hasn't done any additional training on any of the models in the ollama.ai library.

The Hugging Face model card for the v1 chat model provides an overview of the fine-tuning applied. I don't think there is a paper yet on the training of the base model. Their GitHub has some info: https://github.com/jzhang38/TinyLlama.


@oliverbob commented on GitHub (Jan 20, 2024):

@easp, I can't seem to find the GGUF file of the v1-chat model you're referring to. The only GGUF files I can find for that version are the ones made by TheBloke. They all return garbage responses.

Makes me wonder where Ollama got its version.


@easp commented on GitHub (Jan 20, 2024):

What's the modelfile for the GGUFs you've imported yourself?


@oliverbob commented on GitHub (Jan 21, 2024):

I've also tried all of the v1 GGUFs by TheBloke. They are not as good as Ollama's version published two weeks ago.

I'd like to know what system prompts they have given it to make it what it is. Can someone perhaps point me to a paper from Ollama about how they collect and organize the models at ollama.ai?

The only paper I can find about TinyLlama is https://arxiv.org/abs/2401.02385, and although it is useful, it is not what I'm looking for.

Thanks.


@easp commented on GitHub (Jan 21, 2024):

What's the modelfile for the GGUFs you've imported yourself?


@tmceld commented on GitHub (Jan 21, 2024):

There is actually a thread about this on /r/LocalLLaMA: https://www.reddit.com/r/LocalLLaMA/comments/19c75cp/what_magic_does_ollama_do_to_models_tinyllama/

The advice there is good: `ollama show --modelfile tinyllama:1.1b`

will return something like:

```
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM tinyllama:1.1b

FROM /usr/share/ollama/.ollama/models/blobs/sha256:2af3b81862c6be03c769683af1ab6f83e42c043d6c7816
TEMPLATE """<|system|>
{{ .System }}</s>
<|user|>
{{ .Prompt }}</s>
<|assistant|>
"""
SYSTEM """You are a helpful AI assistant."""
PARAMETER stop "<|system|>"
PARAMETER stop "<|user|>"
PARAMETER stop "<|assistant|>"
PARAMETER stop "</s>"
```

Using that modelfile (obviously with the FROM line changed accordingly) will (maybe) get better results from your HF tinyllama?
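
For anyone trying this, here is a minimal sketch of how that might look in practice (the GGUF filename below is a placeholder for whatever file you downloaded from Hugging Face):

```
# Modelfile (sketch) -- point FROM at your own downloaded GGUF
FROM ./tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf

TEMPLATE """<|system|>
{{ .System }}</s>
<|user|>
{{ .Prompt }}</s>
<|assistant|>
"""
SYSTEM """You are a helpful AI assistant."""
PARAMETER stop "<|system|>"
PARAMETER stop "<|user|>"
PARAMETER stop "<|assistant|>"
PARAMETER stop "</s>"
```

Then build a new local tag from it and run it:

```
ollama create tinyllama-custom -f Modelfile
ollama run tinyllama-custom
```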


@oliverbob commented on GitHub (Feb 5, 2024):

Hi all, thank you for your invaluable support with my inquiry. I have recently followed the tutorial made available by [unsloth](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) regarding tinyllama.

However, I realize that it ignores the following template:


```
TEMPLATE """<|system|>
{{ .System }}</s>
<|user|>
{{ .Prompt }}</s>
<|assistant|>
"""
SYSTEM """You are a helpful AI assistant."""
PARAMETER stop "<|system|>"
PARAMETER stop "<|user|>"
PARAMETER stop "<|assistant|>"
PARAMETER stop "</s>"
```

How can I add/integrate this template into this Colab notebook so that I'll be able to run the GGUF with the same template magic as the one Ollama has for TinyLlama?

The fine-tuning code that needs changing is:

```
if False:
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "lora_model", # YOUR MODEL YOU USED FOR TRAINING
        max_seq_length = max_seq_length,
        dtype = dtype,
        load_in_4bit = load_in_4bit,
    )
    FastLanguageModel.for_inference(model) # Enable native 2x faster inference

# alpaca_prompt = You MUST copy from above!

inputs = tokenizer(
[
    alpaca_prompt.format(
        "What is a famous tall tower in Paris?", # instruction
        "", # input
        "", # output - leave this blank for generation!
    )
], return_tensors = "pt").to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 64)
```
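
One possible change (a sketch, not part of the notebook; it assumes the TinyLlama-1.1B-Chat-v1.0 tokenizer ships the same Zephyr-style `<|system|>`/`<|user|>`/`<|assistant|>` chat template that the Ollama modelfile mirrors) would be to build the inference prompt with `tokenizer.apply_chat_template` instead of `alpaca_prompt`:

```
# Sketch: render the prompt with the tokenizer's own chat template so inference
# matches the template Ollama applies. `model` and `tokenizer` are the ones
# loaded earlier in the notebook.
from transformers import TextStreamer

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "What is a famous tall tower in Paris?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt = True,  # end the prompt with "<|assistant|>" so the model answers
    return_tensors = "pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(input_ids = input_ids, streamer = text_streamer, max_new_tokens = 64)
```

When exporting the fine-tuned model to GGUF for Ollama, the same template would then also go into the `TEMPLATE` block of the Modelfile, as shown above.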

Your help is much appreciated.

Thanks.


@jmorganca commented on GitHub (Feb 20, 2024):

Hi folks! Going to close this just to keep the issues tidy, but feel free to let me know if you'd like to leave it open. The tinyllama model on ollama.com was converted from https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0


@bibhas2 commented on GitHub (Mar 18, 2024):

I wanted to add some more details on this issue. I too was perplexed by how much better a model performs in Ollama vs Hugging Face. This is true irrespective of whether the Hugging Face model was quantized (Ollama models are usually quantized). The culprit, as people have noted above, is most likely an improper prompt format. Getting the format right can be tricky. I suggest using `tokenizer.apply_chat_template()` to generate the prompt tokens. It uses the prompt template stated in the model configuration, and hence the most official source. Below is an example use case.

I was testing `mistralai/Mistral-7B-Instruct-v0.1` with this question:

`Why did John Wilkes Booth kill George Washington?`

With Hugging Face, the model was constantly hallucinating irrespective of quantization.

```
John Wilkes Booth assassinated George Washington on April 15, 1865, during the American Civil War.
```

With Ollama the model was very good:

```
John Wilkes Booth did not kill George Washington. George Washington died
on December 14, 1799, at the age of 67 from complications after undergoing
a throat surgery.
```

Finally, I changed my code to use `apply_chat_template()` and started getting acceptable results.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# mistralai/Mistral-7B-Instruct-v0.1 is the model under test above
model_id = "mistralai/Mistral-7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

streamer = TextStreamer(tokenizer)

messages = [
    {"role": "user", "content": "Why did John Wilkes Booth kill George Washington?"}
]

# apply_chat_template renders the prompt with the template from the model's
# tokenizer config, so the format matches what the model expects
encoded = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt").to(model.device)

generated_ids = model.generate(encoded, streamer=streamer, max_new_tokens=4096)
```

```
John Wilkes Booth did not kill George Washington. He assassinated Abraham Lincoln, the 16th President
of the United States, on April 15, 1865.
```

I still like the answer from Ollama and don't know why they are different. In any case, I hope this helps!
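
If anyone wants to dig into why the answers differ, one way (a sketch, reusing the `tokenizer` and `messages` from the snippet above) is to render the chat template to a plain string and compare it with the prompt Ollama builds from its `TEMPLATE` block:

```python
# Render the chat template to text instead of token ids, for inspection
prompt_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt_text)  # for Mistral-Instruct this should show the [INST] ... [/INST] wrapping
```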
