[GH-ISSUE #2217] Message vs Template vs System #47782

Closed
opened 2026-04-28 05:18:57 -05:00 by GiteaMirror · 2 comments

Originally created by @giannisak on GitHub (Jan 26, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2217

What is the difference between message, template and system if I want to do few-shot prompting?
I mean, as of release v0.1.21 I could pass few-shot examples to a model in three different ways:

  1. Few-shot using Message:
    SYSTEM You are a friendly assistant that only answers with 'yes' or 'no'
    MESSAGE user Is Toronto in Canada?
    MESSAGE assistant yes
    (etc..)

  2. Few-shot using Template:
    TEMPLATE """
    <|im_start|>system
    {{ .System }}
    <|im_end|>

    <|im_start|>user
    Is Toronto in Canada?
    <|im_end|>

    <|im_start|>assistant
    yes
    <|im_end|>
    (etc..)
    """
    SYSTEM You are a friendly assistant that only answers with 'yes' or 'no'

  3. Few-shot using only System:
    SYSTEM """
    You are a friendly assistant that only answers with 'yes' or 'no'.
    You will be given questions about whether a city is located in a specific country.
    Example 1:
    Is Toronto in Canada?
    yes
    Example 2:
    (etc..)
    """

I am running some tests on a similar task using LlamaIndex with 7B models, and I am getting better
results with the System format than with the Template format (I was expecting the opposite).

I will test message format too, but I am trying to understand the differences and the expected behavior of each.


@pdevine commented on GitHub (Jan 26, 2024):

Hey @giannisak

  1. will work. You can alternatively put in `MESSAGE system You are a friendly assistant that only answers with 'yes' or 'no'` *instead* of using `SYSTEM`. Both ways are supported.
  2. won't work, because the template is repeated each time you send a message. The template is supposed to define the format for how data gets transformed into whatever format the model is expecting.
  3. will probably work, but not as well as 1. It depends more on the LLM if it can understand what you're trying to pass to it. I wouldn't recommend doing it this way vs. 1.

Keep in mind that the `MESSAGE` commands *only* work with the `/api/chat` endpoint and do not work with `/api/generate`. If there's enough demand, we can look at adding it for `/api/generate`, but it'll take a lot more effort than it was to make it work with the chat endpoint.
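To illustrate why point 2 misfires, here is a minimal Python sketch (not Ollama's actual rendering code) of a ChatML-style template with a hard-coded example turn. Because the template is expanded again for each message, the baked-in example accumulates in the prompt context:

```python
# Sketch (not Ollama internals): a ChatML-style template with a
# hard-coded few-shot example, re-rendered for every message.
TEMPLATE = (
    "<|im_start|>system\n{system}\n<|im_end|>\n"
    "<|im_start|>user\nIs Toronto in Canada?\n<|im_end|>\n"  # baked-in example
    "<|im_start|>assistant\nyes\n<|im_end|>\n"
    "<|im_start|>user\n{prompt}\n<|im_end|>\n"
    "<|im_start|>assistant\n"
)

def render(system: str, prompt: str) -> str:
    """Expand the template for a single user message."""
    return TEMPLATE.format(system=system, prompt=prompt)

# Simulate a two-turn chat: the template runs once per message,
# so the baked-in example appears twice in the accumulated context.
context = (
    render("Only answer 'yes' or 'no'.", "Is Paris in France?")
    + "yes\n"
    + render("Only answer 'yes' or 'no'.", "Is Berlin in Spain?")
)
print(context.count("Is Toronto in Canada?"))  # the example is duplicated
```

Keeping the examples in MESSAGE entries (or in the `messages` array at request time) avoids this, since they are inserted once into the conversation rather than stamped into every template expansion.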


@pdevine commented on GitHub (Jan 27, 2024):

Going to close this, but feel free to reopen it.


Reference: github-starred/ollama#47782