[GH-ISSUE #8021] Incorrect configuration in EXAONE 3.5 #51643

Closed
opened 2026-04-28 20:41:06 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @lgai-exaone on GitHub (Dec 10, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/8021

Hello,
We recently published EXAONE 3.5, and we appreciate your quick support of EXAONE 3.5 in Ollama.
https://ollama.com/library/exaone3.5

However, we found that the applied template differs from the original template.
We checked the prompt template and determined it should be modified as below:

```
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{ if eq .Role "system" }}[|system|]{{ .Content }}[|endofturn|]
{{ continue }}
{{ else if eq .Role "user" }}[|user|]{{ .Content }}
{{ else if eq .Role "assistant" }}[|assistant|]{{ .Content }}[|endofturn|]
{{ end }}
{{- if and (ne .Role "assistant") $last }}[|assistant|]{{ end }}
{{- end -}}
```
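For readers unfamiliar with Go's template syntax, the template above can be exercised directly with the standard `text/template` package. The sketch below is illustrative only; the `Message` struct and `render` helper are stand-ins, not Ollama's actual types:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// Message mirrors the fields the template references.
type Message struct {
	Role    string
	Content string
}

// promptTemplate is the corrected EXAONE 3.5 template from this issue.
const promptTemplate = `{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{ if eq .Role "system" }}[|system|]{{ .Content }}[|endofturn|]
{{ continue }}
{{ else if eq .Role "user" }}[|user|]{{ .Content }}
{{ else if eq .Role "assistant" }}[|assistant|]{{ .Content }}[|endofturn|]
{{ end }}
{{- if and (ne .Role "assistant") $last }}[|assistant|]{{ end }}
{{- end -}}`

// render executes the template against a message list, roughly as a
// runtime would when building the model prompt.
func render(msgs []Message) string {
	tmpl := template.Must(template.New("exaone").Parse(promptTemplate))
	var sb strings.Builder
	if err := tmpl.Execute(&sb, map[string]any{"Messages": msgs}); err != nil {
		panic(err)
	}
	return sb.String()
}

func main() {
	out := render([]Message{
		{Role: "system", Content: "You are a helpful assistant."},
		{Role: "user", Content: "Hello!"},
	})
	fmt.Println(out)
	// Output:
	// [|system|]You are a helpful assistant.[|endofturn|]
	// [|user|]Hello!
	// [|assistant|]
}
```

Note how the trailing `[|assistant|]` turn opener is emitted only when the final message is not already from the assistant, which is what the `$last` check accomplishes. Requires Go 1.18+ for `{{ continue }}`.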

Could you update the template for EXAONE 3.5 in the ollama library?

Additionally, is there a method to configure generation settings (e.g. stop words, repetition penalty) for the model in the ollama library? We need to set certain parameters to avoid performance degradation.

GiteaMirror added the model label 2026-04-28 20:41:06 -05:00

@rick-github commented on GitHub (Dec 10, 2024):

Parameters can be set with the [`PARAMETER`](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#parameter) command in a Modelfile. If you add the required changes here, they can be incorporated into the model update.
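As a minimal sketch of that approach (the base model tag assumes the published `exaone3.5` library entry; the parameter values here are the ones requested in this issue):

```
FROM exaone3.5
PARAMETER stop "[|endofturn|]"
PARAMETER repeat_penalty 1.0
```

A model built from it, e.g. with `ollama create my-exaone -f Modelfile` (`my-exaone` being an arbitrary name), then applies these settings on every run.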


@lgai-exaone commented on GitHub (Dec 10, 2024):

Thank you for the reply!

We updated the Modelfile to match our [GitHub README](https://github.com/LG-AI-EXAONE/EXAONE-3.5?tab=readme-ov-file#ollama).
Here is our full example Modelfile:

```
# Model path (choose appropriate GGUF weights on your own)
FROM ./EXAONE-3.5-7.8B-Instruct-BF16.gguf

# Parameter values
PARAMETER stop "[|endofturn|]"
PARAMETER temperature 1.0
PARAMETER repeat_penalty 1.0
# PARAMETER num_ctx 32768  # if you need a long context

# Chat template
TEMPLATE """{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{ if eq .Role "system" }}[|system|]{{ .Content }}[|endofturn|]
{{ continue }}
{{ else if eq .Role "user" }}[|user|]{{ .Content }}
{{ else if eq .Role "assistant" }}[|assistant|]{{ .Content }}[|endofturn|]
{{ end }}
{{- if and (ne .Role "assistant") $last }}[|assistant|]{{ end }}
{{- end -}}"""

# System prompt
SYSTEM """You are EXAONE model from LG AI Research, a helpful assistant."""

# License
LICENSE """EXAONE AI Model License Agreement 1.1 - NC """
```

@jmorganca commented on GitHub (Dec 10, 2024):

Hi @lgai-exaone, congrats on the model launch, and thank you for the correction. It should be reflected here now: https://ollama.com/library/exaone3.5

Excited to launch more models with you in the future 😊


@lgai-exaone commented on GitHub (Dec 10, 2024):

Thank you @jmorganca, we checked the update on the Ollama library!

We look forward to continuously contributing to the community and collaborating with Ollama 😄


@mchiang0610 commented on GitHub (Dec 10, 2024):

Thank you @lgai-exaone for the help! I would love to connect to see how to help launch future models too. We've been working on features that allow you to upload your own models (privately and publicly) to Ollama.com for sharing.

My email is michael@ollama.com.


@lgai-exaone commented on GitHub (Dec 11, 2024):

Thank you @mchiang0610 for your suggestion! We are looking forward to Ollama support for our upcoming models.
We're excited to explore collaboration opportunities, so if you know of any suitable ways we could work together, we're eager to hear them!

Please feel free to email our contact: contact_us@lgresearch.ai


@lgai-exaone commented on GitHub (Dec 16, 2024):

Hello, everyone.
We have conducted experiments to optimize sampling parameters for EXAONE 3.5 across various platforms, including Ollama.

Our findings indicate that model generation quality degrades when the repetition penalty exceeds 1.0, which was consistent across all tested platforms.
Additionally, we discovered that EXAONE models perform well with the default temperature setting (0.7) and don't require forcing it to 1.0.

Thus, could you please remove the temperature setting from the EXAONE models' Modelfiles?

Thank you for your attention.

@jmorganca @mchiang0610 @rick-github
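Based on these findings, the parameter block of the Modelfile shown earlier in this thread would shrink to the following (a sketch: the temperature line is dropped so Ollama's default of 0.7 applies, while the stop token and repetition penalty stay pinned):

```
PARAMETER stop "[|endofturn|]"
PARAMETER repeat_penalty 1.0
# temperature left unset; the default (0.7) works well for EXAONE 3.5
```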


Reference: github-starred/ollama#51643