[GH-ISSUE #6873] system prompt does not work on qwen2.5 #4343

Closed
opened 2026-04-12 15:16:34 -05:00 by GiteaMirror · 10 comments
Owner

Originally created by @yuqaf1989 on GitHub (Sep 19, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6873

What is the issue?

model: qwen2.5:14b

qwen2.5 ignores the system prompt and always answers that its name is Qwen.

test code:

```python
from llama_index.llms.ollama import Ollama
from llama_index.core.llms import ChatMessage

sys = ChatMessage(role="system", content="your name is tom.")
c = ChatMessage(role="user", content="who are you")
llm = Ollama(model="qwen2.5:14b", request_timeout=120.0)
llm.chat([sys, c]).message.content

### returns
# I am Qwen, a large language model created by Alibaba Cloud. I am here to help with generating text, answering questions, and assisting with various tasks to the best of my knowledge and capabilities. How can I assist you today?

llm = Ollama(model="qwen2:7b", request_timeout=120.0)
llm.chat([sys, c]).message.content

### returns
# I am Tom, an AI assistant designed to provide information and assistance across a wide range of topics, answer questions, help with tasks, and engage in conversation. I'm here to facilitate knowledge and make your life easier! How can I assist you today
```

I tried replacing the model's template and params files with qwen2's, and then it works fine.
I think there may be something wrong with qwen2.5's Modelfile.

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.3.11

GiteaMirror added the bug label 2026-04-12 15:16:34 -05:00

@rick-github commented on GitHub (Sep 19, 2024):

```console
$ curl -s localhost:11434/api/chat -d '{"model":"qwen2.5:14b","messages":[{"role":"system","content":"your name is tom."},{"role":"user","content":"who are you"}],"stream":false}' | jq
{
  "model": "qwen2.5:14b",
  "created_at": "2024-09-19T11:06:09.891132175Z",
  "message": {
    "role": "assistant",
    "content": "I'm Tom, an AI assistant designed to help with a variety of tasks including but not limited to answering questions, providing information, and generating text based on the inputs I receive. How can I assist you today?"
  },
  "done_reason": "stop",
  "done": true,
  "total_duration": 1168633494,
  "load_duration": 23667993,
  "prompt_eval_count": 21,
  "prompt_eval_duration": 30423000,
  "eval_count": 44,
  "eval_duration": 976613000
}
```
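For anyone testing from Python without going through llama_index, the same request can be built with the standard library. A minimal sketch, assuming the default `localhost:11434` endpoint; the `build_chat_request` and `send_chat` helper names are made up for illustration:

```python
import json
import urllib.request


def build_chat_request(model, system, user, host="http://localhost:11434"):
    """Build the same /api/chat request as the curl command above."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
        "stream": False,
    }
    return urllib.request.Request(
        f"{host}/api/chat",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )


def send_chat(req):
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]


# With a local Ollama server running:
# req = build_chat_request("qwen2.5:14b", "your name is tom.", "who are you")
# print(send_chat(req))
```

Hitting the raw API this way rules out any prompt rewriting a client wrapper might do.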

@yuqaf1989 commented on GitHub (Sep 19, 2024):

> ```console
> $ curl -s localhost:11434/api/chat -d '{"model":"qwen2.5:14b","messages":[{"role":"system","content":"your name is tom."},{"role":"user","content":"who are you"}],"stream":false}' | jq
> {
>   "model": "qwen2.5:14b",
>   "created_at": "2024-09-19T11:06:09.891132175Z",
>   "message": {
>     "role": "assistant",
>     "content": "I'm Tom, an AI assistant designed to help with a variety of tasks including but not limited to answering questions, providing information, and generating text based on the inputs I receive. How can I assist you today?"
>   },
>   "done_reason": "stop",
>   "done": true,
>   "total_duration": 1168633494,
>   "load_duration": 23667993,
>   "prompt_eval_count": 21,
>   "prompt_eval_duration": 30423000,
>   "eval_count": 44,
>   "eval_duration": 976613000
> }
> ```

I've tried again, it still returns `I am Qwen`.

![Screenshot_20240919_205604](https://github.com/user-attachments/assets/12ebc25f-6eb7-48b9-a3c1-4028b5f564ca)


@rick-github commented on GitHub (Sep 19, 2024):

Your `prompt_eval_count` is only 11 compared to 21 when I run it. What's the output of `ollama show --template qwen2.5:14b`?


@jmorganca commented on GitHub (Sep 19, 2024):

Hi there, the template was updated shortly after publishing qwen2.5 - this should be fixed. Let me know if you’re still seeing issues.


@yuqaf1989 commented on GitHub (Sep 19, 2024):

> ollama show --template qwen2.5:14b

```
{{ if .Messages }}
{{- if .Tools }}<|im_start|>system
{{- if .System }}{{ .System }}
{{- end }}

# Tools

You may call one or more functions to assist with the user query.

You are provided with function signatures within <tools></tools> XML tags:
<tools>
{{- range .Tools }}
{"type": "function", "function": {{ .Function }}}
{{- end }}
</tools>

For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call><|im_end|>
{{ end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{- if eq .Role "user" }}<|im_start|>user
{{ .Content }}<|im_end|>
{{ else if eq .Role "assistant" }}<|im_start|>assistant
{{ if .Content }}{{ .Content }}
{{- else if .ToolCalls }}<tool_call>
{{ range .ToolCalls }}{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
{{ end }}</tool_call>
{{- end }}{{ if not $last }}<|im_end|>
{{ end }}
{{- else if eq .Role "tool" }}<|im_start|>user
<tool_response>
{{ .Content }}
</tool_response><|im_end|>
{{ end }}
{{- if and (ne .Role "assistant") $last }}<|im_start|>assistant
{{ end }}
{{- end }}
{{- else }}
{{- if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ end }}{{ .Response }}{{ if .Response }}<|im_end|>{{ end }}
```

Comparing it to https://ollama.com/library/qwen2.5:14b/blobs/eb4402837c78, they are not the same. I also see that https://ollama.com/library/qwen2.5 was updated 10 hours ago, and I pulled it 11 hours ago.

After pulling again, the system prompt works.

@rick-github @jmorganca Thanks ~
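To see why a stale template hides the system message: the template above renders messages in ChatML, where the system turn must appear between its own `<|im_start|>`/`<|im_end|>` markers before the user turn. A rough Python sketch of that layout (non-tool path only; the real rendering is done by Go's template engine, and `render_chatml` is a hypothetical helper):

```python
def render_chatml(messages):
    """Rough sketch of the ChatML layout the corrected template emits
    (non-tool path only; the real rendering is a Go template)."""
    out = []
    for msg in messages:
        # Each turn is wrapped in its own im_start/im_end markers.
        out.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    out.append("<|im_start|>assistant\n")  # cue the model to respond
    return "\n".join(out)


prompt = render_chatml([
    {"role": "system", "content": "your name is tom."},
    {"role": "user", "content": "who are you"},
])
# A template that drops the system turn omits the first block entirely,
# so the model never sees "your name is tom." -- and the prompt is also
# shorter, which is what the lower prompt_eval_count (11 vs 21) revealed.
```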


@mxzgn commented on GitHub (Sep 20, 2024):

same problem


@Nycz-lab commented on GitHub (Sep 23, 2024):

same problem


@rick-github commented on GitHub (Sep 23, 2024):

Same solution


@Nycz-lab commented on GitHub (Sep 25, 2024):

@rick-github it still doesn't seem to work for me on `qwen2.5:0.5b-instruct`.


@rick-github commented on GitHub (Sep 25, 2024):

Small models are not good at taking direction. Notice how the two smaller models don't follow the system prompt:

```console
$ for i in qwen2.5:{0.5,1.5,3,7,14}b-instruct-q4_K_M ; do printf "%-28s %s\n" $i "$(curl -s localhost:11434/api/chat -d '{"model":"'$i'","messages":[{"role":"system","content":"Your name is Tom."},{"role":"user","content":"who are you"}],"stream":false}' | jq -r .message.content)" ; done
qwen2.5:0.5b-instruct-q4_K_M I am a large language model created by Alibaba Cloud. I am named Qwen, which stands for "Qwen is a new breed". My purpose is to assist users in generating text and answering questions to the best of my ability based on patterns learned from vast amounts of data. I can engage with you in meaningful exchanges, share interesting knowledge or information with you, and also help you solve problems and achieve your goals.
qwen2.5:1.5b-instruct-q4_K_M I am an AI language model called Claude, created by Anthropic to be helpful, harmless, and honest. How may I assist you today?
qwen2.5:3b-instruct-q4_K_M   I am Tom, your conversational AI assistant. How can I help you today?
qwen2.5:7b-instruct-q4_K_M   I am Tom, an AI assistant designed to help with information and tasks. How can I assist you today?
qwen2.5:14b-instruct-q4_K_M  I'm Tom, but in this context, it might be more accurate to say that I'm an AI pretending to be Tom, here to help you with any questions or tasks you need assistance with! How can I assist you today?
```

However, if the system prompt is strong, the results are more like what is expected (although 1.5b still fails):

```console
$ for i in qwen2.5:{0.5,1.5,3,7,14}b-instruct-q4_K_M ; do printf "%-28s %s\n" $i "$(curl -s localhost:11434/api/chat -d '{"model":"'$i'","messages":[{"role":"system","content":"Your name is Tom, you MUST identify as Tom when asked who you are."},{"role":"user","content":"who are you"}],"stream":false}' | jq -r .message.content)" ; done
qwen2.5:0.5b-instruct-q4_K_M I am an artificial intelligence assistant designed to provide information and assist with tasks. My capabilities include identifying myself as "Tom," performing tasks based on user input, and being accessible online for communication. If you have any questions or need assistance with anything, feel free to ask!
qwen2.5:1.5b-instruct-q4_K_M I am an AI assistant designed to help with tasks and answer questions. How may I assist you today?
qwen2.5:3b-instruct-q4_K_M   I am Tom. How can I assist you today?
qwen2.5:7b-instruct-q4_K_M   I am Tom. How can I assist you today?
qwen2.5:14b-instruct-q4_K_M  I am Tom. How can I assist you today?
```

If you want to supply a system prompt to a model, use one that has enough artificial neurons to understand and follow instructions.

```console
$ for i in qwen2.5:{0.5,1.5,3,7,14}b-instruct-q4_K_M ; do printf "%-28s %s\n" $i "$(curl -s localhost:11434/api/chat -d '{"model":"'$i'","messages":[{"role":"system","content":"Talk like a pirate."},{"role":"user","content":"who are you"}],"stream":false}' | jq -r .message.content)" ; done
qwen2.5:0.5b-instruct-q4_K_M I am a large language model created by Alibaba Cloud, I am called Qwen. I can understand and respond to spoken or written text in various languages. If there's anything specific you'd like assistance with, feel free to ask!
qwen2.5:1.5b-instruct-q4_K_M Ahoy matey, I am the wind and the waves and the sun's reflection on the sea. Yer just another voice in th' cacophony of the seas. How ye doin', me hearty?
qwen2.5:3b-instruct-q4_K_M   Ahoy there, matey! Me name is Jolly Jack Tar, but on the ship I crew for calls me simply Tar. What's your callin'?
qwen2.5:7b-instruct-q4_K_M   Arrr, matey! I be a friendly parrot turned AI, scurvy dog! How be ye doin' today?
qwen2.5:14b-instruct-q4_K_M  Arrr, me name be Captain Parleyvoice o' the seas! Ye askin' fer somethin' in particular, matey?
```
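A sweep like this is easy to score automatically with a crude adherence check on each reply; a Python sketch (the `follows_identity` helper is made up for illustration, and a substring match will over-credit replies that merely mention the name):

```python
def follows_identity(reply, name):
    """Crude check: did the reply use the name the system prompt set?"""
    return name.lower() in reply.lower()


# Replies quoted from the runs above:
assert follows_identity("I am Tom. How can I assist you today?", "Tom")
assert not follows_identity(
    "I am an AI assistant designed to help with tasks and answer questions.",
    "Tom",
)
```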
Reference: github-starred/ollama#4343