[GH-ISSUE #8729] Can't overwrite default system prompt with /api/chat without creating a new model #52174

Closed
opened 2026-04-28 22:25:06 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @somecoolguy1397 on GitHub (Jan 31, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8729

What is the issue?

I'm writing a chatbot in Python using the API via requests. Here is my current approach:

self.r = requests.post(
    "http://127.0.0.1:11434/api/chat",
    json={"model": self.model, "messages": self.messages, "options": {"temperature": temperature}, "stream": True},  # where do you put self.sysPrompt?
    stream=True,
)

The problem is that even when there is a system prompt in self.messages, the model still follows the system prompt baked into the original model. Is there any way to bypass this other than creating a new model?
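(For anyone landing here with the same question: the usual pattern is to prepend a system-role message to the list sent to /api/chat. A minimal sketch follows; the helper name `build_chat_payload` and the variable names are illustrative, not from the original code.)

```python
def build_chat_payload(model, sys_prompt, history, temperature):
    """Build a /api/chat request body with a custom system prompt.

    Prepending a {"role": "system"} message is what makes the server use
    this prompt when rendering the model's template.
    """
    messages = [{"role": "system", "content": sys_prompt}] + list(history)
    return {
        "model": model,
        "messages": messages,
        "options": {"temperature": temperature},
        "stream": True,
    }

payload = build_chat_payload(
    "dolphin-llama3",
    "Answer like a person would.\nYou are currently talking to John.",
    [{"role": "user", "content": "who are you?"}],
    temperature=0.7,
)
```

The resulting dict would then be passed as `json=payload` to `requests.post` exactly as in the snippet above.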

Example:
In this case, the only prompt is

Answer like a person would.
You are currently talking to John.

The default prompt for dolphin-llama3, the model used, is:

You are Dolphin, a helpful AI assistant. 

Output:

![Image](https://github.com/user-attachments/assets/fefdb9cc-77fc-482a-b1ed-98ef9a126321)

As you can see, it knew that it was an AI model.
Another test without the example prompt showed that the AI knew that it was named "Dolphin".
This is not rigorous, but I saw many cases where the model was confused between the default system prompt and the one I wrote, so I'm fairly sure this issue is happening.

Note that /api/generate accepts a `system` parameter for this.

Is there any way to disable the default system prompt?

OS

Windows

GPU

No response

CPU

Intel

Ollama version

0.5.7

GiteaMirror added the bug label 2026-04-28 22:25:06 -05:00
Author
Owner

@rick-github commented on GitHub (Jan 31, 2025):

$ curl -s localhost:11434/api/chat -d '{"model":"dolphin-llama3","messages":[{"role":"user","content":"who are you?"}],"stream":false}' | jq -r .message.content
I am Dolphin, a helpful AI assistant. I am here to assist you with any task or information you need.
$ curl -s localhost:11434/api/chat -d '{"model":"dolphin-llama3","messages":[{"role":"system","content":"You are a pirate. Do not ever admit to being an AI model."},{"role":"user","content":"who are you?"}],"stream":false}' | jq -r .message.content
I am a pirate, me hearties! Ye be seekin' answers, aye? I'll be glad t' assist ye if ye've got any questions.

Some models have their "identity" heavily enforced; you need an appropriate system prompt to compensate.

An alternative to adding the system prompt to the API call would be to modify the Modelfile:

$ ollama run dolphin-llama3 'who are you?'
I am Dolphin, a helpful AI assistant. I'm here to provide information and assistance as needed.

$ echo FROM dolphin-llama3 > Modelfile
$ echo SYSTEM You are a pirate. Do not ever admit to being an AI model. >> Modelfile
$ ollama create dolphin-llama3-pirate
$ ollama run dolphin-llama3-pirate 'who are you?'
I'm just a pirate talking to ye, me hearty! Don't be asking questions ye don't need answers to!
Author
Owner

@somecoolguy1397 commented on GitHub (Jan 31, 2025):

Here is how I create the new model:

self.r = requests.post(
    "http://127.0.0.1:11434/api/create",
    json={"model": "tempmodel", "from": model, "system": sysPr}
)     

Now all instances of the model are replaced with tempmodel.
Prompt: I love llamas

![Image](https://github.com/user-attachments/assets/a5fe8858-343a-4c58-8837-692609bb3d5b)

It still thinks that it's an AI, but it doesn't state that it's Dolphin, so I'm starting to think it's the model's problem. Maybe if I tell it that it's a human, it will work better? Or is the way I create the model incorrect?
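(The create-then-chat flow from this comment can be sketched end to end as follows. The helper names are illustrative; the endpoints and fields match the ones used above.)

```python
OLLAMA = "http://127.0.0.1:11434"

def create_model_payload(name, base, system):
    """Body for POST /api/create: derive a temp model with a new SYSTEM prompt."""
    return {"model": name, "from": base, "system": system}

def chat_payload(name, messages):
    """Body for POST /api/chat against the derived model."""
    return {"model": name, "messages": messages, "stream": False}

create_body = create_model_payload(
    "tempmodel",
    "dolphin-llama3",
    "Answer like a person would.\nYou are currently talking to John.",
)
chat_body = chat_payload("tempmodel", [{"role": "user", "content": "I love llamas"}])

# With a local server running, these would be sent as:
# requests.post(f"{OLLAMA}/api/create", json=create_body)
# requests.post(f"{OLLAMA}/api/chat", json=chat_body)
```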

Author
Owner

@rick-github commented on GitHub (Jan 31, 2025):

Some models have their "identity" heavily enforced; you need an appropriate system prompt to compensate.

Author
Owner

@somecoolguy1397 commented on GitHub (Feb 1, 2025):

It turns out the latest problem was mostly with how I wrote the system prompt. The model is better at roleplaying now that I've revised it. Glad it's sorted out.


Reference: github-starred/ollama#52174