[GH-ISSUE #2492] System Prompt not honored until re-run ollama serve #47967

Closed
opened 2026-04-28 06:12:53 -05:00 by GiteaMirror · 2 comments

Originally created by @hyjwei on GitHub (Feb 14, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2492

Originally assigned to: @BruceMacD on GitHub.

There are actually two issues regarding the system prompt in the current main branch, and I believe they are related.

# Issue 1: `SYSTEM` prompt in modelfile not honored

If I run a model, then create a new one based on the same model but with a new `SYSTEM` prompt, the new `SYSTEM` prompt is not honored. Killing the current `ollama serve` process and re-running a new one with `ollama serve` solves the problem.

### How to replicate

Start a new server by `ollama serve` with `OLLAMA_DEBUG=1`
Run the client with any model, for example `ollama run phi`
Input a user prompt; you will see prompt debug info on the server side, like:

```
time=2024-02-14T06:55:05.081-05:00 level=DEBUG source=routes.go:1205 msg="chat handler" prompt="System: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful answers to the user's questions.\nUser: hello\nAssistant:" images=0
```

Quit the client and create a custom modelfile like:

```
FROM phi
SYSTEM """I want you to speak French only."""
```

Create/run a new model with the custom modelfile
Input a user prompt and check the prompt debug info on the server side again: it shows the same system prompt as before. It is not updated to the custom system prompt specified in the modelfile.

If I restart the server and re-run the client with the same custom model, the prompt debug info on the server side is updated correctly.

# Issue 2: `/set system` command in CLI changes System Prompt incorrectly

If I load a model, then use `/set system` to change the system prompt, ollama actually appends the new system prompt to the existing one instead of replacing it.

### How to replicate

Start a new server by `ollama serve` with `OLLAMA_DEBUG=1`
Run the client with any model, for example `ollama run phi`
Set a new system prompt in the CLI, like:

```
/set system I want you to speak French only.
```

You can confirm that the system prompt has indeed been changed with the commands `/show modelfile` or `/show system`
Input a user prompt; the prompt debug info on the server side looks like:

```
time=2024-02-14T07:13:40.139-05:00 level=DEBUG source=routes.go:1205 msg="chat handler" prompt="System: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful answers to the user's questions.\nUser: \nAssistant:System: I want you to speak French only.\nUser: hello\nAssistant:" images=0
```

You can see that the original system prompt is still there and the new system prompt is appended, followed by the user input.

Furthermore, to make things worse, every time I set a new system prompt with `/set system`, the new prompt is appended to the old ones instead of replacing them.
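The appending behavior can be illustrated with a small sketch (hypothetical helpers `set_system_buggy`/`set_system_fixed`, not the actual CLI code) contrasting append-on-set with replace-on-set:

```python
# Illustrative sketch (not the actual CLI code): the buggy path appends a
# second system message, the fixed path replaces any existing one.
def set_system_buggy(messages, new_system):
    # Leaves the earlier system message in place and appends a new one.
    return messages + [{"role": "system", "content": new_system}]


def set_system_fixed(messages, new_system):
    # Drops existing system messages before prepending the new one.
    kept = [m for m in messages if m["role"] != "system"]
    return [{"role": "system", "content": new_system}] + kept


history = [{"role": "system", "content": "A chat between a curious user..."}]
buggy = set_system_buggy(history, "I want you to speak French only.")
fixed = set_system_fixed(history, "I want you to speak French only.")

print(sum(m["role"] == "system" for m in buggy))  # 2 -- duplicated prompt
print(sum(m["role"] == "system" for m in fixed))  # 1 -- replaced as expected
```

Repeated calls to the buggy variant keep growing the history, matching the observation that each `/set system` accumulates rather than replaces.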


@jukofyork commented on GitHub (Feb 16, 2024):

It's probably related to this:

https://github.com/ollama/ollama/issues/2470

Not sure if the ollama CLI uses that loop, but if the same logic is used elsewhere then it could append a second system prompt.

I think we need a much clearer way of logging exactly what the prompt template is producing; otherwise there could be all sorts of weird bugs like this seriously degrading the models.
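As a sketch of the kind of rendered-prompt sanity check suggested here (`check_rendered_prompt` is a hypothetical helper, not an existing Ollama function), a logger could simply count `System:` segments in the final prompt and warn on duplicates:

```python
import re


def check_rendered_prompt(prompt):
    # Counts "System:" segments in the final rendered prompt and warns when
    # more than one appears, which would have surfaced this bug immediately.
    count = len(re.findall(r"\bSystem:", prompt))
    if count > 1:
        print(f"WARNING: {count} system segments in rendered prompt")
    return count


bad = ("System: A chat between a curious user and an artificial intelligence "
       "assistant.\nUser: \nAssistant:System: I want you to speak French only."
       "\nUser: hello\nAssistant:")
check_rendered_prompt(bad)  # warns: 2 system segments
```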


@BruceMacD commented on GitHub (Feb 16, 2024):

Thanks for the detailed report. This will be fixed in the next release.


Reference: github-starred/ollama#47967