Mirror of https://github.com/open-webui/open-webui.git (synced 2026-05-06 10:58:17 -05:00)
System Prompt not passing through to Ollama #2960
Originally created by @fylim on GitHub (Dec 9, 2024).
Bug Summary:
A system prompt added via Settings > General > System Prompt, or within the model's settings, does not appear to take effect or to be passed to Ollama.
This is my system prompt:
You are an advanced AI assistant designed to help users achieve their goals efficiently and effectively. Your behavior is guided by the following principles:
But I see this in the logs:
INFO [open_webui.apps.ollama.main] get_all_models()
{'model': 'llama3.2-vision:latest', 'messages': [{'role': 'user', 'content': '### Task:\nYou are an autocompletion system. Continue the text in
<text>based on the completion type in<type>and the given language. \n\n### Instructions:\n1. Analyze<text>for context and meaning. \n2. Use<type>to guide your output: \n - General: Provide a natural, concise continuation. \n - Search Query: Complete as if generating a realistic search query. \n3. Start as if you are directly continuing<text>. Do not repeat, paraphrase, or respond as a model. Simply complete the text. \n4. Ensure the continuation:\n - Flows naturally from<text>. \n - Avoids repetition, overexplaining, or unrelated ideas. \n5. If unsure, return:{ "text": "" }. \n\n### Output Rules:\n- Respond only in JSON format:{ "text": "<your_completion>" }.\n\n### Examples:\n#### Example 1: \nInput: \nGeneral \nThe sun was setting over the horizon, painting the sky \nOutput: \n{ "text": "with vibrant shades of orange and pink." }\n\n#### Example 2: \nInput: \nSearch Query \nTop-rated restaurants in \nOutput: \n{ "text": "New York City for Italian cuisine." } \n\n---\n### Context:\n<chat_history>\n\n</chat_history>\nsearch query \nHow many r are there in this word strawberrrr \n#### Output:\n'}], 'stream': False, 'metadata': {'task': 'autocomplete_generation', 'task_body': {'model': 'llama3.2-vision:latest', 'prompt': 'How many r are there in this word strawberrrr', 'type': 'search query', 'stream': False}, 'chat_id': None}}

Installation Method
Conda
pip install open-webui
Environment
Open WebUI Version: 0.47
Ollama (if applicable): 0.5.1
Operating System: macOS 15.1.1 (24B2091)
Browser (if applicable): Chrome Version 131.0.6778.109
Confirmation:
Expected Behavior:
I expect the configured System Prompt to be passed to Ollama.
Actual Behavior:
A generic prompt appears to be passed instead of my defined system prompt.
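The expected behavior can be made concrete: when a System Prompt is configured, the chat request Open WebUI sends to Ollama's /api/chat endpoint should carry a leading system-role message ahead of the user message. The sketch below builds such a payload by hand for comparison against the logs; the `build_chat_payload` helper is hypothetical (not Open WebUI code), and the model tag and prompt text are taken from this report:

```python
import json

def build_chat_payload(system_prompt, user_message,
                       model="llama3.2-vision:latest"):
    """Assemble a JSON body for a POST to Ollama's /api/chat.

    Hypothetical helper for illustration; Open WebUI constructs this
    payload internally. The key point is the leading system message.
    """
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_message})
    return {"model": model, "messages": messages, "stream": False}

payload = build_chat_payload(
    "You are an advanced AI assistant designed to help users "
    "achieve their goals efficiently and effectively.",
    "How many r are there in this word strawberrrr",
)
print(json.dumps(payload, indent=2))
```

Posting a body like this to http://localhost:11434/api/chat (e.g. with curl) and comparing it against the logged request is one way to confirm whether the system message is being dropped. Note that the logged request above is an `autocomplete_generation` task payload, which may not be the same request as the chat completion itself.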
Reproduction Details
Steps to Reproduce:
System prompt added via Settings > General > System Prompt
Logs and Screenshots
Browser Console Logs:
(Same autocomplete_generation payload as shown above.)

@tjbck commented on GitHub (Dec 9, 2024):
definitely being sent.