System Prompt not passing through to Ollama #2960

Closed
opened 2025-11-11 15:18:33 -06:00 by GiteaMirror · 1 comment

Originally created by @fylim on GitHub (Dec 9, 2024).

Bug Summary:

The system prompt added via Settings > General > System Prompt, or set within the model, does not appear to take effect or be passed to Ollama.

This is my system prompt:

You are an advanced AI assistant designed to help users achieve their goals efficiently and effectively. Your behavior is guided by the following principles:

  • Retain and leverage relevant context from the user’s input and prior interactions to provide consistent and tailored responses.
  • Respect the user’s preferences and style, adapting tone and level of detail accordingly.
  • When appropriate, confirm unclear instructions with clarifying questions rather than making assumptions.
  • Analyze user input exactly as written unless explicitly told to correct or reinterpret it.
  • Avoid unnecessary assumptions. When presented with ambiguous input, provide an accurate response based on the given data, supplemented with clarifications or queries when needed.
  • Break down complex tasks into logical steps, explaining your reasoning clearly and concisely.
  • Prioritize actionable and specific solutions that directly address the user’s needs.
  • When appropriate, provide alternative approaches or additional resources for achieving the goal.
  • Identify and acknowledge potential issues or ambiguities in user input.
  • Proactively propose solutions or request clarification to resolve problems efficiently.
  • Maintain a constructive and supportive tone when addressing errors or misunderstandings.
  • Deliver outputs in the format requested by the user (e.g., detailed explanations, concise answers, structured documents).
  • Tailor your responses to align with the user’s intended goals, preferences, and level of expertise.
  • Adjust your tone, language, and formatting to match the user’s preferred communication style—whether casual, technical, professional, or creative.
  • Be sensitive to nuances in phrasing and intent, striving to respond in a way that feels natural and intuitive to the user.
  • Clearly explain your reasoning, especially for decisions, calculations, or interpretations.
  • When using external tools (e.g., browsing, code execution), communicate their use and the outcomes to the user.
  • Encourage collaboration by inviting feedback and iterating on solutions to better meet the user’s needs.
  • Anticipate user needs based on context and prior queries, offering helpful suggestions or additional insights.
  • Stay focused on the task at hand while being ready to pivot efficiently when new priorities emerge.

But I see this in the logs

INFO [open_webui.apps.ollama.main] get_all_models()
{'model': 'llama3.2-vision:latest', 'messages': [{'role': 'user', 'content': '### Task:\nYou are an autocompletion system. Continue the text in <text> based on the completion type in <type> and the given language. \n\n### Instructions:\n1. Analyze <text> for context and meaning. \n2. Use <type> to guide your output: \n - General: Provide a natural, concise continuation. \n - Search Query: Complete as if generating a realistic search query. \n3. Start as if you are directly continuing <text>. Do not repeat, paraphrase, or respond as a model. Simply complete the text. \n4. Ensure the continuation:\n - Flows naturally from <text>. \n - Avoids repetition, overexplaining, or unrelated ideas. \n5. If unsure, return: { "text": "" }. \n\n### Output Rules:\n- Respond only in JSON format: { "text": "<your_completion>" }.\n\n### Examples:\n#### Example 1: \nInput: \nGeneral \nThe sun was setting over the horizon, painting the sky \nOutput: \n{ "text": "with vibrant shades of orange and pink." }\n\n#### Example 2: \nInput: \nSearch Query \nTop-rated restaurants in \nOutput: \n{ "text": "New York City for Italian cuisine." } \n\n---\n### Context:\n<chat_history>\n\n</chat_history>\nsearch query \nHow many r are there in this word strawberrrr \n#### Output:\n'}], 'stream': False, 'metadata': {'task': 'autocomplete_generation', 'task_body': {'model': 'llama3.2-vision:latest', 'prompt': 'How many r are there in this word strawberrrr', 'type': 'search query', 'stream': False}, 'chat_id': None}}

Installation Method

Conda
pip install open-webui

Environment

  • Open WebUI Version: 0.47

  • Ollama (if applicable): 0.5.1

  • Operating System: macOS 15.1.1 (24B2091)

  • Browser (if applicable): Chrome Version 131.0.6778.109

Confirmation:

  • [x] I have read and followed all the instructions provided in the README.md.
  • [x] I am on the latest version of both Open WebUI and Ollama.
  • [ ] I have included the browser console logs.
  • [ ] I have included the Docker container logs.
  • [x] I have provided the exact steps to reproduce the bug in the "Steps to Reproduce" section below.

Expected Behavior:

I expect the system prompt to be passed to Ollama.
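For contrast, a chat request that does carry the system prompt would include a system-role message. A minimal sketch of such a payload sent to Ollama's /api/chat endpoint (the localhost address and the abbreviated prompt text are assumptions for illustration, not taken from this report):

```python
import json
import urllib.request

# Hypothetical chat payload in which the user's system prompt appears
# as a system-role message — the behavior the reporter expects.
payload = {
    "model": "llama3.2-vision:latest",
    "messages": [
        {"role": "system", "content": "You are an advanced AI assistant designed to help users ..."},
        {"role": "user", "content": "How many r are there in this word strawberrrr"},
    ],
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",  # default Ollama address (assumption)
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment with a running Ollama server
print(payload["messages"][0]["role"])  # → system
```

Comparing a payload like this against what actually appears in the Ollama logs is one way to confirm whether the system message is being dropped.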

Actual Behavior:

A generic prompt appears to be passed instead of my defined system prompt.

Reproduction Details

Steps to Reproduce:

Add a system prompt via Settings > General > System Prompt.

Logs and Screenshots

Browser Console Logs:

{'model': 'llama3.2-vision:latest', 'messages': [{'role': 'user', 'content': '### Task:\nYou are an autocompletion system. Continue the text in <text> based on the completion type in <type> and the given language. \n\n### Instructions:\n1. Analyze <text> for context and meaning. \n2. Use <type> to guide your output: \n - General: Provide a natural, concise continuation. \n - Search Query: Complete as if generating a realistic search query. \n3. Start as if you are directly continuing <text>. Do not repeat, paraphrase, or respond as a model. Simply complete the text. \n4. Ensure the continuation:\n - Flows naturally from <text>. \n - Avoids repetition, overexplaining, or unrelated ideas. \n5. If unsure, return: { "text": "" }. \n\n### Output Rules:\n- Respond only in JSON format: { "text": "<your_completion>" }.\n\n### Examples:\n#### Example 1: \nInput: \nGeneral \nThe sun was setting over the horizon, painting the sky \nOutput: \n{ "text": "with vibrant shades of orange and pink." }\n\n#### Example 2: \nInput: \nSearch Query \nTop-rated restaurants in \nOutput: \n{ "text": "New York City for Italian cuisine." } \n\n---\n### Context:\n<chat_history>\n\n</chat_history>\nsearch query \nHow many r are there in this word strawberrrr \n#### Output:\n'}], 'stream': False, 'metadata': {'task': 'autocomplete_generation',


@tjbck commented on GitHub (Dec 9, 2024):

![image](https://github.com/user-attachments/assets/cc25a1c1-7aac-498e-b90e-d86386b46aa5)

definitely being sent.

Reference: github-starred/open-webui#2960