issue: @Model not working in prompt templates #6400

Closed
opened 2025-11-11 16:54:05 -06:00 by GiteaMirror · 1 comment

Originally created by @cybergexl on GitHub (Sep 14, 2025).

Check Existing Issues

  • I have searched for any existing and/or related issues.
  • I have searched for any existing and/or related discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

v0.6.28

Ollama Version (if applicable)

No response

Operating System

Debian

Browser (if applicable)

No response

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
      • Start with the initial platform/version/OS and dependencies used,
      • Specify exact install/launch/configure commands,
      • List URLs visited, user input (incl. example values/emails/passwords if needed),
      • Describe all options and toggles enabled or changed,
      • Include any files or environmental changes,
      • Identify the expected and actual result at each stage,
      • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

In Workspace > Prompts, when a prompt's first line begins with @<name of the desired model>, as one would type it in a conversation, the mention should be interpreted when the prompt is used and the model switched dynamically, since it is specified on the first line.

Actual Behavior

@<name of the desired model> is inserted into the prompt as plain text and is not interpreted when the conversation is sent.

Steps to Reproduce

  1. Go to Workspace > Prompts > Add New Prompt.
  2. Enter a title and a command (anything will do).
  3. In the prompt content, enter “@Sonoma Sky Alpha” (for example, to choose a specific model) and add content to the line, such as “Which LLM model am I using?” (see the example prompt body below).
  4. Save.
  5. Create a new conversation and invoke the prompt with “/”.
  6. Send.
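
For concreteness, the saved prompt body from step 3 looks like this (plain text; the model name is just the example from the report):

```
@Sonoma Sky Alpha
Which LLM model am I using?
```

When this prompt is invoked with “/” and sent, the @-mention on the first line is passed through as literal text instead of switching the model.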

Logs & Screenshots

Screenshot: https://github.com/user-attachments/assets/a7d4d008-3d27-4c26-bcc1-9093a7997071

Additional Information

No response

GiteaMirror added the bug label 2025-11-11 16:54:05 -06:00

@tjbck commented on GitHub (Sep 15, 2025):

Not how it's intended to be used, you'll have to click on the model name.
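
A possible workaround (my own suggestion, not something the maintainer proposed): if the goal is to pick the model programmatically rather than by clicking the suggestion, the model can be set explicitly in a request to Open WebUI's OpenAI-compatible chat endpoint instead of being embedded in the prompt text. A minimal sketch, assuming Open WebUI runs at http://localhost:3000, an API key (created under Settings > Account) is exported as OWUI_API_KEY, and the model ID "sonoma-sky-alpha" is hypothetical:

```python
import os
import requests

# Hypothetical workaround: select the model in the request body,
# not in the prompt text. Assumes a valid API key in $OWUI_API_KEY.
resp = requests.post(
    "http://localhost:3000/api/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OWUI_API_KEY']}"},
    json={
        "model": "sonoma-sky-alpha",  # hypothetical model ID
        "messages": [
            {"role": "user", "content": "Which LLM model am I using?"}
        ],
    },
    timeout=60,
)
resp.raise_for_status()
# OpenAI-style response shape (an assumption based on the
# endpoint being OpenAI-compatible).
print(resp.json()["choices"][0]["message"]["content"])
```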

Reference: github-starred/open-webui#6400