[GH-ISSUE #23823] issue: Channel @mention ignores model's native function calling settings and sends no tools to LLM (v0.8.12) #58747

Closed
opened 2026-05-05 23:50:21 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @shommey on GitHub (Apr 16, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/23823

Check Existing Issues

  • I have searched for any existing and/or related issues.
  • I have searched for any existing and/or related discussions.
  • I have also searched in the CLOSED issues AND CLOSED discussions and found no related items (your issue might already be addressed on the development branch!).
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

v0.8.12

Ollama Version (if applicable)

No response

Operating System

openSUSE Tumbleweed

Browser (if applicable)

Chrome, Firefox, Zen

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

When tagging a model in a channel (@model), the chat completions request should include:

  • The tools array with at minimum the built-in channel tools (search_channels, search_channel_messages, view_channel_thread, view_channel_message)
  • Proper channel context so the model can reference conversation history

Per the official docs, the model should be able to autonomously navigate channel history using native function calling.
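
For reference, a request that did attach the built-in channel tools would look roughly like the sketch below. This is a hypothetical shape assuming OpenAI-style function-tool entries; the parameter schema shown is illustrative, not Open WebUI's actual tool definition:

```json
{
  "model": "qwen3.5:9b",
  "messages": [
    { "role": "system", "content": "..." },
    { "role": "user", "content": "Shommey: suse.qwen3.5:9b Sup?" }
  ],
  "stream": true,
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "search_channel_messages",
        "description": "Search messages in the current channel",
        "parameters": {
          "type": "object",
          "properties": { "query": { "type": "string" } },
          "required": ["query"]
        }
      }
    }
  ]
}
```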

Actual Behavior

The payload sent to the LLM is completely bare: no tools array, no features, no metadata, and stream: false is hardcoded. The only "context" is the current thread, stuffed as plain text into the system prompt:

```json
{
  "model": "qwen3.5:9b",
  "messages": [
    {
      "role": "system",
      "content": "You are suse.qwen3.5:9b, participating in a threaded conversation. Be concise and conversational.Here's the thread history:\n\n\nShommey: suse.qwen3.5:9b Sup?\n\n\nContinue the conversation naturally as suse.qwen3.5:9b, addressing the most recent message while being aware of the full context."
    },
    {
      "role": "user",
      "content": "Shommey: suse.qwen3.5:9b Sup?"
    }
  ],
  "stream": false
}
```

Steps to Reproduce

  1. Create a workspace model with Function Calling set to Native, Builtin Tools enabled, Channels subcategory enabled
  2. Verify the model calls tools correctly in a regular chat
  3. Enable Channels (Admin Panel → Settings → General → Channels Beta, and set ENABLE_CHANNELS=true)
  4. Create a channel and tag the model: @modelname hello
  5. Observe the request payload reaching the LLM backend: no tools array is present, and no channel context is included beyond the newly created thread
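
To make step 5 concrete, a small check like the following can flag which expected tools are absent from a captured payload. This is a sketch: the tool names come from the Expected Behavior section above, and the OpenAI-style `{"type": "function", "function": {"name": ...}}` entry shape is an assumption, not Open WebUI's confirmed internal format:

```python
# Built-in channel tools the docs say should be attached
# (names taken from the Expected Behavior section of this report).
EXPECTED_CHANNEL_TOOLS = {
    "search_channels",
    "search_channel_messages",
    "view_channel_thread",
    "view_channel_message",
}

def missing_channel_tools(payload: dict) -> set:
    """Return the expected built-in channel tools absent from a
    chat-completions payload's `tools` array (assumes OpenAI-style
    function-tool entries)."""
    present = {
        t.get("function", {}).get("name")
        for t in payload.get("tools", [])
    }
    return EXPECTED_CHANNEL_TOOLS - present

# The bare payload captured in this report has no `tools` key at all,
# so every expected channel tool is reported missing:
captured = {"model": "qwen3.5:9b", "messages": [], "stream": False}
print(sorted(missing_channel_tools(captured)))
```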

Logs & Screenshots

No errors in browser logs or server (docker) logs.

Additional Information

  • No errors appear in Docker logs when triggering the channel mention
  • stream: false is hardcoded in the channel payload, which is also a separate known issue with native tool execution (#18121)
  • The docs describe channel-aware AI as a core feature, but the underlying wiring to make it work appears incomplete
GiteaMirror added the bug label 2026-05-05 23:50:21 -05:00
Author
Owner

@tjbck commented on GitHub (Apr 17, 2026):

#8050


Reference: github-starred/open-webui#58747