[GH-ISSUE #22284] feat: Knowledge Base / RAG Integration in Channels #58354

Closed
opened 2026-05-05 23:00:19 -05:00 by GiteaMirror · 3 comments

Originally created by @nrmjeremy on GitHub (Mar 6, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/22284

Check Existing Issues

  • I have searched for all existing open AND closed issues and discussions for similar requests. I have found none that is comparable to my request.

Verify Feature Scope

  • I have read through and understood the scope definition for feature requests in the Issues section. I believe my feature request meets the definition and belongs in the Issues section instead of the Discussions.

Problem Description

Summary

Enable Knowledge Base retrieval (RAG) and built-in knowledge tools when @mentioning models in Channels, consistent with how they work in standard Chat.

Current Behavior

When a model is @mentioned in a Channel, the message is routed through /api/v1/channels/.../messages/post. At this point:

  • Only web_search and web_fetch tools are injected into the tool list sent to the model
  • Knowledge tools (query_knowledge_bases, list_knowledge_bases, query_knowledge_files) are not injected
  • RAG context from Knowledge Bases attached to the model wrapper is not injected
  • The model has no access to any configured Knowledge Base, regardless of how the model is configured

This is confirmed in Docker logs during a Channel @mention:

🔧 Tool: web_search [type=web_search_20260209]
🔧 Tool: web_fetch [type=web_fetch_20260209]

No knowledge tools appear. No query_collection or hybrid_search calls are made.

By contrast, the same model used in a standard Chat correctly receives RAG-injected context from its attached Knowledge Base.

Steps to Reproduce

  1. Create a Knowledge Base and index files into it
  2. Create a model wrapper in Workspace → Models, attach the Knowledge Base (Focused Retrieval)
  3. Create a Channel and @mention the model wrapper with a question answerable from the KB
  4. Observe: model has no KB access, responds without any knowledge context
  5. Start a standard Chat with the same model wrapper, ask the same question
  6. Observe: RAG retrieval fires, model responds with KB content

Desired Solution you'd like

When a model is @mentioned in a Channel:

  1. Knowledge Bases attached to that model wrapper should be queried via RAG, with relevant chunks injected into context — consistent with standard Chat behavior
  2. Built-in knowledge tools (query_knowledge_bases, list_knowledge_bases, etc.) should be available in the tool list, consistent with Native Function Calling behavior in standard Chat
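To make the first point concrete, here is a minimal sketch of what RAG context injection on the Channel path could look like. This is purely illustrative: the function names, message shapes, and field names below are hypothetical and are not taken from the Open WebUI codebase.

```python
# Hypothetical sketch of RAG context injection for the Channel message path.
# All names here are illustrative assumptions, not Open WebUI's actual internals.

def inject_rag_context(messages, chunks):
    """Prepend retrieved Knowledge Base chunks as a system message,
    mirroring how the standard Chat path injects context."""
    if not chunks:
        return messages
    context = "\n\n".join(chunk["text"] for chunk in chunks)
    system = {
        "role": "system",
        "content": f"Use the following knowledge context when relevant:\n{context}",
    }
    return [system] + messages

# Example: a Channel @mention message plus two chunks retrieved from the KB
messages = [{"role": "user", "content": "@architect what does the design doc say about auth?"}]
chunks = [
    {"text": "Auth uses OIDC via Keycloak."},
    {"text": "Tokens are rotated every 24h."},
]
augmented = inject_rag_context(messages, chunks)
print(augmented[0]["role"])  # system
```

The key property is that the same injection step runs on both paths, so a model wrapper behaves identically whether it is @mentioned in a Channel or addressed in a Chat.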

Use Case / Why This Matters

Channels are positioned as a collaborative workspace for teams and AI models working together. A core enterprise workflow this enables is:

  • Create a Channel per project
  • Use threads for specific workstreams (dev, architecture, ops)
  • @mention specialist AI models (architect, developer, etc.) alongside human team members
  • All artifacts and documentation flow into a shared Knowledge Base
  • Models reference the KB to maintain context across sessions — nothing is lost or repeated

This workflow is fully functional except for KB access in Channels. The Knowledge Base indexes correctly, model wrappers with KB attached work correctly in Chat, and the Channel threading/mention system works correctly. The only missing piece is the KB pipeline not being invoked on the Channel message path.

This is not a configuration issue — it is a gap in the Channel message routing pipeline. Adding KB tool injection and/or RAG context injection to the Channel path would complete an otherwise production-ready collaborative AI workflow.

Suggested Implementation

The Channel message handler should invoke the same RAG/tool injection pipeline that the standard chat completion handler uses, scoped to the model being @mentioned and its configured Knowledge Base attachments.
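As a rough illustration of the suggested change, the sketch below contrasts the tool list the Channel path builds today (per the logs above) with a version that delegates to the same assembly step the Chat path effectively performs. None of these function or field names come from the Open WebUI codebase; they only illustrate the shape of the fix.

```python
# Hypothetical sketch of the proposed fix. Function and field names are
# assumptions for illustration, not Open WebUI's actual internals.

KNOWLEDGE_TOOLS = ["query_knowledge_bases", "list_knowledge_bases", "query_knowledge_files"]
WEB_TOOLS = ["web_search", "web_fetch"]

def build_chat_tools(model):
    """What the standard Chat path effectively does: web tools, plus
    knowledge tools whenever the model wrapper has Knowledge Bases attached."""
    tools = list(WEB_TOOLS)
    if model.get("knowledge_bases"):
        tools += KNOWLEDGE_TOOLS
    return tools

def build_channel_tools_current(model):
    """What the Channel path does today, per the Docker logs: web tools only."""
    return list(WEB_TOOLS)

def build_channel_tools_proposed(model):
    """Proposed: delegate to the same assembly the Chat path uses,
    scoped to the @mentioned model wrapper."""
    return build_chat_tools(model)

model = {"id": "architect", "knowledge_bases": ["project-docs"]}
print(build_channel_tools_current(model))   # ['web_search', 'web_fetch']
print(build_channel_tools_proposed(model))  # web tools plus the knowledge tools
```

Scoping the lookup to the @mentioned model keeps the change minimal: no new configuration surface, just reuse of the existing per-model tool and RAG assembly on the Channel route.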

Alternatives Considered

No response

Additional Context

Environment

  • Open WebUI Version: v0.8.8
  • Installation: Docker on macOS (Apple Silicon / Mac Studio)
  • Embedding: Ollama / nomic-embed-text
  • Reranking: BAAI/bge-reranker-v2-m3
  • Hybrid Search: Enabled
  • Models: Anthropic Claude via API (claude-opus-4-6, etc.)
  • Content Extraction: Apache Tika

@i4j5 commented on GitHub (Mar 6, 2026):

the same problem


@nrmjeremy commented on GitHub (Mar 6, 2026):

> the same problem

@i4j5 Thanks for the quick confirmation — good to know others are hitting this too.
I'd like to contribute a fix if it would be welcome. Before spinning up on it, a couple of questions for the maintainers:

Is there an active branch or in-progress work toward this? I don't want to duplicate effort or submit a PR that conflicts with something already underway.

If a community PR would be welcome, any guidance on scope? From what I can tell the fix is in the Channel message routing pipeline — it needs to invoke the same RAG/tool injection that the standard chat completion path uses, scoped to the @mentioned model's configured Knowledge Base. Happy to be corrected if the right approach is different.

For context on my setup and diagnosis: I've traced this through Docker logs on a v0.8.8 instance. The Channel path only injects web_search and web_fetch tools — no knowledge tools appear, and no query_collection or hybrid_search calls are made. The same model wrapper with an attached Knowledge Base works correctly in standard Chat. So the gap appears to be isolated to the Channel message handler.

Happy to contribute this back to the community if it's a good use of effort. Let me know!


@tjbck commented on GitHub (Mar 7, 2026):

#8050

Reference: github-starred/open-webui#58354