[GH-ISSUE #21164] issue: Models do not use associated knowledge collections as they did in versions prior to v0.7.0 #58073

Closed
opened 2026-05-05 22:17:18 -05:00 by GiteaMirror · 35 comments
Owner

Originally created by @Baronco on GitHub (Feb 4, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/21164

Check Existing Issues

  • I have searched for any existing and/or related issues.
  • I have searched for any existing and/or related discussions.
  • I have also searched in the CLOSED issues AND CLOSED discussions and found no related items (your issue might already be addressed on the development branch!).
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

v0.7.2

Ollama Version (if applicable)

No response

Operating System

Windows

Browser (if applicable)

No response

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

We have several assistants responsible for helping and sending emails with documents to manage matters related to human resources, such as vacation requests, allowances, etc. Each model has its corresponding knowledge document; most of them are in Markdown.

The basic setup is function calling Native, temperature 0.5, and the respective OpenAPI tools to send attachments by email.

Up to version v0.6.4, when users asked questions the assistant was able to automatically use the attached knowledge to respond. For example, in v0.6.4 the model automatically pulled in knowledge to answer questions about human resources processes:

Image

This is how I associate the knowledge collection or knowledge files with my models.

Image

Actual Behavior

The current behavior in the latest version of OWUI is that the model does not use the knowledge collection; it attempts to call the new OWUI tools but those are also unable to find the knowledge.

Image

I don’t understand whether this is a bug or whether we are simply using the new OWUI tools incorrectly. This behavior causes the model either to hallucinate in its responses or to say that it does not have the knowledge to answer.

Steps to Reproduce

  1. Install the Docker image: docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama

  2. Create a new model and add its corresponding system prompt.

  3. Configure in advanced parameters: function calling native and temperature 0.5.

  4. Select the tools + the new built-in tools from version 0.7.2.

  5. Create the knowledge collection and select it in the model settings or upload the file.

  6. Create a new chat and start asking the model how process X works as described in that knowledge.

Logs & Screenshots

To be precise, no error appears in the chat, and there are no obvious errors in the logs either.

2026-02-04 12:49:30.853 | 2026-02-04 17:49:30.852 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 172.17.0.1:54078 - "POST /api/v1/chats/new HTTP/1.1" 200
2026-02-04 12:49:30.900 | 2026-02-04 17:49:30.900 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 172.17.0.1:54078 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200
2026-02-04 12:49:30.970 | 2026-02-04 17:49:30.970 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 172.17.0.1:54078 - "POST /api/v1/chats/039cee57-1a1f-4148-a0df-a81c9289f794 HTTP/1.1" 200
2026-02-04 12:49:30.986 | 2026-02-04 17:49:30.985 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 172.17.0.1:54078 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200
2026-02-04 12:49:31.081 | 2026-02-04 17:49:31.081 | INFO     | open_webui.routers.openai:get_all_models:477 - get_all_models()
2026-02-04 12:49:31.088 | 2026-02-04 17:49:31.088 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 172.17.0.1:54078 - "POST /api/chat/completions HTTP/1.1" 200
2026-02-04 12:49:31.110 | 2026-02-04 17:49:31.109 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 172.17.0.1:54078 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200
2026-02-04 12:49:31.322 | 2026-02-04 17:49:31.322 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 172.17.0.1:33420 - "GET /api/v1/tools/ HTTP/1.1" 200
2026-02-04 12:49:33.233 | 
2026-02-04 12:49:33.234 | Batches:   0%|          | 0/1 [00:00<?, ?it/s]
2026-02-04 12:49:33.234 | Batches: 100%|██████████| 1/1 [00:00<00:00, 54.91it/s]
2026-02-04 12:49:33.236 | 2026-02-04 17:49:33.235 | INFO     | open_webui.routers.openai:get_all_models:477 - get_all_models()
2026-02-04 12:49:37.690 | 2026-02-04 17:49:37.690 | INFO     | open_webui.routers.openai:get_all_models:477 - get_all_models()
2026-02-04 12:49:37.713 | 2026-02-04 17:49:37.712 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 172.17.0.1:56508 - "POST /api/chat/completed HTTP/1.1" 200
2026-02-04 12:49:37.772 | 2026-02-04 17:49:37.772 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 172.17.0.1:56508 - "POST /api/v1/chats/039cee57-1a1f-4148-a0df-a81c9289f794 HTTP/1.1" 200
2026-02-04 12:49:37.786 | 2026-02-04 17:49:37.786 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 172.17.0.1:56508 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200
2026-02-04 12:49:39.358 | 2026-02-04 17:49:39.357 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 172.17.0.1:56508 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200

Additional Information

No response

GiteaMirror added the bug label 2026-05-05 22:17:18 -05:00

@owui-terminator[bot] commented on GitHub (Feb 4, 2026):

🔍 Similar Issues Found

I found some existing issues that might be related to this one. Please check if any of these are duplicates or contain helpful solutions:

  1. #20824 issue: Response from model does not appear until page refresh
    by NTShop • Jan 20, 2026 • bug

  2. #20676 issue: Cloud models fail to use native tools
    by 0x7CFE • Jan 15, 2026 • bug

  3. #20361 Issue: Large-scale model setting-related functionality fails.
    by shentong0722 • Jan 04, 2026 • bug

  4. #19610 issue: Models not appearing for non-admin users
    by westbrook-ai • Nov 30, 2025 • bug

  5. #19615 issue: [TEST] Models Not Available to Users
    by westbrook-ai • Nov 30, 2025 • bug

  6. #19711 issue: Editing function for models broken
    by skleffmann • Dec 03, 2025 • bug

  7. #19899 issue: openrouter Models not showing up in the model selection list.
    by AZComputerSolutions • Dec 12, 2025 • bug

  8. #19188 issue: Model drop-down fails to show models from remote hosts (ollama, llama.cpp)
    by d-shehu • Nov 14, 2025 • bug

  9. #19549 issue: usage model setting becomes unticked when model is changed
    by YetheSamartaka • Nov 27, 2025 • bug, confirmed issue

  10. #19103 issue: no response from the model when ask in "channels"
    by silenceroom • Nov 11, 2025 • bug


💡 Tips:

  • If this is a duplicate, please consider closing this issue and adding any additional details to the existing one
  • If you found a solution in any of these issues, please share it here to help others

This comment was generated automatically by a bot. Please react with a 👍 if this comment was helpful, or a 👎 if it was not.


@Classic298 commented on GitHub (Feb 4, 2026):

With native function calling you need the model to first find the knowledge base you attached and then query it, e.g. add it to the system prompt of the model.

Or don't use native function calling here.

Docs will be updated to reflect this.


@Baronco commented on GitHub (Feb 4, 2026):

Do you mean including an explicit instruction with the name of the knowledge base or the collection ID?


@Classic298 commented on GitHub (Feb 4, 2026):

I mean telling the model to first list the knowledge bases it has access to using list_knowledge_bases; then it knows which knowledge base to query (the only one it has access to, the one you added) and can query it.


@Baronco commented on GitHub (Feb 4, 2026):

I am adding the following to the end of my system prompt:

# Available knowledge to use the built-in tools for knowledge queries

knowledge -> 0f818570-0d05-4fcb-bc63-a12cc6c8a378

I understand that the collection ID can be obtained manually from the workspace URL:
http://localhost:3000/workspace/knowledge/0f818570-0d05-4fcb-bc63-a12cc6c8a378

My model keeps trying to perform searches using the query_knowledge_files tool without getting any results:

{
  "query": "maximum vacation days",
  "knowledge_ids": [
    "0f818570-0d05-4fcb-bc63-a12cc6c8a378"
  ],
  "count": 5
}
[]

@Classic298 commented on GitHub (Feb 4, 2026):

  1. Did you enable the knowledge base / file as full context? If yes, disable it. This currently doesn't work; it's a bug.
  2. Try to have the model use list_knowledge_bases first and then use query_knowledge_files.

reference here https://docs.openwebui.com/features/plugin/tools/#built-in-system-tools-nativeagentic-mode
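The two-step flow suggested above can be sketched as plain payload construction. The tool names (list_knowledge_bases, query_knowledge_files) and the query fields (query, knowledge_ids, count) come from this thread; the return shape of the listing call and the helper function are assumptions for illustration only, not the actual Open WebUI tool API.

```python
# Hypothetical sketch of the two-step agentic flow: the model first
# lists the knowledge bases it can access, then scopes its query to
# the IDs it got back. The list result shape ({"id": ..., "name": ...})
# is an assumption for illustration.

def build_query_call(knowledge_bases, question, count=5):
    """Build a query_knowledge_files payload from a prior
    list_knowledge_bases result."""
    return {
        "query": question,
        "knowledge_ids": [kb["id"] for kb in knowledge_bases],
        "count": count,
    }

# Example: the model listed one accessible knowledge base...
listed = [{"id": "0f818570-0d05-4fcb-bc63-a12cc6c8a378", "name": "knowledge"}]
# ...and now queries only that collection.
call = build_query_call(listed, "maximum vacation days")
```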


@Baronco commented on GitHub (Feb 4, 2026):

I just disabled that "full context" option, but I'm still not getting any results:

Image
Image

Honestly, I don't understand what I'm doing wrong. Please translate that :(


@Classic298 commented on GitHub (Feb 4, 2026):

No, you disabled file context. Enable it again.

You should disable full context:

Open model
Go to where you added the knowledge
Click on the knowledge
Ensure the toggle is turned off


@Classic298 commented on GitHub (Feb 4, 2026):

Also let the model handle the tool calls. Don't tell it the id exactly. It should handle it.


@Baronco commented on GitHub (Feb 5, 2026):

  1. File context enabled again:
Image
  2. Full context disabled:
Image
  3. Toggle disabled:
Image
  4. Still not getting any results 😭
Image

would you be able to share your exact configuration with a model so that it can answer questions based on the assigned knowledge base? 🙇‍♂️

I'm no longer sure if this is a bug, because this did not occur before version 0.7.0.


@Classic298 commented on GitHub (Feb 5, 2026):

Interesting. OH WAIT

Did you attach only a single file?

That might be it

Try putting that .md file into a knowledge base first

and then attach the knowledge base

do that


@Baronco commented on GitHub (Feb 5, 2026):

As you said, I was attaching a single document. I tried what you suggested and now it works; I am attaching the steps:

  1. I create a new knowledge collection and add the respective markdown document:
    Image

  2. In the model settings, I add the knowledge collection:
    Image

  3. The enabled capabilities I have are:
    Image

  4. I keep my original configurations of the Documents module with Bypass Embedding and Retrieval enabled:
    Image

  5. I test again by asking about a topic contained in that knowledge, and this time the assistant does use the query_knowledge_files tool and gets a response. Just to clarify, in the system prompt, I removed any instruction that told it which knowledge bases are available and their IDs. It worked!
    Image

  6. But if I try to upload the .md file with the knowledge in the settings instead of selecting the created collection, the query_knowledge_files tool again malfunctions and gets no results. For me, this is clearly a bug 🚨. Do you know if this behavior is associated with any currently reported bug?
    Image

Finally, the documentation of the built-in tools does not mention how query_knowledge_files works: which embedding model does it use? What is the chunk size? Is the chunk size measured in tokens or characters? Can this behavior be modified in the Documents module by disabling Bypass Embedding and Retrieval?


@Classic298 commented on GitHub (Feb 5, 2026):

@Baronco don't worry, docs will be updated to make the descriptions better. In fact, docs are already updated on the dev branch; the changes will go live on the docs page once the next version releases.

RE this - great observation. @jimbo-p do you want to look into this? Your PR is already tackling that exact code area, I believe.


@jimbo-p commented on GitHub (Feb 5, 2026):

@Classic298 sure, this is a good find and should fit right in with my stuff. Will try and get on this tonight!


@Classic298 commented on GitHub (Feb 5, 2026):

thanks much appreciated. new version is imminent


@jimbo-p commented on GitHub (Feb 6, 2026):

@Classic298
Note: the discussion below assumes native tools are on and the 'Use Entire Document' toggle is not enabled. Previous discussions / PRs fix the 'Use Entire Document' issue, and it is completely ignored by knowledge search when enabled.

So one weird scenario about this one. Right now, the behavior is:

  1. No KBs attached, native tools looks at / for any KBs you have access to (public / you created) and can search those.
  2. At least one KB attached: native tools is limited to look ONLY in that KB, even if you technically have access to others. The fact that it's attached directly to the model signals you don't want native-tool KB searches to look anywhere but those attached KBs.

That all makes sense. But now when a file enters the scene, a new condition is introduced:
No KBs attached, a file is attached
a) Only search the attached file(s)?
b) Search KBs you have access to (public / you created) + add the file to the search.

(Other conditions involving files I think are obvious. Namely: No KB, No file - (1) above. At least one KB, at least one or no files, (2) above)

So in the scenario

No KBs attached, a file is attached

What makes the most sense to me is attaching a file to a custom model simply adds the file to the knowledge search... whatever that search may be. So if the knowledge search was going to look at your public KBs + KBs you created (because you hadn't attached a KB to the model), add a file and that will also be searched in addition to the KB collections.

What will NOT happen is if you attach a file and no KBs are attached, knowledge search will be scoped to only that file.

I'm going to build this out now. Let me know if you disagree.
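The scenario matrix above can be sketched as a small scoping function. This is purely illustrative, not the actual Open WebUI implementation, and it encodes the proposal as stated in this comment, which is revised in the comments that follow.

```python
# Sketch of the knowledge-search scoping jimbo-p enumerates above,
# including his initial proposal for the file-only case (later revised
# downthread to "only access what is attached"). Illustrative only.

def search_scope(attached_kbs, attached_files, accessible_kbs):
    if attached_kbs:
        # (2) KBs attached: search ONLY those, plus any attached files.
        return attached_kbs + attached_files
    if attached_files:
        # Proposal here: a file broadens the default scope rather than
        # replacing it, so accessible KBs are still searched.
        return accessible_kbs + attached_files
    # (1) Nothing attached: search every KB the user can access.
    return accessible_kbs
```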


@jimbo-p commented on GitHub (Feb 6, 2026):

fix added to #21144


@Classic298 commented on GitHub (Feb 6, 2026):

> That all makes sense. But now when a file enters the scene, a new condition is introduced:
> No KBs attached, a file is attached

Well, then only search the attached file.

It should behave the same way as the other scenario we had:

KB attached -> only has access to that one
File attached -> only has access to that

Don't think of this specifically; think of it more generally.

If I attach ANYTHING to a model, then the implied behaviour in all cases should be "only access what is attached".


@jimbo-p commented on GitHub (Feb 6, 2026):

> If I attach ANYTHING to a model, then the implied behaviour in all cases should be "only access what is attached"

10-4. Done! PR now includes this behavior.


@jimbo-p commented on GitHub (Feb 6, 2026):

> If I attach ANYTHING to a model, then the implied behaviour in all cases should be "only access what is attached"

Just to make sure we are aligned on the technicalities:

If I attach ANYTHING (excluding Notes, which are always full context, and KBs/files set to Full Context) to a model, then the implied behaviour in all cases should be "only access what is attached".
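The rule as stated here could be sketched roughly as below. This is purely illustrative, not Open WebUI's actual code: the function name `searchable_scope`, the `full_context` flag, and the item shape are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of the proposed scoping rule. Names and data shapes
# are illustrative assumptions, not Open WebUI's real API.

def searchable_scope(model_knowledge, user_accessible_collections):
    """Return the collections/files the native knowledge tools may search."""
    # Notes and items set to "Full Context" are injected into the prompt
    # wholesale, so they never participate in retrieval-style search.
    retrievable = [
        item for item in model_knowledge
        if not item.get("full_context") and item.get("type") != "note"
    ]
    if retrievable:
        # Anything searchable attached (KB or single file) pins the
        # knowledge search to exactly those attachments.
        return retrievable
    # Nothing searchable attached: fall back to the user's own access scope.
    return user_accessible_collections
```

Under this sketch, attaching one non-full-context file is enough to exclude every public KB from the search, which matches the "only access what is attached" principle.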


@Classic298 commented on GitHub (Feb 6, 2026):

Hm, I don't think we should have exclusions there.

If I attach a document to a model, no matter whether it's a note, a file, or a knowledge base, and regardless of full context, the access control should limit the model to only what is attached to it.

And if nothing is attached, then the same access controls as the user querying the model are applied.

This is how I see it.
Tim MIGHT see it differently, but I think we should aim for consistency.


@jimbo-p commented on GitHub (Feb 6, 2026):

When it's full context, it's not included in the knowledge search anyway. Full context injects the entire text of the collection into the message, and knowledge search is completely unaware it's there. It would be redundant to search over something whose whole content is already in the message.

Like, if you attach a single file to a model, turn on full context, and then ask it to search knowledge for that file, it will be impossible for knowledge search to find it, because it's not an item the search can see.

<!-- gh-comment-id:3860489927 --> @jimbo-p commented on GitHub (Feb 6, 2026): When it's full context, it's not included in the knowledge search anyways though. Full context injects the entire text of the collection into the message and knowledge search is completely unaware of being there. It would be redundant to search over something which already has all of its context in the message. Like if you attach a single file to a model, turn on full context, and then ask to search knowledge for it, it will be impossible for knowledge to find it because it's not an item knowledge search can see.
Author
Owner

@Classic298 commented on GitHub (Feb 6, 2026):

hmmmmmmmmmmmmmmmmmmmmmmmmmmm


@Classic298 commented on GitHub (Feb 6, 2026):

How does the model query individual files that are not full context with your PR?


@jimbo-p commented on GitHub (Feb 6, 2026):

query_knowledge_files is used. If you attach knowledge to a model, the knowledge-related native tools are scoped down to query_knowledge_files (the original behavior). The function receives model_knowledge, which tells it which files/KBs are available to search.

model_knowledge includes that individual file.
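To make the flow concrete, here is a minimal sketch of a query_knowledge_files-style tool consulting the model's attached knowledge. The signature, the `chunks` field, and the substring matching are assumptions for illustration only; the real implementation performs vector retrieval, not substring search.

```python
# Illustrative only: a naive stand-in for a "query_knowledge_files"-style
# tool. Real retrieval uses embeddings; this sketch just shows how the
# search is scoped to model_knowledge (KBs and individually attached files).

def query_knowledge_files(query: str, model_knowledge: list, count: int = 5):
    """Naive substring search over the attached items' text chunks."""
    hits = []
    for item in model_knowledge:  # each attached KB or file
        for chunk in item.get("chunks", []):
            if query.lower() in chunk.lower():
                hits.append({"source": item["id"], "text": chunk})
    return hits[:count]
```

The key point is the scope: the function only ever sees what is in model_knowledge, so an individually attached file is searchable exactly because it is included there.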


@Classic298 commented on GitHub (Feb 6, 2026):

OK, and in full context the model cannot query it anyway... so your logic is that full context should be exempt...

Hmm.

Here we need a tie-breaker.
Tim will have to decide.

I think that even if full context is enabled, the model should be limited to that and only that knowledge.


@jimbo-p commented on GitHub (Feb 6, 2026):

I'm good either way. I use/administer this in a corporate setting with about 3k users, and I can say that a lot of people don't get this deep. If they are attaching files and marking them full context, they are almost certainly also attaching files/KBs they think are pertinent for the native knowledge search tools to use... that's the intuitive thing to do.

Relying on a KB being public, or on your ownership of it, to give your model permission to search it (AND knowing you shouldn't attach anything else if you want this behavior) is not a common or expected mental model. That is how it currently works and I'm fine with it; I'm just saying I find it rare that this is what people actually expect to happen.

I think the current PR enhances/fixes things regardless: full context is fixed (it doesn't work with native tools at all on the main branch), and attaching a file in non-full-context mode now works too. It may be worth another PR / next version update to really think this behavior through, including which native tools get called in which knowledge scenarios.


@Baronco commented on GitHub (Mar 3, 2026):

Hi everyone, in the current version, knowledge files added to a model whose tool invocation is set to "native" still do not work. The functionality does not behave like it did before v0.7.0; I am still forced to copy all knowledge directly into the system prompt, because the model is no longer able to inject it as it used to. I'm also considering converting each knowledge item into a skill as a workaround.

Additionally, the KB tool is not very useful either :( I see some alternatives, but I believe there is still an issue with the knowledge collections. I'm using Open WebUI v0.8.7.


@Classic298 commented on GitHub (Mar 3, 2026):

@Baronco install 0.8, it's fixed there.


@Baronco commented on GitHub (Mar 17, 2026):

Testing from version 0.8.10:

1. Initial configuration; the knowledge is loaded into the model as a file:

![Image](https://github.com/user-attachments/assets/26cca25b-93fe-4256-bb39-0b5d7e6196db)

1.1 It works OK with the default settings:

![Image](https://github.com/user-attachments/assets/0da73c03-9505-4087-8a8c-7823e86f72c3)

2. Change function calling to Native:

![Image](https://github.com/user-attachments/assets/42ad090f-da37-44cb-8b0a-dabe1d19f85f)

2.2 Two tools are enabled for use with native invocation:

![Image](https://github.com/user-attachments/assets/8996a242-2e04-48d9-bccb-c2c4ca02652b)

2.3 Prompts for searching knowledge and using tools:

![Image](https://github.com/user-attachments/assets/27f0362e-631a-425d-99d1-41394a19e72c)

2.4 It is not even able to query the knowledge using the built-in query_knowledge_files tool:

![Image](https://github.com/user-attachments/assets/dad8b658-cb06-4c9f-9b41-c359a854fcf5)

3. Disabling the built-in tools:

![Image](https://github.com/user-attachments/assets/2dddad01-b973-41be-b92e-7c604d317959)

3.1 In these cases, the assistant only attempts to respond using tools:

![Image](https://github.com/user-attachments/assets/c0b6b826-17b6-4b43-bb87-4a0a58450d65)

3.2 I disabled the tools to ensure that it does not have internet access and to keep it from using knowledge:

![Image](https://github.com/user-attachments/assets/da214b7d-fd10-4525-bf04-6e624a916232)

3.3 In this case, the model does not use the tool and answers from its own knowledge; it is unable to use the attached knowledge file:

![Image](https://github.com/user-attachments/assets/f8e97986-4a6f-42cc-a2fa-a25961692a3f)

I'm still observing the same type of behavior reported. The model isn't injecting the files into the context as it does with the default function calling mode. 🙃


@Classic298 commented on GitHub (Mar 17, 2026):

Can't reproduce. Getting back zero results from the tool usually means you don't have access rights, or there is nothing queryable in that knowledge base.

Don't turn off the built-in tools. That literally removes the model's ability to query anything.

Besides, what model are you using?


@Baronco commented on GitHub (Mar 20, 2026):

Bro, I'm going to go crazy; this isn't working. I'm reading the documentation again and I can't get it to query the KB. I have a markdown document, OWUI_DOCUMENTATION.md, with very short documentation (around 1000 tokens).

  1. In admin settings, under documents I have Bypass Embedding and Retrieval enabled, using Document Intelligence prebuilt-layout as the extraction engine.
  2. In workspace, under models I create a new one.
  3. Model Name "OWUI AGENT", Base Model "model-router" (Microsoft Foundry), System prompt "You are an assistant in charge of answering questions about the Open Web UI project documentation; you have attached knowledge that you must use to answer user queries."
  4. Advanced Params: Function Calling Native and Temperature 0.5
  5. Knowledge → Upload Files, I upload the OWUI_DOCUMENTATION.md file with the option Using Focused Retrieval.
  6. I leave every Capability enabled.
  7. Under Builtin Tools I only leave Knowledge Base enabled, I don't need the rest — save and create.
  8. In a new chat with my agent "OWUI AGENT" I send: "explain me about X-Open-WebUI-Chat-Id and how to use this in owui"
  9. The agent invokes the `query_knowledge_files` tool 3 times with no response:

```
INPUT
query
X-Open-WebUI-Chat-Id header Open Web UI documentation how to use in owui
count
5
OUTPUT
[]
```

  10. No error is visible in the log:
```
2026-03-20T02:48:07.4314696Z 2026-03-20 02:48:07.431 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 181.52.218.17:13591:0 - "GET /api/v1/tools/ HTTP/1.1" 200
2026-03-20T02:48:11.6445324Z 2026-03-20 02:48:11.644 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 181.52.218.17:13591:0 - "POST /api/v1/users/user/settings/update HTTP/1.1" 200
2026-03-20T02:48:21.1138258Z 2026-03-20 02:48:21.113 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 181.52.218.17:13591:0 - "GET /_app/version.json HTTP/1.1" 200
2026-03-20T02:48:50.3352661Z 2026-03-20 02:48:50.335 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 181.52.218.17:13591:0 - "POST /api/v1/chats/new HTTP/1.1" 200
2026-03-20T02:48:50.869696Z 2026-03-20 02:48:50.869 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 181.52.218.17:13595:0 - "GET /_app/version.json HTTP/1.1" 200
2026-03-20T02:48:55.4501214Z 2026-03-20 02:48:55.448 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 181.52.218.17:13600:0 - "GET /api/v1/tools/ HTTP/1.1" 200
2026-03-20T02:48:55.4506314Z 2026-03-20 02:48:55.450 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 181.52.218.17:13595:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200
2026-03-20T02:48:56.1938523Z 2026-03-20 02:48:56.193 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 181.52.218.17:13600:0 - "POST /api/v1/chats/3dc215fb-0999-4665-9765-4c5bf2daed38 HTTP/1.1" 200
2026-03-20T02:49:00.7826856Z 2026-03-20 02:49:00.782 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 181.52.218.17:13600:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200
2026-03-20T02:49:01.4256067Z 2026-03-20 02:49:01.425 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 181.52.218.17:13600:0 - "POST /api/chat/completions HTTP/1.1" 200
2026-03-20T02:49:05.8048591Z 2026-03-20 02:49:05.804 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 181.52.218.17:13600:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200
2026-03-20T02:49:09.5715473Z
Batches:   0%|          | 0/1 [00:00<?, ?it/s]
Batches: 100%|██████████| 1/1 [00:01<00:00,  1.11s/it]
Batches: 100%|██████████| 1/1 [00:01<00:00,  1.11s/it]
2026-03-20T02:49:12.0453546Z
Batches:   0%|          | 0/1 [00:00<?, ?it/s]
Batches: 100%|██████████| 1/1 [00:00<00:00, 81.83it/s]
2026-03-20T02:49:13.8276154Z
Batches:   0%|          | 0/1 [00:00<?, ?it/s]
Batches: 100%|██████████| 1/1 [00:00<00:00, 11.34it/s]
2026-03-20T02:49:35.1180079Z 2026-03-20 02:49:35.117 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 181.52.218.17:13600:0 - "POST /api/chat/completed HTTP/1.1" 200
2026-03-20T02:49:35.94252Z 2026-03-20 02:49:35.942 | INFO     | uvicorn.protocols.http.httptools_impl:send:483 - 181.52.218.17:13600:0 - "POST /api/v1/chats/3dc215fb-0999-4665-9765-4c5bf2daed38 HTTP/1.1" 200
```

Could these other issues be related to this?

#22262 #22181 #21299

I'm using Open Web UI v0.8.10 from an Azure Web App with the following image: open-webui/open-webui:main

I don't know what else to try but no matter what I attempt, it doesn't work 😭😭😭

[OWUI_DOCUMENTATION.md](https://github.com/user-attachments/files/26131558/OWUI_DOCUMENTATION.md)


@Classic298 commented on GitHub (Mar 20, 2026):

> under documents I have Bypass Embedding and Retrieval enabled

Disable this.

> "model-router" (Microsoft Foundry)

Does this have support for function calling?

> Temperature 0.5

Don't set it. Setting a temperature AND using tool calling is not supported by many models.


@Baronco commented on GitHub (Mar 20, 2026):

Yes, the MS Foundry model router is capable of using tools; I use tools via MCPO or HTTP MCPs without any problems. For now, I'm going to give up on using Open WebUI knowledge collections, because I can't get them to work as the documentation describes.


@eo-ai-team commented on GitHub (May 1, 2026):

Same issue when using OpenAI and native function calling. It seems the model doesn't pick up the file or collection attached via the # command. Only when it explicitly searches with list-knowledge & query-knowledge- is the model able to find the contents of my knowledge files. Very frustrating.

The only fix is to always manually toggle to "entire document" when attaching a file or collection to a chat message.

Reference: github-starred/open-webui#58073