[GH-ISSUE #19101] issue: RAG template query placeholder escaping logic error #34302
Originally created by @matiboux on GitHub (Nov 11, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/19101
Check Existing Issues
Installation Method
Git Clone
Open WebUI Version
v0.6.36
Ollama Version (if applicable)
No response
Operating System
Ubuntu 24.04 LTS (WSL2)
Browser (if applicable)
No response
Confirmation
I have read and followed all instructions in README.md.
Expected Behavior
When a RAG context (from tool results or otherwise) contains the string "{{QUERY}}" or "[query]", the RAG template formatting is expected to escape these occurrences and use its own unique query placeholder string, ensuring it does not modify or replace the context's original content.
Actual Behavior
When a RAG context (from tool results or otherwise) contains the string "{{QUERY}}" or "[query]", the RAG template formatting substitutes the actual user query into those occurrences instead of escaping them, thereby rewriting the context's original content.
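To make the failure mode concrete, here is a minimal, self-contained sketch of the substitution pattern described above; it is not the actual open-webui code, and the template, context, and query strings are illustrative:

```python
# Minimal sketch of the bug class: the query is substituted *after* the
# context has been injected, so placeholder strings inside the context
# get rewritten as well. Illustrative only; not the open-webui source.
template = "Context:\n{{CONTEXT}}\n\nQuestion:\n{{QUERY}}"
context = "Tool output containing the literal markers {{QUERY}} and [query]."
query = "What time is it?"

out = template.replace("{{CONTEXT}}", context)  # context is now inside the template
out = out.replace("{{QUERY}}", query)           # also hits the marker inside the context
out = out.replace("[query]", query)             # same problem for the bracket form

print(out)  # the context's "{{QUERY}}" and "[query]" are gone, replaced by the query
```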
Steps to Reproduce
1. Run `npm run dev` to start the frontend.
2. Run `sh dev.sh` from the `backend/` directory to start the backend.
3. Set up a custom API proxy that forwards the `/chat/completions` endpoint to a real OpenAI-compatible API, and whose only other role is to print out the incoming requests.
4. Modify a tool's `get_current_time` method to return a string containing `{{QUERY}}` and/or `[query]` (a sketch follows this list).

Note: Alternatively, you could add logs in the backend to print the request parameters, but using a custom API ensures backend code is untouched for this test.
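For step 4, a tool along these lines can be used. This is a minimal sketch assuming Open WebUI's standard `Tools` class convention for tools; the method body and returned text are illustrative:

```python
from datetime import datetime


class Tools:
    def get_current_time(self) -> str:
        """
        Get the current time.
        """
        now = datetime.now().strftime("%H:%M:%S")
        # Deliberately embed the raw placeholder strings so we can check
        # whether the RAG template formatting escapes them or rewrites them.
        return f"Current time: {now} | literal markers: {{{{QUERY}}}} and [query]"
```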
Logs & Screenshots
Below is actual output from the custom API proxy used during testing. It shows the unexpected substitution of the query placeholder in the context's original content, not respecting the intended escaping.
LLM Chat Query:
Additional Information
Related code responsible for the faulty escaping logic:
e0d5de1697/backend/open_webui/utils/task.py (L207-L224)

I will create a PR to attempt to resolve this issue.
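For reference, one way to make the escaping safe is to protect the template's own query placeholders with a unique sentinel before the context is injected, and then substitute the query only at the sentinel positions. A minimal sketch of that idea, assuming this general approach; the function name is hypothetical and this is not the actual task.py code or the PR's fix:

```python
import uuid


def format_rag_template(template: str, context: str, query: str) -> str:
    # Replace the template's own query placeholders with a sentinel that
    # cannot occur in the context, *before* the context is injected.
    sentinel = f"__QUERY_{uuid.uuid4()}__"
    template = template.replace("{{QUERY}}", sentinel).replace("[query]", sentinel)

    # Inject the context; any "{{QUERY}}" or "[query]" it contains is left
    # intact, because the final substitution only targets the sentinel.
    template = template.replace("{{CONTEXT}}", context).replace("[context]", context)

    return template.replace(sentinel, query)
```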
Note: The context duplication issue you may have noticed in the LLM Chat Query user message is related to issue #19098 and is not addressed in the current issue.
@tjbck commented on GitHub (Nov 11, 2025):
Good catch, should be addressed in dev, testing wanted here.