mirror of
https://github.com/open-webui/open-webui.git
synced 2026-05-06 10:58:17 -05:00
[GH-ISSUE #22505] issue: Multiple system messages break chat template parsing with Qwen3.5 GGUF models #58391
Originally created by @GlisseManTV on GitHub (Mar 9, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/22505
Check Existing Issues
Installation Method
Docker
Open WebUI Version
0.8.10
Ollama Version (if applicable)
0.17.7 and llama.cpp b8249 CUDA
Operating System
Truenas / Debian
Browser (if applicable)
No response
Confirmation
Expected Behavior
When a User System Prompt is configured in Open WebUI and file/RAG system context injection is enabled, Open WebUI should ensure that the generated request sent to the backend contains a valid message structure.
Specifically, the system should either:
Merge multiple system prompts into a single system message, or
Ensure that the generated chat message structure remains compatible with models that enforce strict chat templates.
The request sent to the backend should look like this:
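The original example payload was not preserved in this mirror. As a hedged sketch (model name and message contents are illustrative placeholders, not actual Open WebUI output), a valid request would carry exactly one system message, in first position:

```python
# Sketch of the expected payload: the user system prompt and the injected
# RAG/file context are merged into a single leading system message.
# All string values here are illustrative assumptions.
expected_request = {
    "model": "qwen3.5",  # placeholder model name
    "messages": [
        {
            "role": "system",
            "content": (
                "You are a helpful assistant.\n\n"
                "<context>...retrieved file content...</context>"
            ),
        },
        {"role": "user", "content": "Summarize the attached file."},
    ],
}

# A strict chat template accepts at most one system message, at index 0.
system_indices = [
    i for i, m in enumerate(expected_request["messages"]) if m["role"] == "system"
]
assert system_indices == [0]
```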
This ensures compatibility with models that validate the chat structure, such as Qwen models with strict Jinja chat templates.
Actual Behavior
When both User System Prompt and System Context injection (for files/RAG) are enabled, Open WebUI generates multiple consecutive system messages at the beginning of the conversation.
Example request generated by Open WebUI:
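The original request body was not preserved in this mirror; a hedged reconstruction of the problematic shape described above (two consecutive system messages; all values are illustrative placeholders) looks like:

```python
# Sketch of the problematic payload: the user system prompt and the
# RAG/file context are sent as two separate system messages. Values
# are illustrative assumptions, not captured Open WebUI output.
actual_request = {
    "model": "qwen3.5",  # placeholder model name
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "system", "content": "<context>...retrieved file content...</context>"},
        {"role": "user", "content": "Summarize the attached file."},
    ],
}

# More than one system message is what strict Jinja templates reject.
system_count = sum(m["role"] == "system" for m in actual_request["messages"])
assert system_count > 1
```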
This structure causes models with strict chat templates (e.g. Qwen models) to fail during template parsing.
The backend returns the following error:
Unable to generate parser for this template.
Automatic parser generation failed:
Jinja Exception: System message must be at the beginning.
As a result:
Steps to Reproduce
Open WebUI will generate a request containing:
Example message structure:
The backend then returns a template parsing error.
Removing the User System Prompt immediately resolves the issue.
Logs & Screenshots
Additional Information
Environment:
Workaround:
Removing the User System Prompt prevents the issue.
Possible improvement:
Open WebUI could merge multiple system prompts before sending the request to the backend to avoid incompatibility with strict chat templates.
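The suggested merge could be sketched as follows (this is an illustration of the idea, not Open WebUI's actual implementation; the function name and separator are assumptions):

```python
# Hypothetical helper: collapse all leading system messages into one
# before the request is sent to the backend, so strict chat templates
# see a single system message at index 0.
def merge_leading_system_messages(messages, separator="\n\n"):
    system_parts = []
    rest = []
    rest_started = False
    for msg in messages:
        if msg["role"] == "system" and not rest_started:
            system_parts.append(msg["content"])
        else:
            rest_started = True
            rest.append(msg)
    if system_parts:
        rest.insert(0, {"role": "system", "content": separator.join(system_parts)})
    return rest

# Example: the two leading system messages are merged into one.
msgs = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "system", "content": "<context>file text</context>"},
    {"role": "user", "content": "Hi"},
]
fixed = merge_leading_system_messages(msgs)
assert [m["role"] for m in fixed] == ["system", "user"]
```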
@GlisseManTV commented on GitHub (Mar 9, 2026):
It also occurs with a file plus a chat system prompt, and likewise with a file plus a folder system prompt.
A user system prompt combined with a chat system prompt does not trigger the issue.
@tjbck commented on GitHub (Mar 24, 2026):
Addressed in dev.
@moritzderallerechte commented on GitHub (Apr 9, 2026):
May I ask why an <attached_files>....</attached_files> part is added to the user prompt in the first place?
The files are already added to the content field of the user message, so why not put the filename and other metadata there?
The url field can also easily be misunderstood by some LLM providers, as I have learned, since there is no way to actually use that URL to retrieve file content when doing cloud inference.
I had models try to resolve that URL, which resulted in internal server errors.
Could you add a way to customize that prompt injection? I really don't like this unconfigurable, hardcoded style @tjbck
@dylanmorris1231ho-spec commented on GitHub (Apr 9, 2026):
That's a fair point.
The <attached_files> block was introduced to give models a consistent structure for attachments, but you're right that in some setups it can be redundant, since the file content is already included in the user message.
The issue with models trying to resolve the url field is also a good observation. Making this formatting configurable instead of hardcoded would probably be the better approach.
We'll consider adding a way to customize the attachment prompt formatting.