[GH-ISSUE #15876] issue: Adding system prompt to a custom model based on Google Gen AI API models results in 400 Error #33229

Closed
opened 2026-04-25 07:07:51 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @T-A-GIT on GitHub (Jul 19, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/15876

Check Existing Issues

  • I have searched the existing issues and discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

v0.6.17

Ollama Version (if applicable)

No response

Operating System

Ubuntu 22.04

Browser (if applicable)

No response

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
      • Start with the initial platform/version/OS and dependencies used,
      • Specify exact install/launch/configure commands,
      • List URLs visited, user input (incl. example values/emails/passwords if needed),
      • Describe all options and toggles enabled or changed,
      • Include any files or environmental changes,
      • Identify the expected and actual result at each stage,
      • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

Using the latest version of Open WebUI (v0.6.17) via a Docker install on Ubuntu 22.04.

I am trying to create a custom model for my kids, based on Google models, with a system prompt. Expected behavior: the model works in line with the system prompt restrictions.

Actual Behavior

When trying to use the custom model, it results in a 400 Server Connection Error.

Steps to Reproduce

Reproduce / Troubleshooting steps:

The base models work fine when I use them directly.
Also, if I include no system prompt on the custom models, they work fine.
But if I add a system prompt to the custom models to restrict the output, it fails with the 400: Open WebUI: Server Connection Error. (A minimal external reproduction sketch follows below.)
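For context, here is a minimal sketch that reproduces the upstream 400 outside Open WebUI, against Google's OpenAI-compatible endpoint (the one shown in the traceback below). The Gemma model id gemma-3-27b-it and the GOOGLE_API_KEY environment variable are my assumptions, not taken from this report:

import os
import requests

# Google's OpenAI-compatible chat completions endpoint (from the traceback below).
URL = "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions"

payload = {
    "model": "gemma-3-27b-it",  # assumed Gemma model id; base Gemini models accept system prompts
    "messages": [
        # The system prompt injected by the Open WebUI custom model.
        {"role": "system", "content": "8 year old assistant model"},
        {"role": "user", "content": "Hello"},
    ],
    "stream": False,  # non-streaming keeps the repro simple; the logs show stream=True
}

resp = requests.post(
    URL,
    headers={"Authorization": f"Bearer {os.environ['GOOGLE_API_KEY']}"},
    json=payload,
)
print(resp.status_code, resp.text)  # expected: 400 when the model rejects the 'system' role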

Logs & Screenshots

'stream': True, 'model': 'custom-1', 'messages': [{'role': 'system', 'content': '8 year old assistant model'}, {'role': 'use...
│ └ <starlette.requests.Request object at 0x72077c607310>
└ <function generate_chat_completion at 0x7207cc3f4400>

File "/app/backend/open_webui/utils/chat.py", line 278, in generate_chat_completion
return await generate_openai_chat_completion(
└ <function generate_chat_completion at 0x7207cc34d9e0>

File "/app/backend/open_webui/routers/openai.py", line 884, in generate_chat_completion
r.raise_for_status()
│ └ <function ClientResponse.raise_for_status at 0x72080dfc5760>
└ <ClientResponse(https://generativelanguage.googleapis.com/v1beta/openai/chat/completions) [400 Bad Request]>

Additional Information

No response

GiteaMirror added the bug label 2026-04-25 07:07:51 -05:00
Author
Owner

@T-A-GIT commented on GitHub (Jul 19, 2025):

Closing, as the issue is on Google's API side: system prompts are not supported for Gemma models.
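For anyone hitting the same error: since the upstream API rejects the 'system' role for Gemma models, one possible client-side workaround is to fold the system prompt into the first user message before the request is sent. A minimal sketch (the helper name is hypothetical, not an Open WebUI API):

def fold_system_prompt(messages):
    """Merge a leading 'system' message into the first 'user' message.

    Sketch of a workaround for models (e.g. Gemma via Google's
    OpenAI-compatible API) that reject the 'system' role with a 400.
    """
    if not messages or messages[0]["role"] != "system":
        return messages  # nothing to fold
    system, rest = messages[0], list(messages[1:])
    for msg in rest:
        if msg["role"] == "user":
            # Prepend the system instructions to the first user turn.
            msg["content"] = f"{system['content']}\n\n{msg['content']}"
            break
    return rest

# The payload from the logs above, folded:
messages = [
    {"role": "system", "content": "8 year old assistant model"},
    {"role": "user", "content": "Hello"},
]
print(fold_system_prompt(messages))
# [{'role': 'user', 'content': '8 year old assistant model\n\nHello'}]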

Reference: github-starred/open-webui#33229