[GH-ISSUE #2492] litellm reports error code 400 when sent a zero-content-length request #12901
Originally created by @hiddenblue on GitHub (May 22, 2024).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/2492
Bug Report
The LiteLLM proxy bundled in the Open WebUI Docker image is used to gain compatibility with LLMs other than ChatGPT, especially OpenAI-compatible models such as Qwen, ChatGLM, and so on.
However, Open WebUI occasionally sends a zero-content-length request to the LiteLLM proxy server, which relays it through the OpenAI-compatible API. Such an empty request is rejected by the upstream provider.
As a result, LiteLLM occasionally reports Error code: 400.
Description
Bug Summary:
When chatting with OpenAI-compatible models served through the bundled LiteLLM proxy, Open WebUI occasionally sends a zero-content-length request, and LiteLLM returns Error code: 400.
Steps to Reproduce:
See the Reproduction Details section below.
Expected Behavior:
Messages sent to OpenAI-compatible models through the LiteLLM proxy are relayed and answered normally.
Actual Behavior:
LiteLLM occasionally forwards a zero-content-length request, the upstream provider rejects it, and the chat fails with Error code: 400.
Environment
Open WebUI Version: v0.1.125
Ollama (if applicable): [e.g., 0.1.30, 0.1.32-rc1]
Operating System: Ubuntu 24.04 (KDE)
Browser (if applicable): Edge 124.0.2478.97, Firefox
Reproduction Details
1. Add some OpenAI-compatible LLMs (e.g., Qwen, ChatGLM) through the LiteLLM config file.
2. **Use the "openai/" prefix in the "model" argument to get compatibility**, because LiteLLM does not support Qwen or ChatGLM natively; these providers expose an OpenAI-compatible API instead. Refer to https://litellm.vercel.app/docs/providers/openai_compatible (see the sketch after this list).
3. Send some messages to these models via Open WebUI.
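For illustration, here is a minimal sketch of the same "openai/" compatibility path invoked through the LiteLLM Python SDK rather than the proxy config file. The model name, endpoint, and key are placeholders, not values from the original report:

```python
import litellm

# Placeholders -- substitute the OpenAI-compatible endpoint and key of your
# Qwen/ChatGLM deployment. The "openai/" prefix routes the call through
# LiteLLM's generic OpenAI-compatible provider.
response = litellm.completion(
    model="openai/qwen-turbo",                         # hypothetical model name
    api_base="https://your-endpoint.example.com/v1",   # hypothetical endpoint
    api_key="sk-...",                                   # placeholder key
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```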
Confirmation:
Logs and Screenshots
Browser Console Logs:
[Include relevant browser console logs, if applicable]
Docker Container Logs:

[Include relevant Docker container logs, if applicable]
Screenshots (if applicable):
[Attach any relevant screenshots to help illustrate the issue]
Installation Method
Installed via Docker (the Open WebUI Docker image).
Additional Information
I copied the LiteLLM request information from the Docker logs and tried to send the same request to my remote LLM API manually, and I can reproduce the problem that way.
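For reference, a minimal sketch of such a manual replay, assuming the captured request was an HTTP POST with an empty body; the endpoint and key below are placeholders, not the values from the logs:

```python
import requests

API_BASE = "https://your-endpoint.example.com/v1"  # placeholder endpoint
API_KEY = "sk-..."                                  # placeholder key

# Replay the captured call with a zero-length body (Content-Length: 0),
# which an OpenAI-compatible backend rejects.
resp = requests.post(
    f"{API_BASE}/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    data="",  # empty request body
)
print(resp.status_code)  # 400, as described in the report
print(resp.text)         # provider's error message
```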
Note
If the bug report is incomplete or does not follow the provided instructions, it may not be addressed. Please ensure that you have followed the steps outlined in the README.md and troubleshooting.md documents, and provide all necessary information for us to reproduce and address the issue. Thank you!