Mirror of https://github.com/open-webui/open-webui.git (synced 2026-05-06 10:58:17 -05:00)
[GH-ISSUE #5469] LiteLLM "Budget has been exceeded!" error is translated to "Bad Request" #52657
Originally created by @vogtp on GitHub (Sep 17, 2024).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/5469
Bug Report
Installation Method
docker
Environment
Open WebUI Version: v0.3.21
LiteLLM Version: 1.46.1
Operating System: Ubuntu 24.04.1 LTS
Browser (if applicable): Google Chrome 128.0.6613.84
Confirmation:
Expected Behavior:
Display the HTTP error to the user:
{"error":{"message":"Budget has been exceeded! Current cost: 0.3759, Max budget: 0.35","type":"budget_exceeded","param":null,"code":"400"}}
The content of error.message would give a nice error message:
Uh-oh! There was an issue connecting to GPT-4.
Budget has been exceeded! Current cost: 0.3759, Max budget: 0.35
Actual Behavior:
The user gets a generic error message:
Uh-oh! There was an issue connecting to GPT-4.
External: 400, message='Bad Request', url='http://host.docker.internal:4444/chat/completions'
Description
Bug Summary:
Open WebUI does not pass on LiteLLM's error messages, which confuses the user.
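The desired behavior can be sketched as a small helper that prefers the upstream `error.message` when the proxy returns a JSON error body, falling back to the generic text otherwise. This is an illustrative sketch, not Open WebUI's actual code; the function name and fallback string are assumptions:

```python
import json

def extract_error_detail(body: str, fallback: str) -> str:
    """Return the upstream error message if the body is a JSON error
    payload shaped like {"error": {"message": ...}}; otherwise return
    the fallback (e.g. the bare HTTP reason phrase)."""
    try:
        payload = json.loads(body)
    except (json.JSONDecodeError, TypeError):
        return fallback
    if isinstance(payload, dict):
        error = payload.get("error")
        if isinstance(error, dict) and error.get("message"):
            return error["message"]
    return fallback

# The LiteLLM body from this report:
body = ('{"error":{"message":"Budget has been exceeded! '
        'Current cost: 0.3759, Max budget: 0.35",'
        '"type":"budget_exceeded","param":null,"code":"400"}}')
print(extract_error_detail(body, "Bad Request"))
```

With the body above this yields the "Budget has been exceeded!" message instead of "Bad Request", which is exactly the difference between the Expected and Actual behavior sections.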
Reproduction Details
Steps to Reproduce:
Logs and Screenshots
Browser Console Logs:
{
"detail": "External: 400, message='Bad Request', url='http://host.docker.internal:4444/chat/completions'"
}
(anonymous) @ Chat.svelte:743
await in (anonymous)
$e @ Chat.svelte:688
await in $e
rt @ Chat.svelte:604
await in rt
_t @ MessageInput.svelte:547
Docker Container Logs:
INFO: 10.3.2.161:0 - "GET /static/favicon.png HTTP/1.1" 200 OK
ERROR [open_webui.apps.openai.main] 400, message='Bad Request', url='http://host.docker.internal:4444/chat/completions'
Traceback (most recent call last):
File "/app/backend/open_webui/apps/openai/main.py", line 438, in generate_chat_completion
r.raise_for_status()
File "/usr/local/lib/python3.11/site-packages/aiohttp/client_reqrep.py", line 1093, in raise_for_status
raise ClientResponseError(
aiohttp.client_exceptions.ClientResponseError: 400, message='Bad Request', url='http://host.docker.internal:4444/chat/completions'
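The traceback suggests why the detail is lost: `aiohttp`'s `raise_for_status()` raises a `ClientResponseError` built from the status line alone, without the (still unread) response body. A hedged sketch of the pattern that avoids this, using a stand-in response object rather than real `aiohttp` (class and function names are illustrative):

```python
import asyncio

# Minimal stand-in for an aiohttp-style response (a sketch only;
# the attribute names merely mirror aiohttp.ClientResponse).
class FakeResponse:
    def __init__(self, status: int, body: str):
        self.status = status
        self._body = body

    async def text(self) -> str:
        return self._body

async def error_detail(r) -> str:
    """Read the body *before* treating the request as failed, so the
    upstream message (e.g. LiteLLM's budget error) is preserved
    instead of only the bare status line."""
    body = await r.text()
    if r.status >= 400:
        return f"External: {r.status}, {body}"
    return body

resp = FakeResponse(400, '{"error":{"message":"Budget has been exceeded!"}}')
print(asyncio.run(error_detail(resp)))
```

The key design point is ordering: consuming the body first means the error surfaced to the client can include the upstream JSON rather than just `message='Bad Request'`.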
INFO: 10.3.2.161:0 - "POST /api/chat/completions HTTP/1.1" 400 Bad Request
INFO: 10.3.2.161:0 - "POST /api/v1/chats/b350b35f-d8ba-4c2c-afa4-9300cbbd6dd5 HTTP/1.1" 200 OK
INFO: 10.3.2.161:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
INFO [open_webui.apps.ollama.main] url: http://llama-1.its.unibas.ch:11434
generate_title
llama3.1:latest
INFO: 10.3.2.161:0 - "POST /api/task/title/completions HTTP/1.1" 200 OK
INFO: 10.3.2.161:0 - "POST /api/v1/chats/b350b35f-d8ba-4c2c-afa4-9300cbbd6dd5 HTTP/1.1" 200 OK
INFO: 10.3.2.161:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
INFO: 10.3.2.161:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
Screenshots/Screen Recordings (if applicable):
[Attach any relevant screenshots to help illustrate the issue]
Additional Information
[Include any additional details that may help in understanding and reproducing the issue. This could include specific configurations, error messages, or anything else relevant to the bug.]
Note
If the bug report is incomplete or does not follow the provided instructions, it may not be addressed. Please ensure that you have followed the steps outlined in the README.md and troubleshooting.md documents, and provide all necessary information for us to reproduce and address the issue. Thank you!
@GrayXu commented on GitHub (Sep 17, 2024):
To add: Azure OpenAI also reports a bad request when its content filter is triggered, which can likewise mislead users.
@tjbck commented on GitHub (Sep 19, 2024):
Should be fixed on dev, let me know if the issue persists!