[GH-ISSUE #16459] Unhelpful error when OpenAI balance is depleted: “argument of type ‘JSONResponse’ is not iterable” instead of quota-exceeded message (v0.6.21) #17912
Originally created by @andrsksr on GitHub (Aug 10, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/16459
Check Existing Issues
Installation Method
Docker
Open WebUI Version
0.6.21
Ollama Version (if applicable)
No response
Operating System
Koyeb cloud platform (Docker container, Linux-based host)
Browser (if applicable)
Version 138.0.7204.184
Confirmation
Expected Behavior
When my OpenAI platform balance is exhausted, I should receive a clear, direct error message such as “You exceeded your current quota, please check your plan and billing details,” allowing me to instantly diagnose the real problem.
Actual Behavior
Instead, Open WebUI v0.6.21 shows only: “argument of type ‘JSONResponse’ is not iterable”
This message gives no clue that the real problem is a depleted OpenAI balance.
My local Open WebUI version (v0.6.18) previously displayed the correct and actionable quota error message.
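The wording of the error points at a plain Python TypeError raised by a membership test on a Starlette/FastAPI JSONResponse object. A minimal sketch of that suspected failure mode follows; this is an assumption for illustration, not the confirmed Open WebUI code path:

```python
# Minimal sketch of the suspected failure mode (assumption, not the actual
# Open WebUI code): the upstream OpenAI error is wrapped in a JSONResponse,
# and later code membership-tests it as if it were still a plain dict.
from fastapi.responses import JSONResponse

upstream_error = JSONResponse(
    status_code=429,
    content={
        "error": {
            "message": "You exceeded your current quota, "
                       "please check your plan and billing details.",
            "type": "insufficient_quota",
        }
    },
)

try:
    # Works on a dict, but JSONResponse defines no __contains__/__iter__:
    if "error" in upstream_error:
        print("quota error detected")
except TypeError as exc:
    print(exc)  # argument of type 'JSONResponse' is not iterable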
Steps to Reproduce
1. Configure Open WebUI with an OpenAI API key whose platform balance is exhausted.
2. Send any chat message, which issues a POST to /api/chat/completions.
3. The request fails with HTTP 400 and the UI shows only: “argument of type ‘JSONResponse’ is not iterable” (a reproduction sketch follows below).
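The failure can also be triggered directly against the API. A hedged sketch, where the URL, API key, and model name are placeholders:

```python
# Hedged reproduction sketch: URL, API key, and model name are placeholders.
import requests

resp = requests.post(
    "http://localhost:3000/api/chat/completions",
    headers={"Authorization": "Bearer <open-webui-api-key>"},
    json={"model": "gpt-4o", "messages": [{"role": "user", "content": "ping"}]},
)
print(resp.status_code)  # 400 on v0.6.21 with a depleted OpenAI balance
print(resp.text)         # per this report, surfaces the TypeError text, not the quota message
```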
Logs & Screenshots
Docker-Log:
2025-08-10 20:29:41.206 | INFO | open_webui.routers.openai:get_all_models:397 - get_all_models()
2025-08-10 20:29:42.156 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 109.202.219.141:0 - "POST /api/chat/completions HTTP/1.1" 400
2025-08-10 20:29:42.226 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 109.202.219.141:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200
Additional Information
My local Open WebUI version displayed a message along the lines of “You exceeded your current quota…” for the same API key and scenario, confirming that error parsing in v0.6.21 is broken.
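If the failure mode sketched above is right, a guard along these lines would restore the upstream message. This is a hedged sketch under that assumption; extract_error_detail is a hypothetical helper, not the actual patch:

```python
# Hedged sketch, assuming the regression is a dict-style check applied to a
# JSONResponse; extract_error_detail is a hypothetical helper, not the real fix.
import json
from typing import Optional
from fastapi.responses import JSONResponse

def extract_error_detail(res) -> Optional[str]:
    """Return the upstream error message whether res is a dict or a JSONResponse."""
    if isinstance(res, JSONResponse):
        res = json.loads(res.body)  # body holds the rendered JSON bytes
    if isinstance(res, dict) and "error" in res:
        err = res["error"]
        return err.get("message") if isinstance(err, dict) else str(err)
    return None
```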
@zicochaos commented on GitHub (Aug 11, 2025):
Got this error only with new models. Balance is ok. From LiteLLM logs
Type:BadRequestError
Message:litellm.BadRequestError: OpenAIException - Your organization must be verified to stream this model. Please go to: https://platform.openai.com/settings/organization/general and click on Verify Organization. If you just verified, it can take up to 15 minutes for access to propagate.. Received Model Group=gpt-5-mini
Available Model Group Fallbacks=None LiteLLM Retried: 1 times, LiteLLM Max Retries: 2
@tjbck commented on GitHub (Aug 11, 2025):
Should be addressed in dev; testing wanted here!