[GH-ISSUE #16459] Unhelpful error when OpenAI balance is depleted: “argument of type ‘JSONResponse’ is not iterable” instead of quota-exceeded message (v0.6.21) #17912

Closed
opened 2026-04-19 23:48:10 -05:00 by GiteaMirror · 2 comments

Originally created by @andrsksr on GitHub (Aug 10, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/16459

Check Existing Issues

  • I have searched the existing issues and discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

0.6.21

Ollama Version (if applicable)

No response

Operating System

Koyeb cloud platform (Docker container, Linux-based host)

Browser (if applicable)

Version 138.0.7204.184

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

When my OpenAI platform balance is exhausted, I should receive a clear, direct error message such as “You exceeded your current quota, please check your plan and billing details,” allowing me to instantly diagnose the real problem.
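For context, when quota is exhausted the OpenAI API responds with HTTP 429 and a JSON body whose `error.message` field carries exactly this text. A minimal sketch of defensively extracting that message (the helper name `extract_error_message` is hypothetical, not Open WebUI's actual code):

```python
import json

def extract_error_message(raw, default="Unknown upstream error"):
    # Hypothetical helper: pull a human-readable message out of an
    # OpenAI-style error payload without assuming the input is valid JSON
    # or that the parsed value is a dict.
    try:
        data = json.loads(raw)
    except (ValueError, TypeError):
        return default
    err = data.get("error") if isinstance(data, dict) else None
    if isinstance(err, dict):
        return err.get("message") or default
    return str(err) if err else default

# Shape of the body OpenAI returns with HTTP 429 on a depleted balance:
payload = json.dumps({
    "error": {
        "message": "You exceeded your current quota, please check your plan and billing details.",
        "type": "insufficient_quota",
        "code": "insufficient_quota",
    }
})
print(extract_error_message(payload))
```

Surfacing `error.message` verbatim is what v0.6.18 effectively did from the user's perspective.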

Actual Behavior

Instead, Open WebUI v0.6.21 shows only: “argument of type ‘JSONResponse’ is not iterable”
This message gives no clue that the real problem is a depleted OpenAI balance.
My local Open WebUI version (v0.6.18) previously displayed the correct and actionable quota error message.
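The reported message is a Python `TypeError`, which suggests the backend is running a membership test (e.g. `"error" in response`) against a Starlette/FastAPI `JSONResponse` object rather than against its decoded body. A minimal repro of that error class, using a stand-in class (attribute names are illustrative; the real `JSONResponse` likewise implements neither `__contains__` nor `__iter__`):

```python
class JSONResponse:
    # Minimal stand-in for starlette.responses.JSONResponse; like the real
    # class, it does not support the "in" operator.
    def __init__(self, content, status_code=200):
        self.status_code = status_code
        self.body = content

resp = JSONResponse(
    {"error": {"message": "You exceeded your current quota"}},
    status_code=429,
)

try:
    # Membership test on the response object itself instead of its body --
    # the kind of check that produces the error the frontend surfaces.
    "error" in resp
except TypeError as exc:
    print(exc)  # argument of type 'JSONResponse' is not iterable
```

Checking `resp.body` (or the parsed dict) instead of `resp` avoids the `TypeError`.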

Steps to Reproduce

  1. Deploy Open WebUI v0.6.21 via Docker on Koyeb.
  2. Open browser (Chrome 126.0 desktop), navigate to the service URL, and set up a new user account as prompted.
  3. Enter a valid OpenAI API key under Connection settings.
  4. Ensure your OpenAI account has zero balance or quota.
  5. Attempt to send any message using chat interface.
  6. Observe that the message fails and the frontend displays:
    “argument of type ‘JSONResponse’ is not iterable”
  7. Check Docker container logs; you will see an HTTP 400 on POST to /api/chat/completions:

Logs & Screenshots

Docker-Log:

2025-08-10 20:29:41.206 | INFO | open_webui.routers.openai:get_all_models:397 - get_all_models()
2025-08-10 20:29:42.156 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 109.202.219.141:0 - "POST /api/chat/completions HTTP/1.1" 400
2025-08-10 20:29:42.226 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 109.202.219.141:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200

Additional Information

My local Open WebUI installation (v0.6.18) displayed a “You exceeded your current quota…” message for the same API key and scenario, confirming that the upstream-error handling in v0.6.21 is broken.

GiteaMirror added the bug label 2026-04-19 23:48:10 -05:00

@zicochaos commented on GitHub (Aug 11, 2025):

Got this error only with new models; balance is OK. From the LiteLLM logs:

Type:BadRequestError
Message:litellm.BadRequestError: OpenAIException - Your organization must be verified to stream this model. Please go to: https://platform.openai.com/settings/organization/general and click on Verify Organization. If you just verified, it can take up to 15 minutes for access to propagate.. Received Model Group=gpt-5-mini
Available Model Group Fallbacks=None LiteLLM Retried: 1 times, LiteLLM Max Retries: 2
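The upstream message above gates streaming specifically, not the model itself. As a hedged workaround sketch (request shape per the OpenAI-compatible chat completions API; not something the source confirms Open WebUI exposes directly), sending the same request with streaming disabled may avoid the verification requirement until the organization is verified:

```json
{
  "model": "gpt-5-mini",
  "stream": false,
  "messages": [{"role": "user", "content": "hello"}]
}
```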


@tjbck commented on GitHub (Aug 11, 2025):

Should be addressed in dev, testing wanted here!

Reference: github-starred/open-webui#17912