[GH-ISSUE #22579] issue: Litellm token usage stats not requested/consumed by default #35284

Closed
opened 2026-04-25 09:30:50 -05:00 by GiteaMirror · 12 comments
Owner

Originally created by @jndao on GitHub (Mar 11, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/22579

Check Existing Issues

  • I have searched for any existing and/or related issues.
  • I have searched for any existing and/or related discussions.
  • I have also searched in the CLOSED issues AND CLOSED discussions and found no related items (your issue might already be addressed on the development branch!).
  • I am using the latest version of Open WebUI.

Installation Method

Git Clone

Open WebUI Version

v0.8.10

Ollama Version (if applicable)

No response

Operating System

Windows 11

Browser (if applicable)

No response

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

Token usage should be parsed from LiteLLM (OpenAI-compatible) responses and surfaced in the response metadata.

Actual Behavior

Token usage is consistently at 0.
No usage data is consumed from the response or retrieved from LiteLLM. LiteLLM provides this data by default if stream_options={"include_usage": True} is provided in the completions request. There is no option to provide this or any other additional headers if required (could be a feature?).

See also: https://docs.litellm.ai/docs/completion/usage

This may be a candidate option to be provided by default.
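For context, the streaming behavior described above can be sketched as follows. This is a minimal, hypothetical illustration (not Open WebUI's actual code): the chunk shapes follow the OpenAI streaming format, where opting in via stream_options={"include_usage": True} makes the server append one trailing chunk whose "usage" field is populated, and a consumer that never receives that chunk reports 0 tokens.

```python
# Hypothetical sketch of consuming an OpenAI-compatible stream.
# With stream_options={"include_usage": True}, the server appends one final
# chunk with an empty "choices" list and a populated "usage" object.

def extract_usage(chunks):
    """Return the usage dict from a stream of chunk dicts, or None if absent."""
    usage = None
    for chunk in chunks:
        if chunk.get("usage"):  # only the trailing chunk carries usage
            usage = chunk["usage"]
    return usage

# Simulated stream WITHOUT include_usage: no chunk carries a usage object,
# so a consumer that only reads the stream reports 0 tokens.
stream_without = [
    {"choices": [{"delta": {"content": "Hel"}}], "usage": None},
    {"choices": [{"delta": {"content": "lo"}}], "usage": None},
]

# Simulated stream WITH include_usage: one extra trailing chunk is sent.
stream_with = stream_without + [
    {"choices": [],
     "usage": {"prompt_tokens": 12, "completion_tokens": 2, "total_tokens": 14}},
]

print(extract_usage(stream_without))  # None -> shows up as 0 in the UI
print(extract_usage(stream_with))     # {'prompt_tokens': 12, ...}
```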

Steps to Reproduce

No request parses token usage.

  1. Use a LiteLLM backend for OpenAI API calls.
  2. Perform a call to any model supported in LiteLLM.
  3. Observe 0 token usage.

Logs & Screenshots

See metadata

{
  "usage_object": {
    "total_tokens": 999,
    "prompt_tokens": 999,
    "completion_tokens": 999,
    "prompt_tokens_details": null,
    "completion_tokens_details": {
      "text_tokens": 999,
      "audio_tokens": null,
      "image_tokens": null,
      "reasoning_tokens": 999,
      "accepted_prediction_tokens": null,
      "rejected_prediction_tokens": null
    }
  }
  // Any additional details
}
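As a sketch of how a usage object shaped like the metadata above could be collapsed into headline counts (summarize_usage and the example values below are hypothetical, not Open WebUI's implementation):

```python
def summarize_usage(usage):
    """Collapse a LiteLLM-style usage object into headline token counts.
    Field names mirror the metadata shown above; nested detail fields
    may be null, so they are defaulted defensively."""
    details = usage.get("completion_tokens_details") or {}
    return {
        "prompt": usage.get("prompt_tokens", 0),
        "completion": usage.get("completion_tokens", 0),
        "reasoning": details.get("reasoning_tokens") or 0,
        "total": usage.get("total_tokens", 0),
    }

# Illustrative values (the 999s above are placeholders).
example = {
    "total_tokens": 150,
    "prompt_tokens": 100,
    "completion_tokens": 50,
    "prompt_tokens_details": None,
    "completion_tokens_details": {"text_tokens": 30, "reasoning_tokens": 20},
}
print(summarize_usage(example))
```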

Additional Information

A workaround could be to force-enable usage metrics via LiteLLM: https://docs.litellm.ai/docs/completion/usage#proxy-always-include-streaming-usage

However, this setting appears to be missing from the UI as of LiteLLM v1.82.0.

See: https://github.com/BerriAI/litellm/issues/23343

The workaround is to update the LiteLLM config file and add:

general_settings:
  always_include_stream_usage: true
GiteaMirror added the bug label 2026-04-25 09:30:50 -05:00
Author
Owner

@Classic298 commented on GitHub (Mar 11, 2026):

Yep can confirm this issue CC @tjbck

(the issue is actually that Open WebUI doesn't parse the usage block, hence token usage is always zero in analytics as well)

Author
Owner

@Ithanil commented on GitHub (Mar 11, 2026):

Yep can confirm this issue CC @tjbck

I always thought it was intentional to have this off by default, because if "Usage" is checked in the model settings, then it does get sent. I would like to have this on by default too, though.

Author
Owner

@Classic298 commented on GitHub (Mar 11, 2026):

@Ithanil not sure what you mean but if i read your comment correctly, then

admin panel > settings > models > top right settings > here you can turn on usage for ALL MODELS centrally (force enabled) for all models

Author
Owner

@Ithanil commented on GitHub (Mar 11, 2026):

@Ithanil not sure what you mean but if i read your comment correctly, then

admin panel > settings > models > top right settings > here you can turn on usage for ALL MODELS centrally (force enabled) for all models

Yeah, that's only possible since recently though.

And what I mean is that

There is no option to provide this or any other additional headers if required (could be a feature?).

is not true, but still I think the default should be the other way around. As the OP thinks as well:

This may be a candidate option to be provided by default.

Author
Owner

@Classic298 commented on GitHub (Mar 11, 2026):

Let me re-read this again... I am currently not understanding what you mean exactly.

Author
Owner

@Classic298 commented on GitHub (Mar 11, 2026):

AHA!
Shoutout to AI for translating ;)

You mean the Usage capability should be ON by default?

I discussed this with Tim at some point in the last weeks. The reasoning was to not make this a breaking change, because some providers don't support sending usage: turning it on by default will break a lot of existing deployments, whereas keeping it off by default will never hurt anyone @Ithanil

Author
Owner

@Ithanil commented on GitHub (Mar 11, 2026):

AHA! Shoutout to AI for translating ;)

You mean the Usage capability should be ON by default?

I discussed this with Tim at some point in the last weeks. The reasoning was to not make this a breaking change, because some providers don't support sending usage: turning it on by default will break a lot of existing deployments, whereas keeping it off by default will never hurt anyone @Ithanil

Didn't know there are providers that return errors if you send this. OK.

Author
Owner

@jndao commented on GitHub (Mar 11, 2026):

And what I mean is that

There is no option to provide this or any other additional headers if required (could be a feature?).

is not true

Yep. Additional headers can be added in the connection settings, which I missed. My bad!

Image

As configuring this relies on buried (& outdated?) litellm docs, this could be an unintentional UX failure mode. I'm not too opinionated on how this should be solved if at all.

Side note:
I did find an additional bug where the save button gets stuck in a loading state when JSON formatting fails.

Image Image
Author
Owner

@Ithanil commented on GitHub (Mar 11, 2026):

And what I mean is that
There is no option to provide this or any other additional headers if required (could be a feature?).
is not true

Yep. The header can be added in the connection setting which I missed. While testing it quickly,
Image

As configuring this relies on buried (& outdated?) litellm docs, this could be an unintentional failure mode. I'm not too opinionated on how this should be solved.

Side note: I did find an additional bug where the save button gets stuck in a loading state when JSON formatting fails.
Image Image

You do not need to add it as explicit header, but just click the "Usage" checkbox in the model settings:

Image
Author
Owner

@Ithanil commented on GitHub (Mar 11, 2026):

@Ithanil not sure what you mean but if i read your comment correctly, then

admin panel > settings > models > top right settings > here you can turn on usage for ALL MODELS centrally (force enabled) for all models

@jndao You might want to try that. ;-)

Author
Owner

@Classic298 commented on GitHub (Mar 11, 2026):

just enable usage for all models at once, no need to configure it on litellm side (unless of course you use litellm for other things also)

Author
Owner

@tjbck commented on GitHub (Mar 11, 2026):

Intended behaviour.

Reference: github-starred/open-webui#35284