Mirror of https://github.com/open-webui/open-webui.git (synced 2026-05-06 10:58:17 -05:00)
[GH-ISSUE #21675] enh: analytics #58217
Originally created by @tjbck on GitHub (Feb 20, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/21675
@KevinRossiTC commented on GitHub (Feb 27, 2026):
I would love this also. Perhaps with a dropdown for Both / UI / API.
@godlobster6 commented on GitHub (Mar 8, 2026):
Great direction. To make this actionable and contributor-friendly, I suggest locking a minimal v1 scope:
Suggested rollout:
If maintainers agree, I can draft a concrete PR plan (API contract + test cases + docs checklist) so contributors can pick this up with less ambiguity.
@zaakiy commented on GitHub (Mar 9, 2026):
Not a maintainer but I'm happy with this.
Would like to stress the urgency because we don't know which users are causing the most usage ($$$).
@zaakiy commented on GitHub (Mar 11, 2026):
Correction: Would like to stress the urgency because we don't know which ~~users~~ API consumers are causing the most usage ($$$).
@Classic298 commented on GitHub (Mar 11, 2026):
If you have the urgency, you can add a filter to track their usage for now.
@zaakiy commented on GitHub (Mar 11, 2026):
I don't see a way to do this @Classic298. Could you please elaborate?
I've already put the API user into a group
And I can confirm that the API user is definitely within this group even though no users are showing up in the Analytics view for this group
@Classic298 commented on GitHub (Mar 11, 2026):
@zaakiy a filter
https://docs.openwebui.com/features/extensibility/plugin/functions/filter
@zaakiy commented on GitHub (Mar 15, 2026):
That's too complex for me. Is it really that difficult to add API usage in the analytics dashboard?
In my opinion, this would have been the obvious thing to do in the first place.
@Classic298 commented on GitHub (Mar 15, 2026):
@zaakiy
If it's too complex for you, ask AI to do it. Give it the docs and tell it you want to track all requests in the inlet() filter.
(API-only requests don't reach `outlet()`, so you can only track the request itself.)
And yes, it is not trivial.
@SFL79 commented on GitHub (Mar 23, 2026):
Been following this issue for a while as it's an extremely important enterprise feature.
@tjbck Is this something the maintainers are planning to do in your roadmap?
Or, perhaps following @godlobster6 's suggestion, is there any agreed upon, pending plan for implementation, or are you looking for the community to go ahead and implement it how they see fit?
@zaakiy commented on GitHub (Mar 24, 2026):
@Classic298 I honestly would not know where to begin asking AI to do this.
@Classic298 commented on GitHub (Mar 24, 2026):
https://docs.openwebui.com/features/extensibility/plugin/functions/filter
@zaakiy commented on GitHub (Mar 24, 2026):
@Classic298 I don't see how giving me the same link as before is going to be of any benefit.
I'm not coming here as a contributor. Just because I'm following this issue on GitHub doesn't mean I have infinite time or infinite AI credits to work on something.
@smorello87 commented on GitHub (Apr 2, 2026):
Per #23323 — one specific use case for analytics enhancements: per-user and per-group usage limits.
The `chat_message` table already tracks token usage. Adding an enforcement layer (configurable token/message caps per user or group, with block-on-limit) would let institutions manage costs without external proxies.
@smorello87 commented on GitHub (Apr 3, 2026):
Following up on the per-user/group usage limits idea — we'd be interested in contributing a minimal PR for this if it aligns with your plans.
Thinking something scoped to the existing `chat_message` usage data.
Would you be open to a PR along these lines, and are there any design constraints we should follow? Happy to adapt to whatever direction you're taking in #21675.
@Classic298 commented on GitHub (Apr 3, 2026):
@smorello87 how would that work when token usage data is based on currently existing chats? that'd be trivially easy to bypass by just deleting chats - or just using the temporary chat mode.
@smorello87 commented on GitHub (Apr 3, 2026):
@Classic298 Good catch — you're right that `chat_message` is unsuitable for enforcement, since it's deleted with chats and never written in temporary mode.
A proper implementation would decouple usage accounting from chat persistence entirely. Here's a sketch:
Separate, append-only usage ledger: a new `usage_ledger` table with no foreign key to `chat`, no cascade delete, and no user-facing delete API.
Write path: the streaming response middleware (the `generate_chat_completion` pipeline) already extracts token counts from every model response. After the existing `ChatMessages.upsert_message()` dual-write, append a row to `usage_ledger` unconditionally, regardless of chat mode, before any user-controlled persistence logic runs.
Enforcement path: a pre-request check in the same middleware, before the model call. Compare against the user's group limit; if exceeded, return a 429 with a clear message, so the request never reaches the model.
Limit configuration: extend the existing group model with optional `token_limit` and `limit_period` (daily/monthly) fields. Admins set these in the existing group settings UI. Users inherit the limit from their highest-priority group (consistent with the current group permission model).
This keeps the scope small (one table, two middleware touchpoints, one group schema extension) while being immune to chat deletion and temp mode, since the ledger is written and checked independently of the chat lifecycle.
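The ledger-plus-check idea above can be sketched as follows. This is a standalone illustration, not existing Open WebUI code: the table layout, function names, and period handling are all hypothetical.

```python
# Sketch of an append-only usage ledger with a pre-request limit check.
# Table layout and function names are hypothetical, for illustration only.
import sqlite3
import time

PERIOD_SECONDS = {"daily": 86_400, "monthly": 30 * 86_400}  # simplification

def init_ledger(con: sqlite3.Connection) -> None:
    # Append-only: no foreign key to chat, no delete path exposed.
    con.execute(
        "CREATE TABLE IF NOT EXISTS usage_ledger ("
        "ts REAL NOT NULL, user_id TEXT NOT NULL, model TEXT NOT NULL, "
        "prompt_tokens INTEGER NOT NULL, completion_tokens INTEGER NOT NULL)"
    )

def record_usage(con, user_id, model, prompt_tokens, completion_tokens, ts=None):
    # Write path: called after every model response, regardless of chat mode.
    con.execute(
        "INSERT INTO usage_ledger VALUES (?, ?, ?, ?, ?)",
        (ts or time.time(), user_id, model, prompt_tokens, completion_tokens),
    )

def check_limit(con, user_id, token_limit, limit_period="daily", now=None):
    """Enforcement path: True if the user is under their group limit.

    The middleware would map False to an HTTP 429 before the model call.
    """
    now = now or time.time()
    cutoff = now - PERIOD_SECONDS[limit_period]
    (used,) = con.execute(
        "SELECT COALESCE(SUM(prompt_tokens + completion_tokens), 0) "
        "FROM usage_ledger WHERE user_id = ? AND ts >= ?",
        (user_id, cutoff),
    ).fetchone()
    return used < token_limit
```

Because `check_limit()` only reads the ledger, deleting chats or using temporary mode has no effect on enforcement, which is the point of the decoupling.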
@Classic298 commented on GitHub (Apr 3, 2026):
Yeah, and that is out of scope for the analytics feature; that would be rate limiting.
Please open an idea in Discussions.
btw: the same could probably be achieved with a filter and a SQLite database.
btw: even if you implement what you proposed, you'll run into other issues, like correctly tracking all requests, since not all requests in Open WebUI go through the same path.
@smorello87 commented on GitHub (Apr 3, 2026):
@Classic298 Since @tjbck directed us here from #23323, we'll wait for their input on whether this is the right place or better suited as a separate discussion.
@trevorhayes6561-maker commented on GitHub (Apr 3, 2026):
I agree that the usage ledger approach is the right way to handle enforcement independently of the chat lifecycle. Given the overlap with both analytics and rate limiting, let's move this specific implementation proposal to a new Discussion. This will allow us to finalize the schema and middleware logic without cluttering the main analytics issue.
@amirparsadd commented on GitHub (Apr 5, 2026):
Are input and output tokens alone enough? One model might be 100 USD and another might be 5.
Maybe a usage-cost-based limit would be better.
@smorello87 commented on GitHub (Apr 5, 2026):
@amirparsadd That's a valid point. In our case we primarily serve open-weight models where the cost difference between them is relatively small, so token-based limits are sufficient. Cost-based limits would add a pricing table that needs ongoing maintenance and updates — we actually have a script that pulls pricing from OpenRouter and Bedrock APIs, so it's doable, but it's extra complexity. Could be a follow-up enhancement rather than part of the initial scope.
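As a follow-up sketch, a cost-based variant would only need a per-model pricing map layered on the same token counts. The model names and prices below are made up; in practice the map would be refreshed from provider pricing APIs (e.g. OpenRouter), as mentioned above.

```python
# Hypothetical per-model pricing in USD per 1M tokens; values are
# illustrative only and would need ongoing maintenance in practice.
PRICING = {
    "expensive-model": {"input": 15.0, "output": 75.0},
    "cheap-model": {"input": 0.10, "output": 0.40},
}

def request_cost_usd(model: str, prompt_tokens: int, completion_tokens: int,
                     pricing: dict = PRICING) -> float:
    """Cost of one request; a cost-based limiter would sum this per period
    instead of summing raw tokens."""
    p = pricing[model]
    return (prompt_tokens * p["input"] + completion_tokens * p["output"]) / 1_000_000
```

The only change to the ledger design would be storing (or deriving) this cost per row and comparing the period sum against a dollar cap instead of a token cap.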
@jijunwu commented on GitHub (Apr 21, 2026):
+1. I'd like to see a per-user usage dashboard where each user can see their own token consumption and API history. A leaderboard would be great too!
@SamirMoustafa commented on GitHub (May 3, 2026):
@tjbck @Classic298, Quick ask: does the API_analytics_21675.md plan align with your intent for resolving this issue (UI vs API usage tracing)? Any scope or sequencing you’d want changed before I open a PR?