[GH-ISSUE #21347] bug: v0.8.0 Analytics: Inconsistent token tracking + PostgreSQL crash on per-model view #58115

Closed
opened 2026-05-05 22:21:54 -05:00 by GiteaMirror · 21 comments

Originally created by @smorello87 on GitHub (Feb 13, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/21347

Bug Description

Two related issues with the new Analytics feature in v0.8.0 when using PostgreSQL and OpenAI-compatible providers (OpenRouter).

Bug 1: Inconsistent token counts in streaming responses

Severity: High — analytics token data is unreliable

Token usage is recorded for some messages but not others, seemingly at random. The behavior appears timing-dependent rather than model- or user-specific.

Root cause: Race condition in the frontend streaming handler. OpenRouter (and other OpenAI-compatible providers) send usage data in the last SSE chunk before [DONE]. However, chatCompletedHandler sometimes fires before the usage chunk is processed, resulting in the message being saved with 0 tokens.

Evidence:

  • OpenRouter returns usage in every streaming response (verified via direct curl)
  • The frontend openAIStreamToIterator correctly parses parsedData.usage from SSE chunks
  • But token counts appear intermittently in the analytics dashboard — same model, same user, some messages have tokens, others don't
  • Non-streaming calls (title generation, follow-up suggestions, tag generation) reliably record usage because the full response body includes it
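
To illustrate the timing described above, here is a minimal sketch (not Open WebUI's actual handler) of parsing OpenAI-style SSE chunks where usage arrives only in the final data chunk before `[DONE]` — a handler that finalizes the message before consuming that chunk would save 0 tokens:

```python
import json

def parse_sse_stream(lines):
    """Parse OpenAI-style SSE lines. Providers like OpenRouter attach
    usage only to the last chunk before [DONE], so it must be captured
    before the message is finalized."""
    content, usage = [], None
    for line in lines:
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {})
            if delta.get("content"):
                content.append(delta["content"])
        # The usage field is absent on every chunk except the last one.
        if chunk.get("usage"):
            usage = chunk["usage"]
    return "".join(content), usage

stream = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    'data: {"choices": [], "usage": {"prompt_tokens": 12, "completion_tokens": 2}}',
    'data: [DONE]',
]
text, usage = parse_sse_stream(stream)
```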

Possibly related: #15850 ("missing tokens when streaming on fast inference providers")

Bug 2: Per-model analytics crashes on PostgreSQL

Severity: Medium — per-model analytics page returns HTTP 500

GET /api/v1/analytics/models/{model_id}/overview?days=30 returns 500.

Error:

sqlalchemy.exc.ProgrammingError: (psycopg2.errors.InvalidColumnReference) 
for SELECT DISTINCT, ORDER BY expressions must appear in select list

[SQL: SELECT DISTINCT chat_message.chat_id
FROM chat_message
WHERE chat_message.model_id = %(model_id_1)s ORDER BY chat_message.created_at DESC
 LIMIT %(param_1)s OFFSET %(param_2)s]

Root cause: get_chat_ids_by_model_id() in chat_messages.py:298 uses SELECT DISTINCT chat_id ... ORDER BY created_at DESC. PostgreSQL requires that ORDER BY columns appear in the SELECT list when using DISTINCT. SQLite does not have this restriction.

Fix: Either add created_at to the SELECT list and use a subquery, or use GROUP BY instead of DISTINCT.
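
The GROUP BY variant can be sketched as follows (toy schema, with sqlite3 used purely to demonstrate the query shape; the real code is SQLAlchemy against PostgreSQL, and ordering each chat by its newest message is an assumption about the intended semantics):

```python
import sqlite3

# Toy stand-in for the chat_message table described above.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE chat_message (chat_id TEXT, model_id TEXT, created_at INTEGER)"
)
conn.executemany(
    "INSERT INTO chat_message VALUES (?, ?, ?)",
    [("chat-a", "m1", 100), ("chat-a", "m1", 300), ("chat-b", "m1", 200)],
)

# GROUP BY gives one row per chat and makes the ordering well-defined
# (newest message per chat), avoiding PostgreSQL's restriction that
# ORDER BY expressions must appear in a SELECT DISTINCT list.
rows = conn.execute(
    """
    SELECT chat_id
    FROM chat_message
    WHERE model_id = ?
    GROUP BY chat_id
    ORDER BY MAX(created_at) DESC
    LIMIT ? OFFSET ?
    """,
    ("m1", 10, 0),
).fetchall()
chat_ids = [r[0] for r in rows]
```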

Environment

  • Open WebUI version: 0.8.0
  • Database: PostgreSQL 16.3 (Amazon RDS)
  • Deployment: AWS ECS Fargate (2 tasks)
  • API providers: OpenRouter, AWS Bedrock (via bedrock-access-gateway)
  • "Usage" capability: Enabled on affected models

Steps to Reproduce

Bug 1 (inconsistent tokens):

  1. Enable "Usage" capability on a model
  2. Send multiple chat messages using the same model
  3. Check Admin Panel → Analytics → Users or Tokens view
  4. Observe that some messages record token counts, others show 0

Bug 2 (PostgreSQL crash):

  1. Use PostgreSQL as the database backend
  2. Go to Admin Panel → Analytics
  3. Click on any model to view its detail/overview page
  4. Observe HTTP 500 error

Expected Behavior

  1. All streaming messages should reliably record token usage from the provider's response
  2. Per-model analytics should work on PostgreSQL

Additional Context

The main analytics dashboard endpoints (/analytics/summary, /analytics/users, /analytics/models, /analytics/tokens, /analytics/daily) all return 200 OK. Only the per-model overview endpoint crashes.

OpenRouter has deprecated stream_options.include_usage — usage is now always included in every streaming response regardless of that flag. So the data is available; it's just not being reliably captured.


@pr-validator-bot commented on GitHub (Feb 13, 2026):

⚠️ Missing Issue Title Prefix

@smorello87, your issue title is missing a prefix (e.g., bug:, feat:, docs:).

Please update your issue title to include one of the following prefixes:

  • bug: Bug report or error you've encountered
  • feat: Feature request or enhancement suggestion
  • docs: Documentation issue or improvement request
  • question: Question about usage or functionality
  • help: Request for help or support

Example: bug: Login fails when using special characters in password


@smorello87 commented on GitHub (Feb 13, 2026):

Update: Root Cause Identified for Bug 1 (Inconsistent Token Tracking)

After investigating the database directly, I found the actual root cause. It's not a race condition — it's a key name mismatch between what gets saved and what analytics queries read.

The Problem

  1. What gets saved: OpenRouter (and other OpenAI-compatible APIs) return usage as prompt_tokens / completion_tokens. This is what gets stored in the chat_message.usage JSON column.

  2. What analytics queries read: The analytics methods in chat_messages.py (get_token_usage_by_model, get_token_usage_by_user) query for input_tokens / output_tokens:

    input_tokens = cast(
        func.json_extract_path_text(ChatMessage.usage, "input_tokens"), Integer
    )
    
  3. The normalize_usage function (utils/response.py) is supposed to add input_tokens/output_tokens alongside prompt_tokens/completion_tokens, but it is inconsistently applied. In our production database: only 14 out of 107 messages had the normalized keys.

Evidence

Before fix:
  Total messages with usage:    107
  With input_tokens key:         14  (13%) ← what analytics could see
  With prompt_tokens key:        107 (100%) ← actual data

  Analytics showed:  7,809 total tokens
  Actual data:      60,613 total tokens  (87% hidden!)

Why normalize_usage Is Inconsistently Applied

Looking at middleware.py, the streaming handler does call normalize_usage() at line ~3405-3410:

raw_usage = data.get("usage", {}) or {}
if raw_usage:
    usage = normalize_usage(raw_usage)

But there appear to be multiple code paths where usage gets saved to the chat_message table, and not all go through normalize_usage. The ChatMessage.upsert_message method extracts usage directly from the message data dict without normalization:

usage = data.get("usage")
if not usage:
    info = data.get("info", {})
    usage = info.get("usage") if info else None

Suggested Fix

The simplest fix would be to normalize usage in ChatMessage.upsert_message() before saving, ensuring all code paths produce consistent keys:

def upsert_message(self, message_id, chat_id, user_id, data):
    usage = data.get("usage")
    if not usage:
        info = data.get("info", {})
        usage = info.get("usage") if info else None
    
    # Normalize: ensure input_tokens/output_tokens exist
    if usage:
        if "input_tokens" not in usage and "prompt_tokens" in usage:
            usage["input_tokens"] = usage["prompt_tokens"]
        if "output_tokens" not in usage and "completion_tokens" in usage:
            usage["output_tokens"] = usage["completion_tokens"]
    ...

Alternatively, the analytics queries could fall back to prompt_tokens/completion_tokens when input_tokens/output_tokens are not present.
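
A minimal sketch of that kind of key normalization in plain Python (the function name is illustrative, not the project's actual `normalize_usage`; the real code lives in `utils/response.py` and `chat_messages.py`):

```python
def normalize_usage_keys(usage):
    """Return a copy of a provider usage dict carrying both the
    OpenAI-style keys (prompt_tokens/completion_tokens) and the
    normalized keys (input_tokens/output_tokens), whichever
    format the provider supplied."""
    if not isinstance(usage, dict):
        return usage
    out = dict(usage)
    for src, dst in (
        ("prompt_tokens", "input_tokens"),
        ("completion_tokens", "output_tokens"),
        ("input_tokens", "prompt_tokens"),
        ("output_tokens", "completion_tokens"),
    ):
        # Only fill a key that is missing; never overwrite provider data.
        if src in out and dst not in out:
            out[dst] = int(out[src])
    return out

openrouter_usage = {"prompt_tokens": 57, "completion_tokens": 12}
normalized = normalize_usage_keys(openrouter_usage)
```

Applying this at the single write bottleneck would make both the analytics queries and any OpenAI-format consumers see consistent data.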

Workaround

We applied this SQL fix to normalize existing records:

UPDATE chat_message 
SET usage = usage::jsonb || jsonb_build_object(
    'input_tokens', (usage->>'prompt_tokens')::int,
    'output_tokens', (usage->>'completion_tokens')::int
)
WHERE usage->>'prompt_tokens' IS NOT NULL 
  AND usage->>'input_tokens' IS NULL;

This needs to be re-run periodically until the upstream fix is applied.

Environment

  • Open WebUI v0.8.0
  • PostgreSQL 16.3
  • API provider: OpenRouter (OpenAI-compatible format)

@Classic298 commented on GitHub (Feb 13, 2026):

Are you sure your chat message table database upgrade ran through completely and without issues - and you actually let it finish?


@Classic298 commented on GitHub (Feb 13, 2026):

Fixed by: https://github.com/open-webui/open-webui/pull/21351


@smorello87 commented on GitHub (Feb 13, 2026):

@Classic298 Thanks for looking into this and for the quick PostgreSQL fix in #21351!

To answer your question — yes, the migration completed successfully. The chat_message table is fully populated with all records and usage data. We verified this by querying the table directly:

  • 107 messages have usage JSON populated
  • All 107 contain prompt_tokens / completion_tokens (the data is there)
  • Only 14 of those also contain input_tokens / output_tokens (which is what the analytics queries read)

So the migration itself is fine — the issue (Bug 1) is specifically that the normalize_usage() function isn't consistently adding the input_tokens/output_tokens keys before the data gets saved to chat_message.usage. The analytics queries in get_token_usage_by_model and get_token_usage_by_user only look for input_tokens/output_tokens, so records that only have the OpenAI-format keys (prompt_tokens/completion_tokens) are invisible to the dashboard.

Thanks again for the fast turnaround on the PostgreSQL fix!


@Classic298 commented on GitHub (Feb 13, 2026):

One bug is fixed in dev now.


@xec-abailey commented on GitHub (Feb 13, 2026):

In case others are hitting this issue, will this fix prevent all token counts from showing up as 0 or is there additional configuration I'm missing to pull that through?

Image

@smorello87 commented on GitHub (Feb 13, 2026):

In case others are hitting this issue, will this fix prevent all token counts from showing up as 0 or is there additional configuration I'm missing to pull that through?

Image

Hi, the workaround described above did fix it for our instance, but we had to set up a cron job to run it every few hours. Hoping for a permanent fix in the next release.


@Joly0 commented on GitHub (Feb 18, 2026):

Hey guys, I don't want to hijack this issue, but I think I have a very similar one: the "User activity" token data is very inconsistent, not only on Postgres but also with SQLite as the database. A lot of users show a token usage of 0, while their message counts make clear that can't be true.

What I also noticed is that the token count for "Model usage" seems to be incorrect as well. Additionally, when I check per-model usage, the overview and the chats tab are completely empty, even for models with millions of tokens of usage:

Image Image Image

(It would also be nice if the model name weren't cut off like that.)


@Classic298 commented on GitHub (Feb 18, 2026):

Probably your migration didn't fully finish and not all data was migrated in your case.

the overview and the chats tab are completly empty

The overview is empty because no feedback has been given.

If the chats tab is empty, you should look into what data is stored in your table and whether you see any errors.


@smorello87 commented on GitHub (Feb 18, 2026):

Update: Bug 1 (usage key mismatch) is NOT fixed in v0.8.3

We upgraded to v0.8.3 today and confirmed that Bug 2 (PostgreSQL DISTINCT) is fixed — thank you!

However, Bug 1 (usage key name mismatch) persists in v0.8.3. After upgrading, all new messages still save usage with only prompt_tokens/completion_tokens — the input_tokens/output_tokens keys that analytics queries need are still missing.

Root cause: normalize_usage() is called in the streaming middleware (middleware.py:3483) and sets a local usage variable, but the actual database write goes through a different path:

  1. Chats.upsert_message_to_chat_by_id_and_message_id() merges the message into history["messages"]
  2. Then dual-writes to chat_message via ChatMessages.upsert_message(data=history["messages"][message_id])
  3. upsert_message() extracts data.get("usage") — which is the raw usage from the history dict, not the normalized version

The normalized usage variable from middleware.py:3483 is emitted to the frontend via WebSocket (chat:completion event) but never makes it into the history dict that gets passed to ChatMessages.upsert_message().

Suggested fix: Add normalization in ChatMessages.upsert_message() (in models/chat_messages.py) right before saving — this is the bottleneck where all code paths converge:

# After extracting usage from data:
if usage and isinstance(usage, dict):
    if "prompt_tokens" in usage and "input_tokens" not in usage:
        usage["input_tokens"] = int(usage["prompt_tokens"])
    if "completion_tokens" in usage and "output_tokens" not in usage:
        usage["output_tokens"] = int(usage["completion_tokens"])

This needs to be added in both the update and insert branches of upsert_message().

We've applied this as a build-time patch on our deployment and confirmed it works — new messages now have both key formats and analytics displays correctly.

@Joly0 — this is likely the same issue you're seeing with inconsistent token data. The migration itself is fine; it's that new messages are being saved with the wrong key names. If you can run SQL against your database, this one-liner will fix existing records:

UPDATE chat_message
SET usage = usage::jsonb || jsonb_build_object(
    'input_tokens', (usage->>'prompt_tokens')::int,
    'output_tokens', (usage->>'completion_tokens')::int)
WHERE usage->>'prompt_tokens' IS NOT NULL
  AND usage->>'input_tokens' IS NULL;

@Classic298 commented on GitHub (Feb 19, 2026):

That's correct, bug 1 is not fixed yet.

And there's already a PR, in case you didn't see it directly above your comment: https://github.com/open-webui/open-webui/pull/21542


@smorello87 commented on GitHub (Feb 19, 2026):

Thank you, I had missed that indeed.


@tjbck commented on GitHub (Feb 23, 2026):

#21675


@Podden commented on GitHub (Mar 1, 2026):

Do you track cache usage as well? I have an Anthropic pipe and I want to make it compatible with your analytics. Currently, with "Usage" active on a model, I'm just returning normal Anthropic Messages API data, but this would not be compatible, am I right?

  "usage": {
    "input_tokens": 2048,
    "cache_read_input_tokens": 1800,
    "cache_creation_input_tokens": 248,
    "output_tokens": 503
  }

What's the correct format to get picked up by analytics?
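
Based on the analysis earlier in this thread, the analytics queries read input_tokens/output_tokens, which Anthropic's native usage block already provides. A hedged sketch of a pipe-side mapping that also adds the OpenAI-style aliases for other consumers (the helper name is hypothetical, and folding cache tokens into the prompt count is an assumption, not confirmed project behavior):

```python
def map_anthropic_usage(usage):
    """Hypothetical pipe-side mapping: keep Anthropic's native keys
    (input_tokens/output_tokens, which the analytics queries discussed
    in this thread read) and add OpenAI-style aliases. Counting cache
    reads/creation as part of prompt_tokens is an assumption."""
    out = dict(usage)
    cache_tokens = (
        out.get("cache_read_input_tokens", 0)
        + out.get("cache_creation_input_tokens", 0)
    )
    out["prompt_tokens"] = out.get("input_tokens", 0) + cache_tokens
    out["completion_tokens"] = out.get("output_tokens", 0)
    return out

anthropic_usage = {
    "input_tokens": 2048,
    "cache_read_input_tokens": 1800,
    "cache_creation_input_tokens": 248,
    "output_tokens": 503,
}
mapped = map_anthropic_usage(anthropic_usage)
```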


@Doan-IT commented on GitHub (Mar 31, 2026):

Hello, is anyone following this issue?


@smorello87 commented on GitHub (Apr 1, 2026):

For what it's worth, we're still applying our patch to make it work.


@Classic298 commented on GitHub (Apr 1, 2026):

@smorello87 feel free to PR


@garrettashcroft1231-max commented on GitHub (Apr 1, 2026):

Yes, this issue is still reproducible on the latest version.


@Classic298 commented on GitHub (Apr 1, 2026):

@garrettashcroft1231-max Thanks, we know. Feel free to PR.


@smorello87 commented on GitHub (Apr 1, 2026):

PR submitted: #23322


Reference: github-starred/open-webui#58115