[GH-ISSUE #11745] issue: Evaluations/leaderboards page & API endpoint slow due to profile_image_url and backgroundImageUrl #31870

Closed
opened 2026-04-25 05:45:54 -05:00 by GiteaMirror · 4 comments

Originally created by @lumitry on GitHub (Mar 16, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/11745

Originally assigned to: @tjbck on GitHub.

Check Existing Issues

  • I have searched the existing issues and discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

v0.5.20

Ollama Version (if applicable)

No response

Operating System

Server on Ubuntu 22.04, webui hosted via Docker

Browser (if applicable)

No response

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have listed steps to reproduce the bug in detail.

Expected Behavior

As an admin user, loading the page (open-webui base URL)/admin/evaluations should respond quickly.

The API endpoint for evaluations/feedbacks (/api/v1/evaluations/feedbacks/all) should (most likely) not include extraneous information such as every last one of the user's settings.

Actual Behavior

It can take many minutes to load, sometimes north of 15 minutes, depending on the connection speed to the server.

If you send a GET request to the relevant API endpoint (/api/v1/evaluations/feedbacks/all; requires a bearer token), the problem is plain: every feedback object (from what I can tell) includes both the profile image (profile_image_url) and the background image (backgroundImageUrl) as base64-encoded strings:

[{"id":"(id redacted)","user_id":"(user id redacted)","version":0,"type":"rating","data":{"rating":1,"model_id":"qwq:latest","sibling_model_ids":["llama3.1:latest"],"reason":"attention_to_detail","comment":"","tags":[],"details":{"rating":9}},"meta":{"arena":true,"model_id":"desktop-arena","message_id":"(message id)","message_index":6,"chat_id":"(chat id)","base_models":{"qwq:latest":null,"llama3.1:latest":null}},"created_at":1742144903,"updated_at":1742144951,"user":{"id":"(user ID redacted)","name":"(name redacted)","email":"(email addr redacted","role":"admin","profile_image_url":"data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/4gHYSUNDX1BST0ZJTEUAAQEAAAHIAAAAAAQwAABtbnRyUkdCIFhZWiAH4AABAAEAAAAAAABhY3NwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAA9tYAAQAAAADTLQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAlkZXNjAAAA8AAAACRyWFlaAAABFAAAABRnWFlaAAABKAAAABRiWFlaAAABPAAAABR3dHB0AAABUAAAABRyVFJDAAABZAAAAChnVFJDAAABZAAAAChiVFJDAAABZAAAAChjcHJ0AAABjAAAADxtbHVjAAAAAAAAAAEAAAAMZW5VUwAAAAgAAAAcAHMAUgBHAEJYWVogAAAAAAAAb6IAADj1AAADkFhZWiAAAAAAAABimQAAt (... continues on for quite a while)

There is a lot of seemingly extraneous data included in the response, but the biggest waste of data is by far the background image URL. That said, the profile image URL also doesn't need to be sent multiple times, nor does it need to be in this query from what I can tell. Evaluations (as displayed on the admin page) include the profile image, but there ought to be far more efficient (and less redundant) ways of fetching it.
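One way the redundancy described above could be addressed is by stripping the base64-heavy fields from each embedded user object before the response is serialized. A minimal, hypothetical sketch (the helper name and `HEAVY_FIELDS` set are illustrative, not Open WebUI's actual code; field names mirror the payload shown):

```python
# Hypothetical sketch: drop base64-bearing fields from the embedded
# user object so the feedback listing stays small. Clients could fetch
# avatars separately (and cache them) via a dedicated endpoint.
import json

HEAVY_FIELDS = {"profile_image_url", "settings"}  # settings carries backgroundImageUrl

def slim_user(user: dict) -> dict:
    """Return a copy of the user dict without the heavyweight fields."""
    return {k: v for k, v in user.items() if k not in HEAVY_FIELDS}

# Synthetic feedback entry shaped like the response above, with a
# ~100 KB avatar and a ~500 KB background image standing in for real data.
feedback = {
    "id": "f1",
    "type": "rating",
    "user": {
        "id": "u1",
        "name": "Alice",
        "role": "admin",
        "profile_image_url": "data:image/jpeg;base64," + "A" * 100_000,
        "settings": {"ui": {"backgroundImageUrl": "data:image/png;base64," + "B" * 500_000}},
    },
}

feedback["user"] = slim_user(feedback["user"])
print(len(json.dumps(feedback)))  # well under 1 KB instead of ~600 KB
```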

Steps to Reproduce

  1. As a user who has left feedback on model responses, open your account settings page and set a profile image. The larger the image, the better, for the sake of triggering this behavior.
  2. Open the "Interface" page of settings and set a Chat Background Image.
  3. As an admin user, navigate to the evaluations page (/admin/evaluations) and wait for it to load—the issue here is the load times. Alternatively, for a better view of the problem, send a GET request to the /api/v1/evaluations/feedbacks/all endpoint (including a valid admin bearer token). The response includes the profile_image_url and backgroundImageUrl values multiple times as described in the "Actual Behavior" section, wasting a large amount of bandwidth every time this page/endpoint is hit.
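To quantify the waste in step 3, one can fetch the endpoint (e.g. with curl and an admin bearer token), parse the JSON, and tally how much of the payload is avatar data. A small, hypothetical helper (the function name is illustrative; the field path matches the response shown in "Actual Behavior"):

```python
# Hypothetical helper: given the parsed feedback list, report total
# serialized size versus bytes spent on embedded profile images.
import json

def payload_breakdown(feedbacks: list) -> dict:
    """Tally total response size vs. bytes consumed by avatar data URIs."""
    total = len(json.dumps(feedbacks))
    images = sum(
        len(fb.get("user", {}).get("profile_image_url", ""))
        for fb in feedbacks
    )
    return {"total_bytes": total, "image_bytes": images}

# Synthetic example with one fake 100 KB avatar:
sample = [{
    "id": "f1",
    "user": {"id": "u1", "profile_image_url": "data:image/jpeg;base64," + "A" * 100_000},
}]
stats = payload_breakdown(sample)
print(stats["image_bytes"], "of", stats["total_bytes"], "bytes are avatar data")
```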

Logs & Screenshots

Browser console (nothing too relevant?):

+layout.svelte:472 Backend config: Object
+layout.svelte:76 connected (NOTE: redacting this, since it looks token-adjacent, LMK if it's important)
+layout.svelte:95 user-list Object
+layout.svelte:100 usage Object
+layout.svelte:95 user-list Object
content.js:2 Uncaught (in promise) TypeError: crypto.randomUUID is not a function
    at _s (content.js:2:34977)
    at RO (content.js:2:744451)
Unchecked runtime.lastError: A listener indicated an asynchronous response by returning true, but the message channel closed before a response was received
Unchecked runtime.lastError: A listener indicated an asynchronous response by returning true, but the message channel closed before a response was received
Unchecked runtime.lastError: A listener indicated an asynchronous response by returning true, but the message channel closed before a response was received
Unchecked runtime.lastError: A listener indicated an asynchronous response by returning true, but the message channel closed before a response was received
Unchecked runtime.lastError: A listener indicated an asynchronous response by returning true, but the message channel closed before a response was received
Unchecked runtime.lastError: A listener indicated an asynchronous response by returning true, but the message channel closed before a response was received
Unchecked runtime.lastError: A listener indicated an asynchronous response by returning true, but the message channel closed before a response was received
Unchecked runtime.lastError: A listener indicated an asynchronous response by returning true, but the message channel closed before a response was received
Unchecked runtime.lastError: A listener indicated an asynchronous response by returning true, but the message channel closed before a response was received
Unchecked runtime.lastError: A listener indicated an asynchronous response by returning true, but the message channel closed before a response was received

Docker container logs:

2025-03-16 17:47:52.791 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - (CLIENT IP):53753 - "GET /admin/evaluations HTTP/1.1" 304 - {}

2025-03-16 17:47:53.460 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - (CLIENT IP):53753 - "GET /api/config HTTP/1.1" 200 - {}

2025-03-16 17:47:53.487 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - (CLIENT IP):53753 - "GET /api/v1/auths/ HTTP/1.1" 200 - {}

2025-03-16 17:47:53.521 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - (CLIENT IP):53753 - "GET /api/config HTTP/1.1" 200 - {}

2025-03-16 17:47:53.541 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - (CLIENT IP):53753 - "GET /api/changelog HTTP/1.1" 200 - {}

2025-03-16 17:47:53.565 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - (CLIENT IP):53758 - "GET /api/v1/users/user/settings HTTP/1.1" 200 - {}

2025-03-16 17:47:53.593 | INFO     | open_webui.routers.ollama:get_all_models:300 - get_all_models() - {}

2025-03-16 17:47:53.788 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - (CLIENT IP):53758 - "GET /api/models HTTP/1.1" 200 - {}

2025-03-16 17:47:54.939 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - (CLIENT IP):53758 - "GET /api/v1/configs/banners HTTP/1.1" 200 - {}

2025-03-16 17:47:54.964 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - (CLIENT IP):53758 - "GET /api/v1/tools/ HTTP/1.1" 200 - {}

2025-03-16 17:47:55.003 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - (CLIENT IP):53753 - "GET /api/v1/channels/ HTTP/1.1" 200 - {}

2025-03-16 17:48:04.181 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - (CLIENT IP):53753 - "GET /api/v1/evaluations/feedbacks/all HTTP/1.1" 200 - {}

2025-03-16 17:48:04.181 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - (CLIENT IP):53758 - "GET /api/version/updates HTTP/1.1" 200 - {}

2025-03-16 17:48:04.203 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - (CLIENT IP):53759 - "GET /api/v1/chats/all/tags HTTP/1.1" 200 - {}

2025-03-16 17:48:04.285 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - (CLIENT IP):53759 - "GET /api/v1/chats/pinned HTTP/1.1" 200 - {}

2025-03-16 17:48:06.411 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - (CLIENT IP):53759 - "GET /api/v1/folders/ HTTP/1.1" 200 - {}

2025-03-16 17:48:06.432 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - (CLIENT IP):53758 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}

2025-03-16 17:48:06.627 | INFO     | uvicorn.protocols.http.httptools_impl:send:476 - (CLIENT IP):53758 - "GET /api/v1/chats/?page=2 HTTP/1.1" 200 - {}

Additional Information

I'm using Tailscale. That shouldn't change anything about the curl response, but it might be part of why loading the evaluations page is so slow even after I remove my profile and background images, especially since it means I've had to do a few weird things to get my Docker container to see the URL of the device I'm running Ollama on.

GiteaMirror added the bug label 2026-04-25 05:45:54 -05:00

@stevessr commented on GitHub (Mar 21, 2025):

For me, there is no option to change the URL in the frontend. And of course, knowledge libraries save their own reference to the owner, so a temporary workaround is to reset it in the database.

One thing to optimize would be to remove knowledge from the models endpoint.


@i-iooi-i commented on GitHub (Mar 22, 2025):

Thank you for this issue. I spent a lot of time troubleshooting, and I finally know why loading is so slow: it turns out the background image is included in the response.


@tjbck commented on GitHub (Mar 31, 2025):

Addressed with d55735dc1e035f6da4c022b2ec6acde6567f6332


@GrayXu commented on GitHub (Apr 8, 2025):

This commit likely didn't fix the issue.
Opening the dialogue page now triggers excessive profile_image_url requests from v1/models to fetch each model's image. Instead of using base64, these requests should point to a static image URL (ideally served via CDN). This would avoid repeatedly requesting the same images and significantly reduce network traffic.
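The approach suggested here could be sketched as follows: replace each inline base64 avatar with a stable per-user URL that the browser can cache. This is a hypothetical illustration, not Open WebUI's actual code; the route path and function names are assumptions:

```python
# Hypothetical sketch of serving avatars via a cacheable URL instead of
# embedding data: URIs in every response. Route path is illustrative.
def avatar_url(user_id: str) -> str:
    """A stable per-user URL the browser (or a CDN) can cache."""
    return f"/api/v1/users/{user_id}/profile/image"

def rewrite_model_entry(model: dict) -> dict:
    """Return a copy of a model entry with any inline base64 avatar
    swapped for a reference URL; the original dict is left untouched."""
    meta = dict(model.get("meta", {}))
    img = meta.get("profile_image_url", "")
    if img.startswith("data:"):
        meta["profile_image_url"] = avatar_url(model["user_id"])
    return {**model, "meta": meta}

entry = {
    "id": "m1",
    "user_id": "u1",
    "meta": {"profile_image_url": "data:image/png;base64,AAAA"},
}
print(rewrite_model_entry(entry)["meta"]["profile_image_url"])
```

With this shape, repeated views of the same avatar cost one request plus cache hits, rather than re-downloading the base64 blob inside every API response.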

Reference: github-starred/open-webui#31870