[GH-ISSUE #14037] issue: Duplicate Model Entries in Evaluation Leaderboard #17117

Closed
opened 2026-04-19 22:52:33 -05:00 by GiteaMirror · 1 comment

Originally created by @merrime-n on GitHub (May 19, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/14037

Check Existing Issues

  • I have searched the existing issues and discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

v0.6.10

Ollama Version (if applicable)

No response

Operating System

Ubuntu 24.04.2 LTS

Browser (if applicable)

Firefox 138.0.4 (64-bit)

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have listed steps to reproduce the bug in detail.

Expected Behavior

I’m encountering an issue where the Evaluation Leaderboard displays repeated rows for the same models, making the results difficult to interpret. Each model should appear only once in the leaderboard, with its evaluation metrics aggregated into a single row.
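For illustration only, here is a minimal TypeScript sketch of the deduplication described above. The row shape (`modelId`, `rating`, `won`, `lost`) and the merge policy are assumptions for the example, not Open WebUI's actual leaderboard types or code:

```typescript
// Hypothetical row shape; field names are illustrative and not
// Open WebUI's actual leaderboard types.
interface LeaderboardRow {
  modelId: string;
  rating: number;
  won: number;
  lost: number;
}

// Collapse duplicate rows for the same model into a single entry,
// summing win/loss counts and keeping the last rating seen.
function dedupeLeaderboard(rows: LeaderboardRow[]): LeaderboardRow[] {
  const byModel = new Map<string, LeaderboardRow>();
  for (const row of rows) {
    const merged = byModel.get(row.modelId);
    if (merged === undefined) {
      byModel.set(row.modelId, { ...row });
    } else {
      merged.won += row.won;
      merged.lost += row.lost;
      merged.rating = row.rating; // assumption: later rows are newer
    }
  }
  return [...byModel.values()];
}
```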

Actual Behavior

Duplicate rows appear for the same model, cluttering the leaderboard and skewing comparisons between models.

Steps to Reproduce

  1. Navigate to the Evaluation Leaderboard.
  2. Observe multiple identical entries for the same model (e.g., the same model name/settings listed more than once).

Logs & Screenshots

Screenshot 1: https://github.com/user-attachments/assets/40ca9622-cf58-4f2f-9698-8739835651e7
Screenshot 2: https://github.com/user-attachments/assets/ee7c9169-15f8-4420-913e-56fbeea5e42d

Additional Information

No response

GiteaMirror added the bug label 2026-04-19 22:52:33 -05:00

@tjbck commented on GitHub (May 19, 2025):

Can you share your connection settings as well?


Reference: github-starred/open-webui#17117