mirror of
https://github.com/open-webui/open-webui.git
synced 2026-05-06 19:08:59 -05:00
feat: tag llms #683
Originally created by @RealMrCactus on GitHub (Apr 22, 2024).
Originally assigned to: @tjbck on GitHub.
Is your feature request related to a problem? Please describe.
Browsing models is hard when you have too many of them, or many different variants of the same model.
Describe the solution you'd like
The ability to make folders to organize them in the selector
Describe alternatives you've considered
Tags to search by?
Additional context
Not really
@stephen304 commented on GitHub (Apr 22, 2024):
I would also like to see this, specifically I would find it super helpful if there was a categorize feature. I have about 16 models downloaded now and it can be hard to remember which one has what kinds of strengths.
Adding categories would enable the model dropdown to have little headings and potentially allow each category to be collapsed.
Some categories I wish I could make include:
If tags were a possibility, I might also want to tag each model with which model is its "parent", like tagging all llama2-based models with llama2, so if I want to ask a completely different model a question I can be sure I choose one with a different lineage.
Edit: Another consideration might be whether this should be a system-wide setting by the admin or a per-user categorization/tagging. Personally it doesn't matter to me since I'm only using this myself, but if I shared with friends I could see some use in being able to set up categories/tags on models for them. On the other hand, I could see a user wanting to make their own categories, so I think it could go either way.

Maybe it's beyond an MVP to allow non-admin users to uncheck a "use system model categories/tags" option, enabling their own categorization/tagging settings and ignoring the system settings for that user. Personally I think the MVP should be system-wide categorization, with a further ticket for a per-user setting if there's interest/demand for such a feature.
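The grouped-dropdown idea above could be sketched roughly like this (the model names, tags, and dict shape are purely illustrative, not Open WebUI's actual data model):

```python
from collections import defaultdict

# Hypothetical model records with tags; names and tags are examples only.
models = [
    {"name": "llama2:7b", "tags": ["llama2", "general"]},
    {"name": "codellama:13b", "tags": ["llama2", "coding"]},
    {"name": "mistral:7b", "tags": ["mistral", "general"]},
]

def group_by_tag(models):
    """Group model names under each of their tags, the way a dropdown
    with collapsible category headings might render them."""
    groups = defaultdict(list)
    for m in models:
        for tag in m["tags"]:
            groups[tag].append(m["name"])
    return dict(groups)

grouped = group_by_tag(models)
# A "parent lineage" tag like "llama2" then collects every derived model,
# so picking a model outside a given lineage is just picking another group.
```

A model can appear under several headings at once, which matches the lineage-tagging idea: a fine-tune tagged both with its base model and its specialty shows up in both groups.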
@RealMrCactus commented on GitHub (Apr 23, 2024):
I think it should be both: system-wide tags set by the admin and per-user tags, with the user able to choose whether to see the admin ones or only their own.
@silentoplayz commented on GitHub (Apr 23, 2024):
I did have an idea for the model selector dropdown to be able to Star models to mark them as a favorite, so that starred/favorited models are prioritized and displayed first over other models in the model selector dropdown. Just thought I'd bring this idea over to this thread and make it known where discussion is occurring.
@RealMrCactus commented on GitHub (Apr 23, 2024):
Maybe users could have stars and the admins can make the folders/tags then
Edit: If there is more than one tag on a model, a dropdown listing the tags could be shown instead, which would be more visually pleasing.
@tjbck commented on GitHub (May 31, 2024):
Model tagging feature has been added to our dev branch, let me know if you encounter any issues!
@tjbck commented on GitHub (May 31, 2024):
FYI, You can also set the position of the model from the model workspace by dragging it to the position you want.
@RealMrCactus commented on GitHub (Jun 1, 2024):
damn nice thanks 👍🏽
@silentoplayz commented on GitHub (Jun 1, 2024):
Sweet! This is fantastic!
@justinh-rahb commented on GitHub (Jun 1, 2024):
https://github.com/open-webui/open-webui/assets/52832301/ead534c6-51db-453a-8fa7-d15fbc45067e
https://github.com/open-webui/open-webui/assets/52832301/58892f21-cfc6-4831-9908-2b72a64b0faf
@ctdavi commented on GitHub (May 11, 2025):
I'd like to see which server and which inference provider along with the model name in the model-picker dropdown. Or, when there's more than one server and/or inference provider, maybe a filter where you choose which server and/or inference provider you want, and then the model picker refreshes with the models available on the chosen server + inference provider.
Why? Well, I have an Ollama running local to the Open WebUI server's machine, another Ollama running on a different computer, and on that same computer I also have a vLLM provider running. Sometimes I have the same model on either server and/or either provider and can't clearly discern which is which. I'd imagine there are plenty of folks with far more complex situations than my mere piddly three possibilities.
If adding the server + inference provider (or whatever type of provider) isn't possible within the model picker, maybe at least show them on screen above the model picker with link(s) for changing them, so it's all right there and upfront. It can be kind of key in some instances, especially where I grabbed a model for vLLM but haven't quite got it working yet: I need to pick that one while working on making it work, but pick the other one (on Ollama) when I need to do some actual inference.
Ah crap, just noticed this issue is closed. I clicked my way to it from another issue that linked this one as superseded. Bleh. I typed too much already, so now I will hit "Comment" and feel better.
@Classic298 commented on GitHub (May 11, 2025):
@ctdavi you can do so by editing the connection and adding a prefix
@silentoplayz commented on GitHub (May 12, 2025):
@ctdavi You can add a specific tag (or tags) for a connection, or you can choose to add a prefix to all model IDs provided by an API connection.
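The prefix approach can be sketched in a few lines; the separator, function name, and prefix strings below are assumptions for illustration, not Open WebUI's actual implementation:

```python
def prefixed_model_ids(connection_prefix, model_ids):
    """Prepend a per-connection prefix so identical model names coming
    from different servers/providers stay distinguishable when merged
    into one model-picker list. Separator choice is illustrative."""
    return [f"{connection_prefix}.{mid}" for mid in model_ids]

# The same model pulled on two backends no longer collides:
local = prefixed_model_ids("local-ollama", ["llama3:8b"])
remote = prefixed_model_ids("gpu-box-vllm", ["llama3:8b"])
combined = local + remote  # two distinct entries for the "same" model
```

This addresses exactly the ambiguity described above: two connections serving `llama3:8b` produce two distinct, self-describing entries in the merged list.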
@ctdavi commented on GitHub (May 12, 2025):
Thanks y'all! Those ideas, prefixes & tags, both sound very workable for my solo setup.
Scalable? Doesn't sound like it, but for just me, that'll do just fine!