mirror of
https://github.com/open-webui/open-webui.git
synced 2026-03-09 23:35:09 -05:00
feat: role-based access control w/ multi-user support #26
Originally created by @Notarin on GitHub (Nov 6, 2023).
Description:
I'd like to request a user system, with simple anonymous access as the default. I like to have UIs like this web-facing so I can use them on the go, or maybe even share them with friends. However, with no barrier to entry, anyone who discovers the URL can spam requests. So I wish for a user system with anonymous access as a default that can be disabled, so as not to intrude on users who do not wish to partake in this feature.
Alternatives:
A simple password would do fine for the initial problem, but would not be preferable.
@Notarin commented on GitHub (Nov 6, 2023):
Just clarifying to be sure: my primary request is for the ability to create accounts, as well as the ability to disable anonymous logins.
@tjbck commented on GitHub (Nov 6, 2023):
Hi @Notarin @JDRay42, Thanks for the suggestions.
RBAC w/ multi-user support is also something I've been planning to implement for a while now. I guess the only thing preventing me from working on this feature, other than being occupied and overwhelmed with the amount of work, is that I'm unsure whether it would be a good idea to turn this project into a fully-fledged fullstack project, as it would introduce breaking changes.
As of right now, pretty much 100% of the feature has been implemented on the client side and only requires a static file server. The rationale behind keeping it static was that, at some point, we would just be able to embed the web UI with the Ollama server so that most people would just be able to use Ollama straight out of the box, instead of jumping through multiple hoops to just get the web UI working.
What do you guys think? It would also help my decision-making process if we could get a larger sample size, so that I can estimate an accurate percentage of people who would want the project to head in this direction. I personally think the pros of turning this project into a fullstack application outweigh the cons, and I would, in the near future, add backend components to this project to address #31, #49, #73, etc., since the Ollama maintainers don't seem all that interested in allowing people to serve static files from the root directory alongside their API routes.
EDIT: We've decided to make this project into a fullstack application with additional features, Stay tuned! 🚀
@tjbck commented on GitHub (Nov 6, 2023):
Just opened a poll here: #75
@ianderse commented on GitHub (Nov 6, 2023):
I'd love to see RBAC multi-user support as well. I agree that it would be ideal for the Ollama maintainers to support a webui out of the box, but if they are not interested then having this project implement features like this would be fantastic.
Unrelated note: I am currently implementing Unraid support for Ollama and Ollama-Webui. I have it working locally but I will let you know when it is available!
@JDRay42 commented on GitHub (Nov 6, 2023):
This may sound a bit pedantic, but RBAC and multi-user support are two different things. They kind of go hand-in-hand, though, and maybe one feature request will cover them both. @tjbck, do you prefer a lumped feature or individual ones? I'm guessing the former.
@Notarin commented on GitHub (Nov 7, 2023):
@tjbck
Well, I do see your intended vision of it being a simple static page where the clients make the requests themselves, but you cite simplicity. Simplicity, to me, is not cloning a GitHub project, opening a shell, navigating to the directory, downloading dependencies, executing via a shell, and, if I want it running reliably, writing up a system service. That does not spell simplicity to me. What does is an app that takes minimal clicks to install; something I can find on the web, bookmark, and be done with; something that can be set up and fired up in a single command without requiring the shell to remain open.
I think this project can go in two directions. One is becoming a very simple installable client run on client machines, at which point it may be a good idea to make it a stand-alone app, separate from the browser, maybe via Electron.
Or, it could become a server-side application hosted on a server, which may or may not also be the client machine, allowing individuals or even organizations to host their own "chat-gpt"s designed for serving to clients. This would also allow much further development, including granular controls, backend services and add-ons, a Whisper implementation, and so forth. Also, this does not remove the capability of a user hosting it themselves.
I am personally in favor of the server side implementation, as it provides the best of both worlds. However, it isn't the end of the world should this project not see this being its future, as another project could fill that void.
@Notarin commented on GitHub (Nov 7, 2023):
I have no idea what I was smoking; I could've sworn I read a defense of the static page being its simplicity.
Maybe I had an ADHD moment and one thought led to another.
But regardless, the rest of my post makes sense.
@JDRay42 commented on GitHub (Nov 22, 2023):
I'm excited that this is closed as "Implemented". Is there any documentation yet about how to configure it?
@tjbck commented on GitHub (Nov 22, 2023):
Hi, things have been partially implemented at the moment. I will create a separate PR documenting how to set things up, as well as the implementation of the separate chat history feature. Stay tuned, thanks!
@henrywithu commented on GitHub (Nov 26, 2023):
I totally understand your point. Simplicity is preferred. I'm deploying it on the public web now, just with simple Nginx user auth via .htpasswd to restrict access. It is definitely not the best approach, but it works for my use case.
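The setup described above might look something like the following sketch. This is an assumption about the deployment, not taken from the thread: the server name, file paths, and the upstream port (here 3000, a common choice for the web UI container) are all placeholders.

```nginx
# Create the password file first (requires apache2-utils / httpd-tools):
#   htpasswd -c /etc/nginx/.htpasswd myuser
server {
    listen 80;
    server_name example.com;  # placeholder domain

    location / {
        # Basic auth gate in front of the whole UI
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;

        # Proxy to the locally running web UI (port is an assumption)
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

As the comment notes, this only gates access at the proxy; it provides no per-user roles inside the app itself, which is why the RBAC feature was still needed.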
@oliverbob commented on GitHub (Dec 18, 2023):
I would love to see RBAC for administrators, like Bionic GPT has, while being able to control how it is served to the client as an Ollama-Webui via an API, so that it can be referenced (or become extensible) by any third-party application, like this simple Ollama UI.
It will be helpful to cloud administrators, local system admins, company/organization admins, and hosting service providers running Virtualmin or ISPConfig. Meaning it should be available for a wide variety of audiences and use cases, not just the imaginary few offline local/private users who generally can't afford a GPU.
Meaning it would have the features of all three as a full-stack, fully-fledged app that provides ease, affordability, and flexibility to a wide variety of users.
@turnercore commented on GitHub (Dec 24, 2023):
I think it's useful to have both, a super dead simple chat UI on top of ollama models to quickly test them and for simple single-user setups, and a full stack web application. Personally since this repo has some momentum and increasing interest, I'd probably spin off the simple version as a fork of this one, and continue to develop this repo into a full stack application which opens up a lot of feature opportunities like Multi-user, RAG, whisper voice support, api support, etc.
@tjbck commented on GitHub (Dec 24, 2023):
@turnercore, that's what we'll be doing! https://github.com/ollama-webui/ollama-webui/discussions/260 All the features you've listed are coming soon!
@tjbck commented on GitHub (Dec 27, 2023):
Hey everyone! Just finished working on multi-user w/ RBAC support in #216. I've been testing the webui to check for any issues, and so far everything's looking good. But I would greatly appreciate it if any of you could try out the rbac/auth feature and help me with the testing, for redundancy's sake! Make sure to back up all your chat logs, just in case.
I've been working non-stop and I feel like my brain isn't cooperating well, so if you have any suggestions for the installation doc for better clarity, I'm all ears. Let me know what you guys think!
@oliverbob commented on GitHub (Dec 27, 2023):
Thanks Timothy, can't wait to test it. Pulling the most recent commit.
New features for the new year.
Congrats!
@oliverbob commented on GitHub (Dec 27, 2023):
I appreciate all the work and the efforts you've made for the new features.
I've tried it on the dev branch. I have also filed a bug issue, #285, about the code view, which I don't see on the main branch.
The RBAC feature looks good. Some enhancement candidates could be:
To make the admin section more useful and for security reasons,
1.) Can we possibly include a feature to enable/disable the Settings > General > Ollama Server URL setting?
2.) Also, can we enable/disable the entire Settings > Models tab?
3.) Is it possible to have privatized public sharing of chats without going through OllamaHub, while keeping the option to share with the community? I'm also curious whether we can see what is being sent to this third-party service. So, if I may ask, what data is being sent to OllamaHub, and where can I find or modify it in the codebase so that I can integrate it into my local database?
Thanks.
@tjbck commented on GitHub (Dec 27, 2023):
@oliverbob, Thanks for all the suggestions! I do plan on adding most of the features you've requested in the near future. But for now, if you could provide feedback on strictly the multi-user w/ rbac feature, so that I can merge the PR to main ASAP, I'd appreciate it. What I'm most interested in is, how was the chat migration workflow? Thanks!
@tjbck commented on GitHub (Dec 27, 2023):
Alright guys, just merged the feature to main! When upgrading, make sure to add the -v ollama-webui:/app/backend/data flag to the docker command. It's been a long journey. Thank you for your continued support, and I'm looking forward to hearing your feedback!
@turnercore commented on GitHub (Dec 27, 2023):
Can't wait to see the improvements!
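For reference, a full upgrade command using the volume flag from @tjbck's comment above might look like this. The image name, tag, and port mapping are assumptions based on typical ollama-webui docker setups, not details confirmed in the thread:

```shell
# Stop and remove the old container first, then re-run with the data volume
# so chat history and the new user database persist across upgrades.
docker stop ollama-webui && docker rm ollama-webui

docker run -d \
  -p 3000:8080 \
  -v ollama-webui:/app/backend/data \
  --name ollama-webui \
  ollama-webui   # image name/tag is a placeholder; use the image you deployed
```

The named volume `ollama-webui` is created automatically on first run; the same `-v` flag on later runs reattaches it, which is what preserves data across upgrades.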
@oliverbob commented on GitHub (Dec 28, 2023):
@tjbck, big thanks mate. You have the greatest AI chat in the world!
By the way, I realized that most of the features can now be gated with manual `$user['role'] == 'admin'` checks in the JavaScript, for example in the Settings > Models view, which is good.
Will play around with the new commit when electricity comes back.
Hey, if you need some help with the docs, feel free to add me as a contributor. Or should I just open a PR? What do you think?
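The role-gating pattern mentioned above could be sketched as a small helper like the one below. This is a hypothetical illustration, not Open WebUI's actual API: the `user` object shape, the `canAccessFeature` function, and the feature names are all assumptions.

```javascript
// Hypothetical sketch of gating admin-only features on the user's role,
// in the spirit of a manual `user.role === 'admin'` check before rendering.
function canAccessFeature(user, feature) {
  // Admins can access everything, including admin-only settings panels.
  if (user.role === 'admin') return true;
  // Regular users are limited to non-admin features (names are made up).
  const adminOnly = ['models-settings', 'server-url-settings'];
  return !adminOnly.includes(feature);
}

// Example: a regular user cannot open the Models settings tab.
console.log(canAccessFeature({ role: 'user' }, 'models-settings'));  // false
console.log(canAccessFeature({ role: 'admin' }, 'models-settings')); // true
console.log(canAccessFeature({ role: 'user' }, 'chat'));             // true
```

In a Svelte view the same check would typically guard an `{#if ...}` block around the admin-only markup; keeping the check in one helper avoids scattering role strings across components.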