feat: add optional support to online providers? #329

Closed
opened 2025-11-11 14:17:42 -06:00 by GiteaMirror · 2 comments
Owner

Originally created by @SethBurkart123 on GitHub (Feb 21, 2024).

Is your feature request related to a problem? Please describe.
Sometimes you can't run certain larger models locally, e.g. a 70B Llama model.

Describe the solution you'd like
It would be useful to support alternative providers that users could connect simply by logging into their accounts, e.g. letting people log in with their Hugging Face or Perplexity accounts to make online-hosted models available. This would largely be a secondary feature, similar to the existing OpenAI API support.

I'd be happy for this to be implemented via the upcoming Custom Python Backend Actions, since it sits slightly outside the core functionality and some people may not want it built into the core. To make logging into these services possible from the frontend, there would just need to be a way of injecting extra settings into the frontend.

Thoughts?


@justinh-rahb commented on GitHub (Feb 21, 2024):

Take a look at [LiteLLM](https://github.com/BerriAI/litellm), an OpenAI-API-compatible proxy server for various AI APIs, including HF models. A few of us (myself included) are using it to serve multiple providers in WebUI instead of just OpenAI's models.
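For context on why the proxy approach works: LiteLLM exposes the same HTTP schema as OpenAI's chat completions API, so any OpenAI-style client payload can be sent to it unchanged. A minimal sketch of the request shape, where the base URL and model name are hypothetical placeholders, not values from this thread:

```python
import json

# Hypothetical local proxy address; LiteLLM routes OpenAI-style requests
# sent here to the configured backing provider (HF, Perplexity, etc.).
PROXY_BASE = "http://localhost:4000/v1"

def chat_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,  # the proxy maps this name to a backing provider
        "messages": [{"role": "user", "content": prompt}],
    }

# Example: a placeholder model identifier, for illustration only.
body = chat_payload("huggingface/some-70b-model", "Hello!")
print(json.dumps(body))
```

Because the payload is plain OpenAI schema, a frontend like WebUI only needs a configurable base URL to talk to such a proxy; no provider-specific code is required.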


@tjbck commented on GitHub (Feb 26, 2024):

LiteLLM proxy support has been added!


Reference: github-starred/open-webui#329