feat: Unable to add Litellm Keys for webSearch #6507
Originally created by @Harshuqt on GitHub (Sep 26, 2025).
Problem Description
I'm trying to add my LiteLLM virtual key to Open WebUI to use Perplexity's web search and other models, but I haven't been able to add it correctly.
I can add the key as a model and chat with it successfully, but I can't get the web search feature to work when I try to configure it in the admin settings.
Why this is important
This is crucial because LiteLLM allows me to set usage limits on my API keys and monitor their activity.
Desired Solution you'd like
Could you please add a feature to select LiteLLM web search keys hosted on my local machine? This would allow Open WebUI to access them directly, so I can use the various web search LLM models I have configured in LiteLLM. I'm looking forward to this feature being implemented.
Alternatives Considered
No response
Additional Context
No response
@Classic298 commented on GitHub (Sep 26, 2025):
LiteLLM is not Perplexity - for web search you must use the Perplexity API.
I don't think LiteLLM even has web search API endpoints.
@Classic298 commented on GitHub (Sep 26, 2025):
Just checked the docs and yep, no search engine endpoints.
@Harshuqt commented on GitHub (Sep 27, 2025):
I also urge you to check out the LiteLLM Web Search Docs, as it has Perplexity support, though I may be wrong.
@Classic298 commented on GitHub (Sep 27, 2025):
This is for the models themselves to use web search
Again, LiteLLM does not have a web search API endpoint
@krrishdholakia commented on GitHub (Sep 27, 2025):
what does a websearch endpoint here look like? @Classic298
for context we do support websearch for models via chat completions - https://docs.litellm.ai/docs/completion/web_search
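A minimal sketch of what that looks like, going by the linked docs (the model name and options are taken from the docs' examples):

```python
# Per https://docs.litellm.ai/docs/completion/web_search: web search
# happens inside a normal chat completion, not via a search endpoint.
from litellm import completion

response = completion(
    model="openai/gpt-4o-search-preview",  # a search-capable model from the docs
    messages=[{"role": "user", "content": "What was a positive news story from today?"}],
    web_search_options={"search_context_size": "medium"},  # low | medium | high
)
print(response.choices[0].message.content)
```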
@Classic298 commented on GitHub (Sep 27, 2025):
@krrishdholakia by websearch endpoint, Open WebUI means search engines, callable via API.
E.g. Google PSE, Brave, the Perplexity Search API (brand new), and like a dozen more.
Really, really many, in fact.
What LiteLLM offers are models that have native web search or websearch capabilities provided by the inference provider.
But LiteLLM does not offer actual web search API endpoints, which is what the user was asking about here.
If LiteLLM introduces a proxy for web searches, I am sure Open WebUI can implement it easily.
Would be a cool feature for LiteLLM to introduce indeed!
PS: Can you pls respond to my emails krrish? I've been waiting on a reply for some days :D Thanks, appreciate it
@Classic298 commented on GitHub (Sep 27, 2025):
If you need additional information or clarification, let me know, happy to help.
@krrishdholakia commented on GitHub (Sep 27, 2025):
@Classic298
re: websearch
what endpoint spec would you want us to unify against?
re: emails
sure - what's the name I should look for?
@Classic298 commented on GitHub (Sep 27, 2025):
Hi @krrishdholakia
I don't think this is for me to decide; if anything, @tjbck would have to give directions here.
(I'll send this to Tim so he sees this and can also throw in his 2 cents)
Just my 2 cents on the topic: if you provide a unified web search endpoint, you should think about the MANY different search engines that exist, their many different endpoints, and how they return data. Some return short content snippets of the websites, while others only return the website title and the URL. Some let you search for videos or academic or social media content, while others are as plain and dead simple as Google PSE. There are academic search engines, image search, video search, normal search, and so forth.
One thing is certain, though: if your unified endpoint is dead simple (and all the configuration happens on LiteLLM's side by configuring the providers), that makes it much simpler for Open WebUI to implement the LiteLLM endpoint, because all Open WebUI would have to do is send a request to the LiteLLM endpoint and specify 1) what to search for and 2) how many results to fetch. See the sketch below.
See implementation for perplexity_search API
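To make the "dead-simple" shape concrete, a purely hypothetical sketch - the /v1/search path and field names below are invented for illustration and are not an existing LiteLLM API:

```python
# Hypothetical sketch only: /v1/search and these field names do not
# exist in LiteLLM; they illustrate the "dead-simple" unified endpoint
# described above (just a query plus a result count).
import requests

resp = requests.post(
    "http://localhost:4000/v1/search",  # assumed LiteLLM proxy base
    headers={"Authorization": "Bearer sk-litellm-virtual-key"},
    json={"query": "open webui litellm integration", "count": 5},
    timeout=30,
)
resp.raise_for_status()
for result in resp.json():  # assumed: a list of {title, url, snippet}
    print(result["title"], result["url"])
```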
To be fair, image search using AI sounds enticing - and video search too. I don't think anyone has this yet, besides Perplexity.
I personally like Perplexity's AI service - I use it daily, in fact - because it searches across YouTube, GitHub, Reddit, normal Google searches, and even academic paper websites at the same time, and it has superior results in my experience.
If you build such an API endpoint in LiteLLM, you really should ask yourself how you want to handle all these different engine types and response formats.
re: emails
Pinged you on Discord in #litellm-enterprise, would appreciate a quick DM :)
Best!
@tjbck commented on GitHub (Sep 27, 2025):
Our web search config does not utilize chat completion endpoints and is managed entirely separately.
@tjbck commented on GitHub (Sep 27, 2025):
@krrishdholakia Just re-read the whole thread. Here's the vendor-agnostic implementation we currently support: https://github.com/open-webui/open-webui/blob/main/backend/open_webui/retrieval/web/external.py - this might be what you're looking for. Hope that helps!
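For anyone implementing against that adapter, a rough stub of an external search provider it could call; the field names here (query/count in, link/title/snippet out) are my reading of the linked file at the time of writing, so verify against the source:

```python
# Rough stub of an "external" web search provider for Open WebUI's
# vendor-agnostic adapter (backend/open_webui/retrieval/web/external.py).
# Request/response field names are assumptions based on the linked file.
# Run with: uvicorn stub:app --port 8800
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/search")
async def search(request: Request):
    body = await request.json()
    query, count = body["query"], int(body.get("count", 3))
    # Stand-in results; a real provider would query an actual search backend.
    return [
        {
            "link": f"https://example.com/{i}",
            "title": f"Result {i} for {query!r}",
            "snippet": "...",
        }
        for i in range(count)
    ]
```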
@ishaan-jaff commented on GitHub (Oct 21, 2025):
@tjbck this is great! Thanks for sharing your implementation!
@ishaan-jaff commented on GitHub (Oct 22, 2025):
@tjbck we support this now: https://github.com/BerriAI/litellm/pull/15780
Our API is Perplexity Search API compatible, so if you allow someone to set an API base for Perplexity, it should work out of the box.
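Illustrative only: the request below assumes a LiteLLM proxy on localhost:4000 exposing a Perplexity-style /search route; the exact path and response fields should be confirmed against the linked PR:

```python
# Assumption: a LiteLLM proxy exposing a Perplexity-Search-compatible
# route, per BerriAI/litellm#15780. The /search path and the response
# shape ({"results": [{"title", "url", "snippet"}, ...]}) are assumed
# here; confirm against the PR before relying on them.
import requests

API_BASE = "http://localhost:4000"  # LiteLLM proxy instead of api.perplexity.ai

resp = requests.post(
    f"{API_BASE}/search",
    headers={"Authorization": "Bearer sk-litellm-virtual-key"},
    json={"query": "latest Open WebUI release"},
    timeout=30,
)
resp.raise_for_status()
for r in resp.json().get("results", []):
    print(r.get("title"), r.get("url"))
```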
@Classic298 commented on GitHub (Oct 22, 2025):
@Harshuqt
@Harshuqt commented on GitHub (Oct 25, 2025):
Thank you for fixing the issue, but I still can't use its key out of the box. Is there any simpler way, like just selecting perplexity or perplexity_search in the web search settings, so that it would be able to use any model + web search combo inside WebUI?
I may be wrong - your provided solution might be better than my approach.
Thank you for looking into the issue.