feat: Unable to add Litellm Keys for webSearch #6507

Closed
opened 2025-11-11 16:57:47 -06:00 by GiteaMirror · 16 comments
Owner

Originally created by @Harshuqt on GitHub (Sep 26, 2025).

Check Existing Issues

  • I have searched the existing issues and discussions.

Problem Description

I'm trying to add my LiteLLM virtual key to Open WebUI to use Perplexity's web search and other models, but I haven't been able to add it correctly.

I can add the key as a model and chat with it successfully, but I can't get the web search feature to work when I try to configure it in the admin settings.

Why this is important
This is crucial because LiteLLM allows me to set usage limits on my API keys and monitor their activity.

Desired Solution you'd like

Could you please add a feature to select LiteLLM web search keys hosted on my local machine? This would allow Open WebUI to access them directly, so I can use the various web search LLM models I have configured in LiteLLM. I'm looking forward to this feature being implemented.

Alternatives Considered

No response

Additional Context

No response


@Classic298 commented on GitHub (Sep 26, 2025):

> I can add the key as a model and chat with it successfully, but I can't get the web search feature to work when I try to configure it in the admin settings.

LiteLLM is not Perplexity; for web search you must use the Perplexity API.

> Could you please add a feature to select litellm web search keys hosted on my local machine?

I don't think LiteLLM even has web search API endpoints.


@Classic298 commented on GitHub (Sep 26, 2025):

Just checked the docs and yep, no search engine endpoints.


@Harshuqt commented on GitHub (Sep 27, 2025):

![Image](https://github.com/user-attachments/assets/52a2f945-ba9b-4bec-ad68-e9549661447f)

Please see the attached image from my models hub panel, which lists 'Web search' as a feature. I'm a beginner learning to connect APIs, so I may have misunderstood. Could you please check if it's possible to run this feature through my localhost? My primary concern is managing costs and staying within my budget.

@Harshuqt commented on GitHub (Sep 27, 2025):

I also urge you to check out the [LiteLLM Web Search docs](https://docs.litellm.ai/docs/completion/web_search), as it has Perplexity support, though I may be wrong.


@Classic298 commented on GitHub (Sep 27, 2025):

This is for the models themselves to use web search.

Again, LiteLLM does not have a web search API endpoint.


@krrishdholakia commented on GitHub (Sep 27, 2025):

what does a websearch endpoint here look like? @Classic298

for context we do support websearch for models via chat completions - https://docs.litellm.ai/docs/completion/web_search


@Classic298 commented on GitHub (Sep 27, 2025):

@krrishdholakia by web search endpoint, Open WebUI means [search engines, callable via API](https://docs.openwebui.com/category/-web-search).

[I.e. Google PSE, Brave, Perplexity Search API (brand new), and a dozen more](https://docs.openwebui.com/getting-started/env-configuration#web-search)

[In fact, really, really many](https://docs.openwebui.com/getting-started/env-configuration/#web_search_engine)

What LiteLLM offers are models that have native web search or websearch capabilities provided by the inference provider.
But LiteLLM does not offer actual web search API endpoints, which is what the user was asking about here.

If LiteLLM introduces a proxy for web searches, I am sure Open WebUI can implement it easily.

I.e.:

  1. LiteLLM integrates with Perplexity Search and Google PSE
  2. User configures Google PSE or other search provider inside LiteLLM
  3. LiteLLM offers unified API endpoint for web searches and API key
  4. Open WebUI can access LiteLLM's web search endpoint with an API key and send requests; LiteLLM forwards those requests to the configured search provider (like Google PSE) and takes care of cost tracking (Google PSE typically costs $5 per 1,000 searches, with the first 100 each day free; the Perplexity Search API also costs $5 per 1,000, and Brave has some cheaper offers available).
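The four-step flow above could be sketched roughly like this. Everything here is hypothetical: the `/v1/search` path, the payload fields, and the helper name are assumptions, since no such LiteLLM endpoint existed at the time of this comment.

```python
# Hypothetical sketch of step 4: Open WebUI assembling a request for a
# unified LiteLLM search proxy. The "/v1/search" path and payload shape
# are assumptions, not a real LiteLLM API.

def build_search_request(base_url: str, api_key: str, query: str, count: int = 5):
    """Assemble the HTTP request Open WebUI would send to the proxy."""
    return {
        "url": f"{base_url.rstrip('/')}/v1/search",   # hypothetical endpoint
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": {"query": query, "count": count},     # hypothetical payload
    }

# The virtual key is what LiteLLM would use for cost tracking and limits.
req = build_search_request("http://localhost:4000", "sk-litellm-virtual-key",
                           "open webui litellm integration", count=3)
print(req["url"])  # → http://localhost:4000/v1/search
```

The point of the sketch is that Open WebUI never touches the underlying provider (Google PSE, Brave, ...); it only ever sees the proxy's URL and a virtual key.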

Would be a cool feature for LiteLLM to introduce indeed!

PS: Can you please respond to my emails, Krrish? I've been waiting on a reply for some days :D Thanks, appreciate it


@Classic298 commented on GitHub (Sep 27, 2025):

If you need additional information or clarification, let me know, happy to help.


@krrishdholakia commented on GitHub (Sep 27, 2025):

@Classic298

re: websearch

what endpoint spec would you want us to unify against?

re: emails

sure - what's the name i should look for?


@Classic298 commented on GitHub (Sep 27, 2025):

Hi @krrishdholakia

I don't think this is for me to decide; if anything, @tjbck would have to give directions here.
(I'll send this to Tim so he sees this and can also throw in his 2 cents)

Just my 2 cents on the topic: if you provide a unified web search endpoint, you should think about the MANY different search engines that exist, their many different endpoints, and how they return data. Some return short content snippets of the websites, while others **only return the website title and the URL**. Some let you search for videos or academic or social media content, while others are as plain and dead simple as Google PSE. There are academic search engines, image search, video search, normal search, and so forth.

# But

**One thing is certain: if your unified endpoint is dead simple (and all the configuration happens on LiteLLM's side by configuring the providers), that makes it much simpler for Open WebUI to implement the LiteLLM endpoint, because all Open WebUI would have to do is send a request to the LiteLLM endpoint and specify 1) what to search for and 2) how many results to fetch.**
**[See implementation for perplexity_search API](https://github.com/open-webui/open-webui/blob/main/backend/open_webui/retrieval/web/perplexity_search.py)**
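As a rough illustration of that dead-simple contract: the caller supplies only a query and a result count, and gets back a normalized result list. The `link`/`title`/`snippet` fields below follow the shape Open WebUI's retrieval modules use; the provider-side field names (`url`, `description`) are made up for the example.

```python
# Sketch of the "dead simple" contract: normalize provider-specific results
# into the minimal common shape a client like Open WebUI consumes.
from dataclasses import dataclass

@dataclass
class SearchResult:
    link: str
    title: str
    snippet: str

def normalize(provider_results: list[dict], count: int) -> list[SearchResult]:
    """Map a provider-specific result list onto the minimal common shape."""
    return [
        SearchResult(
            link=r.get("url", ""),
            title=r.get("title", ""),
            # providers disagree on field names; fall back between them
            snippet=r.get("snippet") or r.get("description", ""),
        )
        for r in provider_results[:count]
    ]

raw = [
    {"url": "https://example.com/a", "title": "A", "snippet": "first hit"},
    {"url": "https://example.com/b", "title": "B", "description": "second hit"},
    {"url": "https://example.com/c", "title": "C", "snippet": "third hit"},
]
results = normalize(raw, count=2)
print(len(results), results[1].snippet)  # → 2 second hit
```

All the provider-specific mapping would live on LiteLLM's side; the client only ever sees the normalized shape.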

To be fair, image search using AI sounds enticing, and video search too. I don't think anyone has this yet, besides Perplexity.

I personally like Perplexity's AI service; I use it daily, in fact, because it searches across YouTube, GitHub, Reddit, normal Google searches, and even academic paper websites at the same time, and has superior results in my experience.

If you build such an API endpoint in LiteLLM, you really should ask yourself

  • do we want to offer such configuration options (if so, how?)
    • would you configure it on LiteLLMs end (configuring the provider) or on the API call end (configuring each request individually based on API call parameters)?
    • how to handle parameters that are available for one provider but not for another (assuming you'll implement search engine fallbacks), since some APIs like Perplexity Search and Brave offer multiple configuration options, different API endpoints, and different pricing (Brave has about three pricing tiers) depending on what you need
  • how do we implement such vastly different providers into one single endpoint
  • and well, many more

re: emails

Pinged you on discord #litellm-enterprise, would appreciate a quick DM :)

Best!


@tjbck commented on GitHub (Sep 27, 2025):

Our web search config does not utilize chat completion endpoints and is managed entirely separately.


@tjbck commented on GitHub (Sep 27, 2025):

@krrishdholakia Just re-read the whole thread. Here's the vendor-agnostic implementation we currently support: https://github.com/open-webui/open-webui/blob/main/backend/open_webui/retrieval/web/external.py. This might be what you're looking for. Hope that helps!
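A hedged sketch of what a provider would need to implement to satisfy that vendor-agnostic path: accept a query plus a result count and return a flat list of results. The exact request and response field names should be checked against the linked `external.py`; this is only an approximation.

```python
# Approximate sketch of the server side of a vendor-agnostic search
# endpoint: take a JSON body with a query and a count, return a flat
# list of results. Field names are illustrative, not taken from external.py.

def handle_external_search(body: dict, max_count: int = 10) -> list[dict]:
    """What a search proxy would need to implement to satisfy the contract."""
    query = body["query"]
    count = min(int(body.get("count", 5)), max_count)  # cap the fan-out
    # A real implementation would forward `query` to Google PSE, Brave, etc.
    # and track cost per virtual key; here we fabricate placeholder results.
    return [
        {"link": f"https://example.com/{i}",
         "title": f"Result {i} for {query}",
         "snippet": "…"}
        for i in range(count)
    ]

print(len(handle_external_search({"query": "litellm", "count": 3})))  # → 3
```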


@ishaan-jaff commented on GitHub (Oct 21, 2025):

@tjbck this is great! Thanks for sharing your implementation!


@ishaan-jaff commented on GitHub (Oct 22, 2025):

@tjbck we support this now: https://github.com/BerriAI/litellm/pull/15780

Our API is Perplexity Search API compatible, so if you allow someone to set an API base for Perplexity, it should work out of the box.
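In other words, the same client code could target either service just by swapping the API base. The `/search` path and payload below are illustrative only, not taken from either project's docs.

```python
# Sketch of the "Perplexity-compatible" idea: identical client code,
# pointed either at Perplexity directly or at a local LiteLLM proxy by
# swapping the API base. Endpoint path and payload fields are illustrative.

def perplexity_style_search(api_base: str, api_key: str, query: str) -> dict:
    """Build the request; only the base URL and key change between providers."""
    return {
        "url": f"{api_base.rstrip('/')}/search",
        "headers": {"Authorization": f"Bearer {api_key}",
                    "Content-Type": "application/json"},
        "json": {"query": query},
    }

direct = perplexity_style_search("https://api.perplexity.ai", "pplx-key", "q")
proxied = perplexity_style_search("http://localhost:4000", "sk-virtual-key", "q")
print(direct["url"], proxied["url"])
```

Routing through the proxy is what gives the user the per-key budgets and usage monitoring the original request was about.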


@Classic298 commented on GitHub (Oct 22, 2025):

@Harshuqt


@Harshuqt commented on GitHub (Oct 25, 2025):

Thank you for fixing the issue, but I still can't use its key out of the box. Is there a simpler way, like just selecting perplexity or perplexity_search under web search, so that I could use any model + web search combo inside WebUI?

![Image](https://github.com/user-attachments/assets/2e33ee85-568a-4a9f-aa8d-1cf40f025d65)

I may be wrong; your provided solutions might be better than my approach.

Thank you for looking into the issue.


Reference: github-starred/open-webui#6507