[GH-ISSUE #10124] Feat-Req: httpProxy for websearching #54436

Closed
opened 2026-05-05 16:16:19 -05:00 by GiteaMirror · 1 comment

Originally created by @kulukami on GitHub (Feb 16, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/10124

Feature Request

Right now web searching issues raw `requests` calls, and it would be quite useful to add an httpProxy option.

https://github.com/open-webui/open-webui/blob/main/backend/open_webui/retrieval/web/google_pse.py#L46

```python
def search_google_pse(
    api_key: str,
    search_engine_id: str,
    query: str,
    count: int,
    filter_list: Optional[list[str]] = None,
) -> list[SearchResult]:
    """Search using Google's Programmable Search Engine API and return the results as a list of SearchResult objects.
    Handles pagination for counts greater than 10.

    Args:
        api_key (str): A Programmable Search Engine API key
        search_engine_id (str): A Programmable Search Engine ID
        query (str): The query to search for
        count (int): The number of results to return (max 100, as PSE max results per query is 10 and max page is 10)
        filter_list (Optional[list[str]], optional): A list of keywords to filter out from results. Defaults to None.

    Returns:
        list[SearchResult]: A list of SearchResult objects.
    """
    url = "https://www.googleapis.com/customsearch/v1"
    headers = {"Content-Type": "application/json"}
    all_results = []
    start_index = 1  # Google PSE start parameter is 1-based

    while count > 0:
        num_results_this_page = min(count, 10)  # Google PSE max results per page is 10
        params = {
            "cx": search_engine_id,
            "q": query,
            "key": api_key,
            "num": num_results_this_page,
            "start": start_index,
        }
        response = requests.request("GET", url, headers=headers, params=params)
        response.raise_for_status()
        json_response = response.json()
        results = json_response.get("items", [])
        if results:  # check if results are returned. If not, no more pages to fetch.
            all_results.extend(results)
            count -= len(
                results
            )  # Decrement count by the number of results fetched in this page.
            start_index += 10  # Increment start index for the next page
        else:
            break  # No more results from Google PSE, break the loop

    if filter_list:
        all_results = get_filtered_results(all_results, filter_list)

    return [
        SearchResult(
            link=result["link"],
            title=result.get("title"),
            snippet=result.get("snippet"),
        )
        for result in all_results
    ]
```
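
For illustration, here is a minimal sketch of how a proxy could be threaded into that call. The setting names `WEB_SEARCH_HTTP_PROXY` / `WEB_SEARCH_HTTPS_PROXY` and the helper `get_search_proxies` are hypothetical, invented for this sketch; they are not existing Open WebUI configuration keys.

```python
# Sketch only: hypothetical proxy settings for the web search requests.
# WEB_SEARCH_HTTP_PROXY / WEB_SEARCH_HTTPS_PROXY are made-up names, not
# existing Open WebUI configuration keys.
import os
from typing import Optional

import requests


def get_search_proxies() -> Optional[dict[str, str]]:
    """Build a requests-style proxies dict from the hypothetical env vars."""
    http_proxy = os.environ.get("WEB_SEARCH_HTTP_PROXY")
    https_proxy = os.environ.get("WEB_SEARCH_HTTPS_PROXY", http_proxy)
    if not http_proxy and not https_proxy:
        return None  # no proxy configured; keep requests' default behaviour
    proxies: dict[str, str] = {}
    if http_proxy:
        proxies["http"] = http_proxy
    if https_proxy:
        proxies["https"] = https_proxy
    return proxies


# The request line in search_google_pse would then become something like:
# response = requests.request(
#     "GET", url, headers=headers, params=params, proxies=get_search_proxies()
# )
```

Passing `proxies=None` keeps the current behaviour, so a change along these lines would stay backwards compatible for users without a proxy.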

@ips972 commented on GitHub (Feb 16, 2025):

So many people have been asking for this fix - including me, since I have to run my setup behind an HTTP proxy.

It is such a simple fix, and yet it is still not resolved.

Btw: all search engines fail behind an HTTP proxy, because the engines only return result URLs and a common Python module then fetches those pages; that fetch is what breaks the site scraping.
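
For what it's worth, `requests` honours the standard `HTTP_PROXY` / `HTTPS_PROXY` environment variables by default, so exporting those can be a stop-gap wherever the fetch path goes through plain `requests`. A more explicit option is a shared session with its proxies set, which the search call and the result-page fetches could both reuse. The sketch below is illustrative only; `make_proxied_session` and the proxy URL are made up for the example and are not part of the codebase.

```python
# Sketch only: a shared session that routes both the search API call and the
# follow-up fetches of result URLs through the same proxy. The proxy URL and
# the helper name are placeholders, not real settings.
import requests


def make_proxied_session(proxy_url: str) -> requests.Session:
    """Return a requests.Session that sends all HTTP/HTTPS traffic via proxy_url."""
    session = requests.Session()
    session.proxies.update({"http": proxy_url, "https": proxy_url})
    return session


session = make_proxied_session("http://proxy.internal:3128")

# Both kinds of request would then share the proxy, e.g.:
# response = session.get("https://www.googleapis.com/customsearch/v1", params=params)
# page = session.get(result_url, timeout=10)
```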

Reference: github-starred/open-webui#54436