[GH-ISSUE #14646] feat: Add support for AI search providers #32851

Closed
opened 2026-04-25 06:43:00 -05:00 by GiteaMirror · 1 comment

Originally created by @Davixk on GitHub (Jun 4, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/14646

Check Existing Issues

  • I have searched the existing issues and discussions.

Problem Description

Currently, the WebUI backend only supports serving search results as a collection of individual results (either via RAG or directly, depending on user settings).

However, some services already do this work themselves: they analyze the results and return a summary of them.

Perplexity already does this, yet its generated response is currently treated as a single snippet attached to one result, leaving the others empty.
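
To illustrate the current behaviour, here is a rough sketch (the class and field names are assumptions for illustration, not necessarily open-webui's actual models): the provider's synthesized answer ends up as the snippet of a single result, while the remaining cited links carry no text at all.

```python
# Illustrative sketch of the problem, not actual open-webui code.
from typing import Optional
from pydantic import BaseModel

class SearchResult(BaseModel):
    link: str
    title: Optional[str] = None
    snippet: Optional[str] = None

def perplexity_response_to_results(answer: str, citations: list[str]) -> list[SearchResult]:
    if not citations:
        return []
    # The entire AI-generated answer becomes the snippet of the first citation...
    results = [SearchResult(link=citations[0], snippet=answer)]
    # ...while the remaining citations come back with empty snippets, which
    # downstream code then treats as ordinary (but contentless) web results.
    results += [SearchResult(link=link) for link in citations[1:]]
    return results
```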

Desired Solution

A new, optional search pipeline that, instead of returning full snippets, returns the AI-generated summary along with other metadata (sources used, etc.).

I believe this should be a toggleable option for users.

I am working on a PR for this, but would appreciate input on how this should be architecturally implemented in the backend.
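
For discussion, one possible shape for such a pipeline's output, sketched with hypothetical names rather than any existing open-webui types:

```python
# Hypothetical shape of a "summary" search pipeline result (names are illustrative).
from pydantic import BaseModel

class SummarizedSearchResult(BaseModel):
    summary: str         # the provider's AI-generated answer
    sources: list[str]   # URLs the provider cited
    provider: str        # e.g. "perplexity"

def format_for_context(result: SummarizedSearchResult) -> str:
    # Inject the summary into the model context as-is, with the cited sources
    # appended so the model can attribute claims instead of guessing.
    source_lines = "\n".join(f"- {url}" for url in result.sources)
    return f"{result.summary}\n\nSources:\n{source_lines}"
```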

Additional Context

This would also indirectly alleviate another crucial issue with the Web Search tool: context usage.
Currently, search results are not checked against the model's context window, so many search providers can and will return more text than the window allows, which surfaces as an error on the client side.
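
For illustration, a token-budget guard over the collected snippets would mitigate this; a minimal sketch, assuming tiktoken is available and the budget comes from a user setting (the helper name is hypothetical, not an existing open-webui function):

```python
# Hypothetical context-budget guard: trims snippets so their combined token
# count stays under a configurable budget before they reach the model.
import tiktoken

def trim_to_budget(snippets: list[str], max_tokens: int, model: str = "gpt-4o") -> list[str]:
    enc = tiktoken.encoding_for_model(model)
    kept, used = [], 0
    for snippet in snippets:
        tokens = enc.encode(snippet)
        if used + len(tokens) > max_tokens:
            # Truncate the snippet that crosses the budget and stop.
            remaining = max_tokens - used
            if remaining > 0:
                kept.append(enc.decode(tokens[:remaining]))
            break
        kept.append(snippet)
        used += len(tokens)
    return kept
```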

Edit: the Perplexity integration doesn't discard the generated result, but uses it as a single snippet for the first link it finds, which makes it easy for LLMs to misinterpret.


@rgaricano commented on GitHub (Jun 4, 2025):

Recently added: external web loader option.


Reference: github-starred/open-webui#32851