feat: Intelligently truncate long text #5170

Closed
opened 2025-11-11 16:13:41 -06:00 by GiteaMirror · 0 comments
Owner

Originally created by @cfy01 on GitHub (May 14, 2025).

Check Existing Issues

  • I have searched the existing issues and discussions.

Problem Description

![Image](https://github.com/user-attachments/assets/edfed6db-0b70-47d8-9a5a-7ccb18438d13)
The results of file retrieval and web search can push the prompt's token count past the model's maximum context.

Desired Solution you'd like

Support configuring a maximum token count for the context, so that retrieval and search results can be intelligently truncated before being sent to the upstream model, preventing requests from exceeding the model's maximum context.
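A minimal sketch of what such truncation might look like, assuming a configurable token budget and a pluggable token counter (the function name and the whitespace-based counting are illustrative, not open-webui's actual implementation):

```python
def truncate_to_token_budget(chunks, max_tokens, count_tokens=None):
    """Greedily keep whole retrieval/search chunks until the token budget is spent.

    chunks: list of text snippets (e.g. file-retrieval or web-search results)
    max_tokens: user-configured context budget
    count_tokens: optional callable returning a token count for a string
    """
    if count_tokens is None:
        # Crude approximation: ~1 token per whitespace-separated word.
        # A real deployment would use the model's tokenizer instead.
        count_tokens = lambda text: len(text.split())

    kept, used = [], 0
    for chunk in chunks:
        n = count_tokens(chunk)
        if used + n > max_tokens:
            break  # stop before the budget is exceeded
        kept.append(chunk)
        used += n
    return kept
```

Dropping whole chunks (rather than cutting mid-chunk) keeps each remaining snippet coherent; a smarter variant could rank chunks by relevance before truncating.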

Alternatives Considered

No response

Additional Context

No response


Reference: github-starred/open-webui#5170