feat: Settings to specify separate models for title, retrieval, web search, etc. #5682

Open
opened 2025-11-11 16:29:14 -06:00 by GiteaMirror · 0 comments

Originally created by @VanceVagell on GitHub (Jul 2, 2025).

Check Existing Issues

  • I have searched the existing issues and discussions.

Problem Description

I have different LLM models in my local setup that are better at different things. I'd like to specify an individual model for each of the "task model" features in the Settings > Interface > Tasks section. Right now it only lets you specify one local and/or one external model to cover all tasks.

Desired Solution

Could these settings please be broken out to let us select different models per task?

For example, I'd like to use a tiny, fast model for "Title Generation" because I don't especially care how accurate it is; it's more important that it not tie up system resources for long. Meanwhile, I want a very capable model for "Web Search Query Generation" and "Retrieval Query Generation". It would also be great if the "Current model" option were available for each task, because for some tasks I want the current conversation model to handle it (e.g. a strong thinking model might produce a better web search query for a complex discussion, so I'd leave that task set to "Current model" and let web search quality track my conversation model).
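To make the idea concrete, here is a minimal sketch of what per-task model settings with a "Current model" fallback could look like. All key names and model names below are hypothetical illustrations, not Open WebUI's actual configuration schema:

```python
# Hypothetical per-task model mapping. A value of None means
# "use the current conversation model" for that task.
TASK_MODELS = {
    "title_generation": "qwen2.5:0.5b",           # tiny, fast model
    "web_search_query_generation": None,          # fall back to current model
    "retrieval_query_generation": "llama3.1:70b", # strong, capable model
}

def resolve_task_model(task: str, current_model: str) -> str:
    """Return the configured model for a task, or the current chat model
    when the task is unset or explicitly set to the current model."""
    return TASK_MODELS.get(task) or current_model
```

With a mapping like this, title generation would always hit the small model, while web search query generation would follow whatever model the conversation is using.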

Alternatives Considered

I've tried using a tiny model for all tasks, but it does an awful job at web search queries and retrieval tasks, although it's fine for title generation.

I've tried using the current model I'm working with (i.e. if you don't specify a task model), but sometimes it's way overkill for title generation (e.g. if I'm using DeepSeek R1 then title generation takes forever).

So I'd really like to be able to specify what models to use for each task type.

Additional Context

No response


Reference: github-starred/open-webui#5682