enh: many model chat ui #997

Closed
opened 2025-11-11 14:35:07 -06:00 by GiteaMirror · 4 comments
Owner

Originally created by @chrisoutwright on GitHub (May 21, 2024).

Is your feature request related to a problem? Please describe.
I'm always frustrated when adding new models on top compresses the viewable chat canvas vertically, making it harder to read and interact with the chat. This reduces the usability and overall user experience, especially when multiple models are involved.

Describe the solution you'd like
I would like a toggle that displays the set of models either in one line or sideways in a separate box. This way, the chat canvas height remains unaffected, ensuring a consistent and readable chat interface. Ideally, this toggle could be implemented as a button or a dropdown that lets users easily switch between compact and expanded views.

Describe alternatives you've considered

Horizontal Scroll: Allow the models to be displayed in a single horizontal line with a scroll bar if the models exceed the screen width.
Collapsible Menu: Implement a collapsible menu for the models that users can expand or collapse as needed.
Separate Section: Place the models in a separate section on the side of the chat interface, allowing the chat canvas to maintain its full height.
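The alternatives above amount to a small set of layout modes plus a user-facing toggle. A minimal sketch in Python of how such a toggle could cycle through modes (the `LayoutMode` and `cycle_layout` names are illustrative, not part of Open WebUI):

```python
from enum import Enum


class LayoutMode(Enum):
    """Illustrative layout modes for the model selector bar."""
    STACKED = "stacked"          # current behavior: selectors stack, shrinking the chat canvas
    ONE_LINE = "one_line"        # single horizontal line with overflow scroll
    COLLAPSIBLE = "collapsible"  # collapsed into an expandable menu
    SIDE_PANEL = "side_panel"    # separate section beside the chat canvas


def cycle_layout(current: LayoutMode) -> LayoutMode:
    """Advance to the next layout mode, wrapping around (e.g. bound to a toggle button)."""
    modes = list(LayoutMode)
    return modes[(modes.index(current) + 1) % len(modes)]
```

A single enum like this keeps the chat canvas height independent of the number of selected models, since only the selector bar's rendering changes per mode.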


@kwekewk commented on GitHub (May 26, 2024):

I suggest something similar to Gemini's drafts, with the ability to regenerate.


@dexwenway commented on GitHub (Jun 6, 2024):

This feature is very helpful for interacting with multiple models in a single conversation, and I hope that it can be improved in the future.


@Maralai commented on GitHub (Jun 15, 2024):

I would also like to share some thoughts on the current implementation of running multiple models side by side in the interface. While I appreciate the effort behind this feature, I have a few suggestions and observations based on my experience.

  1. User Interface Layout:

    • Current Side-by-Side Layout: I find the current narrower side-by-side layout less intuitive compared to when each model's response filled the entire screen width. The latter provided a cleaner and more focused experience per model.
    • Single Thread per Model: Previously, each model would respond in a separate thread (horizontal responses), allowing users to develop divergent branches of thought specific to each model. This approach felt more intuitive and was better suited to exploring the different reasoning patterns offered by different models.
  2. Prompt Targeting:

    • Issue with Subsequent Prompts: In the current setup, subsequent prompts are sent to all models, which is problematic when models provide disparate responses. Follow-up prompts are generally intended for a specific model's response rather than all at once.
    • Desired Functionality: It would be more effective if the interface allowed users to direct prompts to specific models. This could be achieved by:
      • Implementing a graphical indicator (e.g., a color border) to show which model's response a user is engaging with.
      • Providing an option to select one or multiple models for subsequent prompts.
      • Displaying a contextual label above the prompt text box to indicate which models will receive the next prompt.
      • Including a menu to select whether the prompt should be sent to one, some, or all models.
  3. Model Interaction Enhancement:

    • Incorporating a way to pass each selected model's response along with the original message could facilitate a “mixture of agents” approach, enhancing the capability to combine insights from multiple models effectively.
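The targeting behavior described in points 2 and 3 could be sketched as a routing step that sends a follow-up prompt only to explicitly selected models, optionally bundling each model's own previous response for a "mixture of agents" pass. All names here are hypothetical, not existing Open WebUI APIs:

```python
def route_prompt(prompt: str, selected: list[str],
                 last_responses: dict[str, str],
                 include_context: bool = False) -> dict[str, str]:
    """Build one outgoing message per selected model.

    When include_context is set, each model also receives its own previous
    response, so a follow-up can build on that specific answer rather than
    being broadcast identically to every model.
    """
    messages = {}
    for model in selected:
        if include_context and model in last_responses:
            messages[model] = (
                f"Previous answer:\n{last_responses[model]}\n\nFollow-up: {prompt}"
            )
        else:
            messages[model] = prompt
    return messages
```

The key point is that the result only contains entries for the selected models, which matches the request that follow-ups target one, some, or all models instead of always going to everyone.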

Ultimately, the current multi-model feature seems geared more towards zero-shot evaluation than iterative prompting. It would be immensely helpful if there were a setting to revert to the earlier interface behavior, enabling more targeted and model-specific interactions.

Thank you for considering these suggestions. I’m hopeful they can enhance the usability and effectiveness of the multi-model feature.


@chrisoutwright commented on GitHub (Jun 23, 2024):

What I originally meant is that the vertical cutoff (before we got side-by-side, but still with the top cutoff) is not ideal:
![image](https://github.com/open-webui/open-webui/assets/27736055/c27cc97f-2d29-4e86-8e3f-7e68604c0e6b)

Now, with side-by-side, I wish there were an option to specify the minimum width at which the output gets collapsed via an arrow. At the moment, the window needs to be quite slim before that happens:

![image](https://github.com/open-webui/open-webui/assets/27736055/c856c01c-1021-4a7f-8e56-f0f26b63f115)

This is really difficult to read this way (note how the third column gets nearly one token per line).
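The minimum-width idea boils down to a simple check: given the container width and the number of side-by-side columns, collapse the columns when each would fall below a user-configurable minimum. A sketch of that decision, where the 180-pixel default is an arbitrary assumption rather than anything Open WebUI defines:

```python
def should_collapse(container_px: int, num_models: int,
                    min_col_px: int = 180) -> bool:
    """Collapse side-by-side columns into expandable rows when each
    column would be narrower than the configured minimum width."""
    if num_models <= 1:
        return False  # a single response can always use the full width
    return container_px / num_models < min_col_px
```

Exposing `min_col_px` as a setting would let users decide at what point the "one token per line" columns shown above give way to a collapsed, arrow-expandable view.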


Reference: github-starred/open-webui#997