feat: Speech voices should be more granular #5233

Open
opened 2025-11-11 16:15:16 -06:00 by GiteaMirror · 1 comment
Owner

Originally created by @nonlinear on GitHub (May 19, 2025).

Check Existing Issues

  • I have searched the existing issues and discussions.

Problem Description

When using Azure TTS, you can set the voice only at the system level. Some voices can sometimes also be set per user, but inconsistently: some work per user, while others crash.

Desired Solution you'd like

It would be best if voice settings follow the suggested cascading path:

  1. system
  2. group
  3. user
  4. agent (model)
  5. conversation

with later levels taking precedence over earlier ones.

This way we have enough granularity for different voices and languages (my group is bilingual, but my individual users are not).
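The cascade described above amounts to a "most specific non-empty setting wins" lookup. A minimal sketch of that resolution logic (the level names and the `resolve_voice` helper are hypothetical illustrations, not Open WebUI's actual API):

```python
from typing import Optional

# Hypothetical cascade order; later levels override earlier ones.
CASCADE = ["system", "group", "user", "agent", "conversation"]

def resolve_voice(settings: dict[str, Optional[str]]) -> Optional[str]:
    """Return the voice from the most specific level that sets one."""
    voice = None
    for level in CASCADE:
        value = settings.get(level)
        if value:  # unset or empty levels fall through to the previous value
            voice = value
    return voice

# Example: the user-level voice overrides the system default,
# and an unset conversation-level voice does not clear it.
print(resolve_voice({
    "system": "en-US-JennyNeural",
    "user": "pt-BR-FranciscaNeural",
    "conversation": None,
}))
# -> pt-BR-FranciscaNeural
```

This shape keeps each level independent: an admin can still set a system-wide default, while a bilingual group or a single conversation can override it without touching the other levels.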

Alternatives Considered

No response

Additional Context

No response


@silentoplayz commented on GitHub (Nov 6, 2025):

Related - https://github.com/open-webui/open-webui/issues/15143


Reference: github-starred/open-webui#5233