[GH-ISSUE #11021] Native Text-to-Speech (TTS) model support #53781

Closed
opened 2026-04-29 04:45:05 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @v-byte-cpu on GitHub (Jun 8, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11021

📣 Hello Ollama team!

I’m excited about Ollama’s vision and would love to contribute by helping add native Text‑to‑Speech (TTS) support. I noticed a couple of related feature requests already:

  • Issue #3265
  • Issue #7353

Here’s a rough plan of what I’d be glad to tackle — happy to refine this based on your needs:

Proposed TTS Integration Roadmap

1. Model Loading & Metadata

  • Extend support for audio-generation models (e.g. Coqui TTS, Bark, Dia, Orpheus TTS)
  • Detect TTS model formats (GGUF / specialized metadata headers) and load the appropriate components (a minimal detection sketch follows below)
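
  As a very rough illustration of the detection step, here is a minimal Go sketch. It only checks the GGUF magic and version (per the public GGUF spec); reading a metadata key such as general.architecture afterwards to pick TTS-specific components is an assumption about how this could work, not existing Ollama behaviour.

    package main

    import (
        "encoding/binary"
        "fmt"
        "os"
    )

    // sniffGGUF reads just enough of the header to decide whether the file is
    // GGUF at all; a real loader would then walk the key/value metadata table
    // and inspect fields such as "general.architecture".
    func sniffGGUF(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        var hdr struct {
            Magic   [4]byte // the ASCII bytes "GGUF"
            Version uint32  // little-endian format version
        }
        if err := binary.Read(f, binary.LittleEndian, &hdr); err != nil {
            return err
        }
        if string(hdr.Magic[:]) != "GGUF" {
            return fmt.Errorf("%s: not a GGUF file", path)
        }
        fmt.Printf("GGUF v%d: would now read metadata to pick TTS components\n", hdr.Version)
        return nil
    }

    func main() {
        if err := sniffGGUF(os.Args[1]); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }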

2. Audio output handling

  • Most modern TTS models already embed a vocoder and produce waveform/audio directly.

  • Ollama would only need to:

    • Capture the raw audio output (WAV/MP3/FLAC).
    • Stream it through REST and CLI APIs.
    • Optionally perform format conversion (e.g. to standard WAV/MP3 for clients); a conversion sketch follows below.
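
  To sketch the optional conversion layer: assuming the runner already hands back a complete WAV buffer and an ffmpeg binary with MP3 support is on PATH, the conversion can be a thin shell-out. This is illustrative only, not existing Ollama code.

    package main

    import (
        "bytes"
        "os/exec"
    )

    // wavToMP3 shells out to ffmpeg, feeding WAV bytes on stdin and reading
    // the encoded MP3 back from stdout.
    func wavToMP3(wav []byte) ([]byte, error) {
        cmd := exec.Command("ffmpeg",
            "-loglevel", "error",
            "-f", "wav", "-i", "pipe:0", // read WAV from stdin
            "-f", "mp3", "pipe:1", // write MP3 to stdout
        )
        cmd.Stdin = bytes.NewReader(wav)
        var out bytes.Buffer
        cmd.Stdout = &out
        if err := cmd.Run(); err != nil {
            return nil, err
        }
        return out.Bytes(), nil
    }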

3. API & CLI support

  • Implement POST /v1/audio/speech endpoint — consistent with the OpenAI API (a request-shape sketch follows the CLI example below).

  • Support non-streaming responses initially:

    • Return audio/mp3, audio/wav, or audio/flac as the standard response.
  • Design API and CLI to be future-compatible with streaming responses, if supported by models.

  • CLI example:

    ollama tts run <model> --text "Hello"
    # optional future flag: --stream
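
  To illustrate the proposed non-streaming endpoint, here is a rough Go handler sketch. The request fields mirror OpenAI's /v1/audio/speech (model, input, voice, response_format, speed); synthesize() is a placeholder for the actual model call, and none of this is Ollama's real server code.

    package main

    import (
        "encoding/json"
        "net/http"
    )

    type speechRequest struct {
        Model          string  `json:"model"`
        Input          string  `json:"input"`
        Voice          string  `json:"voice,omitempty"`
        ResponseFormat string  `json:"response_format,omitempty"` // "wav", "mp3", ...
        Speed          float64 `json:"speed,omitempty"`
    }

    // synthesize is a stand-in for running the loaded TTS model; it would
    // return the encoded audio and its MIME type.
    func synthesize(req speechRequest) ([]byte, string, error) {
        return []byte{}, "audio/wav", nil // placeholder
    }

    func speechHandler(w http.ResponseWriter, r *http.Request) {
        var req speechRequest
        if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        audio, contentType, err := synthesize(req)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        w.Header().Set("Content-Type", contentType)
        w.Write(audio)
    }

    func main() {
        http.HandleFunc("/v1/audio/speech", speechHandler)
        http.ListenAndServe(":11434", nil)
    }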
    

4. Dependencies & Packaging

  • Bundle or list required audio libs (PyTorch/TensorFlow, CUDA vocoder utils)
  • Support cross-platform binary builds (Linux/macOS/Windows)

5. Testing & Benchmarks

  • Unit tests comparing generated audio to reference samples.
  • Benchmark end-to-end latency on typical hardware (a small benchmark sketch follows below).
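
  A tiny latency benchmark sketch, assuming the /v1/audio/speech endpoint proposed above is being served by a local instance (the model name is a placeholder):

    package tts_test

    import (
        "bytes"
        "net/http"
        "testing"
    )

    // BenchmarkSpeechLatency times end-to-end requests against a locally
    // running instance; run with `go test -bench=.`.
    func BenchmarkSpeechLatency(b *testing.B) {
        body := []byte(`{"model":"example-tts","input":"Hello","response_format":"wav"}`)
        for i := 0; i < b.N; i++ {
            resp, err := http.Post("http://localhost:11434/v1/audio/speech",
                "application/json", bytes.NewReader(body))
            if err != nil {
                b.Fatal(err)
            }
            resp.Body.Close()
        }
    }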

6. Docs & Example Models

  • Update docs with TTS usage examples
  • Add a sample TTS model to Ollama registry (e.g. Coqui or ChatTTS)

Let me know if it makes sense to:

  • Expand one of the existing issues (#5424 or #7353)
  • Convert this into a detailed design proposal or spike PR
  • Assign a “good-first-task” to get the ball rolling
  • Start with a draft PR adding the basic /v1/audio/speech endpoint with minimal CLI/REST support (targeting e.g. Dia TTS model first).

Please let me know which option makes the most sense.

Thanks for building such a strong local inference tool — I’m excited to help it speak! 🎙️

GiteaMirror added the feature request label 2026-04-29 04:45:05 -05:00
Author
Owner

@jcc10 commented on GitHub (Jun 10, 2025):

As I stated in one of the other issues, this would be really helpful for my use-case of having one GPU and wanting only one program handling GPU usage. (In this case Ollama)

Several items to note/consider for future work:

  1. Multiple Voices: There should be a standard API variable for changing voices. There should also be a standard API for listing voices along with any metadata regarding the voices. (EG: this voice is designed for EN_GB, possibly also a quality rating or training sample size)
  2. Arbitrary Voice Description: Some models such as ParlerTTS allow for a text description of a voice as the source. This option should have some kind of standardised way to both indicate it is supported and to use it. (Possibly a prefix to the description such as DESCRIPTION:A calm and relaxing male voice...)
  3. Arbitrary Voice Cloning: Some models such as CoquiTTS support audio inputs to attempt to clone. This is probably harder to implement than the Voice Description, though it would potentially produce more consistent audio(?), as it would have a baseline to match instead of simply a text description. It may be a good idea to allow for a separate endpoint that uploads the sample audio file and caches it, then allows for generation, instead of uploading the audio file each time? Or that could be me just trying to be overly efficient.
    If this is implemented with a sample cache, we could save the sample as a temporary file with its MD5 as the filename and then use the text field with something like CLONE: followed by the MD5 of the sample.
  4. Voice Mixing: Some models such as Kokoro allow for selecting multiple voices and mixing them with ratios. I am unsure if this is worth implementing as a separate mode or if it could just be implemented similar to the suggested DESCRIPTION prefix, possibly a MIX prefix.
  5. Output formatting layer: as you stated, there should probably be a common format conversion layer if only because most models output raw WAV data which is horribly inefficient. It shouldn't be too hard to make some default FFMPEG settings for MP3 / Opus / FLAC and have them use it.

The voice settings are probably the most important part, because the OpenAI API for TTS neither reports what voices are available (it's assumed you are going to check the docs, and there are no dynamic options) nor supports arbitrary voices or voice cloning. Both of these would be nice to at minimum plan for now, with an eye to how they could be integrated and extended later. Obviously my preferred option is having prefix options for voices such as DESCRIPTION:, CLONE:, MIX:, and a more generic CUSTOM:, including the : to indicate it is not a voice but instead a feature prefix. This would also allow for future prefixes without having to modify the Ollama parts of the API.
Edit: Just made a quick example regex for how the options could be formatted, with some examples here: regexr.com/8fajp
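
For illustration only (the linked regex may differ), a rough Go version of that prefix idea, using the prefixes suggested above:

    package main

    import (
        "fmt"
        "regexp"
    )

    // voicePrefix matches a feature prefix on the voice field; anything that
    // does not match is treated as a plain voice name.
    var voicePrefix = regexp.MustCompile(`^(DESCRIPTION|CLONE|MIX|CUSTOM):(.+)$`)

    func parseVoice(v string) (prefix, value string) {
        if m := voicePrefix.FindStringSubmatch(v); m != nil {
            return m[1], m[2]
        }
        return "", v
    }

    func main() {
        fmt.Println(parseVoice("some_builtin_voice"))                         // plain voice name
        fmt.Println(parseVoice("DESCRIPTION:A calm and relaxing male voice")) // text description
        fmt.Println(parseVoice("CLONE:9e107d9d372bb6826bd81d3542a419d6"))     // cached sample, keyed by MD5
    }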

Finally (and this may need to turn into a separate issue), if other media formats are added on (Image / Video / Audio input, for example) it may be worth making a more generic file cache option that allows for uploading an input once and running it against multiple models, or against the same model multiple times, instead of uploading the full file each time it needs to be run. Though, again, this might just be me craving efficiency, and a more generic cache is likely outside the scope of this issue. However, it may be worth having the option of storing media outputs in the cache as well in some cases, with retrieval back from said cache.

Author
Owner

@pdevine commented on GitHub (Jun 10, 2025):

Hey guys, I appreciate the comments here for TTS. We've been looking at TTS and STT but there aren't any concrete timelines as of yet. I'll go ahead and close this as a dupe.

Author
Owner

@v-byte-cpu commented on GitHub (Jun 10, 2025):

Hi! Thanks a lot for the reply and clarification. I totally understand closing this as a dupe.

Just to add: this ticket had some more concrete ideas on API shape, testing, and practical implementation steps, so if useful — I’d be happy to contribute those ideas back into the original ticket or help with a design doc.

Also — I’m really interested in helping to implement this feature (TTS support) — if there’s any internal discussion or early draft direction, I’d love to contribute or collaborate.

Please let me know if there’s any way I can help move this forward!


Reference: github-starred/ollama#53781