[GH-ISSUE #11798] Feature Request: Add Audio Input Support for Multimodal Models #69885

Open
opened 2026-05-04 19:43:06 -05:00 by GiteaMirror · 11 comments

Originally created by @ebowwa on GitHub (Aug 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11798

## Feature Description

Add support for audio input to Ollama, enabling multimodal models like Qwen2-Audio to process audio files alongside text prompts, similar to how image inputs currently work.

## Background

Currently, Ollama supports image inputs for vision models through the `Images []ImageData` field. However, audio-capable models like Qwen2-Audio can be loaded but cannot receive actual audio input, limiting their functionality.

## Proposed Implementation

### 1. API Extension

Follow the existing image input pattern by adding:

```go
// AudioData represents raw audio binary data
type AudioData []byte

// Add to GenerateRequest and ChatRequest:
Audio []AudioData `json:"audio,omitempty"`
```
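
For a sense of the end-to-end flow, here is a sketch of a Go client call assuming the proposed field landed on `api.GenerateRequest`. `ClientFromEnvironment` and `Generate` are the Go client's existing entry points; `Audio` and `api.AudioData` are hypothetical:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/ollama/ollama/api"
)

func main() {
	client, err := api.ClientFromEnvironment()
	if err != nil {
		log.Fatal(err)
	}

	wav, err := os.ReadFile("recording.wav")
	if err != nil {
		log.Fatal(err)
	}

	req := &api.GenerateRequest{
		Model:  "qwen2-audio",
		Prompt: "What do you hear?",
		// Audio is the proposed field mirroring Images; it does not exist yet.
		Audio: []api.AudioData{wav},
	}

	// Stream the response tokens to stdout as they arrive.
	err = client.Generate(context.Background(), req, func(resp api.GenerateResponse) error {
		fmt.Print(resp.Response)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```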

### 2. Audio Processing Pipeline

- WAV, MP3, OGG format support
- Audio resampling to model requirements (see the sketch below)
- Mel-spectrogram generation for models that need it
- Feature extraction (MFCC, spectrograms)
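
To make the resampling step concrete, a minimal linear-interpolation sketch in Go; a production pipeline would use a windowed-sinc or polyphase filter to avoid aliasing, and none of this is existing Ollama code:

```go
package audio

// Resample converts mono PCM samples from srcRate to dstRate using
// linear interpolation between neighboring input samples.
func Resample(samples []float32, srcRate, dstRate int) []float32 {
	if srcRate == dstRate || len(samples) == 0 {
		return samples
	}
	ratio := float64(srcRate) / float64(dstRate)
	out := make([]float32, int(float64(len(samples))/ratio))
	for i := range out {
		pos := float64(i) * ratio
		j := int(pos)
		if j+1 >= len(samples) {
			out[i] = samples[len(samples)-1]
			continue
		}
		frac := float32(pos - float64(j))
		out[i] = samples[j]*(1-frac) + samples[j+1]*frac
	}
	return out
}
```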

### 3. CLI Support

```bash
ollama run qwen2-audio "What do you hear?" --audio recording.wav
ollama run qwen2-audio "Transcribe this audio" --audio speech.mp3
```

### 4. Client Library Support

```python
# Python example
import ollama

with open('audio.wav', 'rb') as f:
    audio_data = f.read()

response = ollama.generate(
    model='qwen2-audio',
    prompt='What sounds are in this audio?',
    audio=[audio_data]
)
```

## Use Cases

1. **Speech Transcription**: Convert audio to text
2. **Audio Analysis**: Identify sounds, emotions, or music
3. **Audio Q&A**: Answer questions about audio content
4. **Real-time Processing**: Process audio streams (future enhancement)

## Benefits

- Enables full functionality of audio models like Qwen2-Audio
- Consistent API with the existing image input pattern
- Opens possibilities for multimodal applications
- Community has already shown interest in audio support

## Implementation Considerations

- **Memory Management**: Audio processing requires additional memory allocation
- **Format Support**: Start with WAV, incrementally add MP3, OGG, FLAC
- **Performance**: Consider batch vs. real-time processing needs
- **Backward Compatibility**: Ensure no breaking changes to the existing API (see the note below)
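
On the backward-compatibility point: because the proposed field uses `omitempty`, requests that never set it marshal exactly as they do today. A small self-contained check, using a stand-in struct rather than Ollama's actual types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// GenerateRequest is a stand-in for a request type gaining the proposed field.
type GenerateRequest struct {
	Model  string   `json:"model"`
	Prompt string   `json:"prompt"`
	Audio  [][]byte `json:"audio,omitempty"` // proposed field
}

func main() {
	req := GenerateRequest{Model: "qwen2-audio", Prompt: "hi"}
	b, _ := json.Marshal(req)
	// Prints {"model":"qwen2-audio","prompt":"hi"}: no "audio" key,
	// so existing clients and stored payloads are unaffected.
	fmt.Println(string(b))
}
```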

## Willing to Contribute

I'm willing to contribute to this implementation. I've already:

- Set up a test environment with Qwen2-Audio
- Created a proof-of-concept audio processing server
- Tested RTSP stream capture and processing

Would love to collaborate with the maintainers on the best approach for integration.

## Related Models

Models that would benefit from this feature:

- Qwen2-Audio (7B parameters)
- Future OpenAI Whisper integration
- Other audio-language models

## References

- Qwen2-Audio Model: https://huggingface.co/Qwen/Qwen2-Audio-7B-Instruct
- Current workaround requires external audio processing services

Would this be a welcome addition to Ollama? Happy to discuss implementation details and submit a PR.


@ebowwa commented on GitHub (Aug 7, 2025):

## Related Issues

I've found several related issues discussing audio/multimodal support that would benefit from this feature:

- #11568 - Another audio input discussion
- #11243 - Multi-Modal Support (broader feature request including audio)
- #11021 - Native Text-to-Speech (TTS) model support (closed as duplicate)
- #7233 - Earlier audio-related discussion

This shows there's significant community interest in audio capabilities for Ollama. The proposed implementation in this issue would address many of these requests by:

  1. Enabling audio input for transcription models
  2. Supporting multimodal models that combine audio + text
  3. Providing a foundation for TTS models (audio generation could follow similar patterns)
  4. Creating a consistent API pattern following the existing image input approach

I'm happy to consolidate efforts with anyone working on similar features. The implementation approach I've outlined could serve as a starting point for broader multimodal support.


@ebowwa commented on GitHub (Aug 7, 2025):

## Previous Work on Audio Support

I found some previous attempts at adding audio/speech capabilities:

### Closed PRs

- **PR #6241 - "Speech Prototype"** by @royjhan (closed Nov 2024)
  - Implemented whisper.cpp integration
  - Custom GGML format for WAV audio
  - Was waiting for momentum from foundational speech models
- **PR #6121 - "Speech mod feature"** by @mytechnotalent (closed Nov 2024)
  - Added speech mod functionality

Both PRs were closed, but they show there's been technical exploration of audio support. The whisper.cpp approach in #6241 is particularly relevant as it demonstrates:

  1. Integration with existing audio models (Whisper)
  2. GGML format compatibility
  3. WAV audio processing

### What's Different Now

- More multimodal models available (Qwen2-Audio, etc.)
- Growing community demand (multiple issues filed)
- Clearer pattern from the existing image input implementation
- Better understanding of requirements

Would love to build on this previous work. @royjhan's whisper.cpp integration could be a great starting point for the audio processing pipeline.

Should we revive and expand on the whisper.cpp approach, or would a more general audio framework be preferred?


@ebowwa commented on GitHub (Aug 8, 2025):

## Audio Support Implementation for Ollama 🎤

I've implemented initial audio input support for Ollama to address this issue. The implementation includes a custom Go audio processor with FFT, mel-spectrogram generation, and comprehensive audio feature extraction.

### What's Implemented

- ✅ Custom audio processor in Go (`audio/processor.go`)
- ✅ WAV parsing, resampling, FFT, and mel-spectrogram generation (see the mel-scale sketch below)
- ✅ Modified API to accept audio in messages (`api/types.go`)
- ✅ Integration in chat endpoint (`server/routes.go`)
- ✅ Audio extraction in prompt processing (`server/prompt.go`)
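
For context on the mel step: frequencies are warped onto the mel scale before filterbank binning. A generic sketch of that conversion using the common HTK formula; this is illustrative, not code from the fork:

```go
package audio

import "math"

// HzToMel converts a frequency in Hz to the mel scale (HTK formula);
// MelToHz inverts it.
func HzToMel(hz float64) float64  { return 2595 * math.Log10(1+hz/700) }
func MelToHz(mel float64) float64 { return 700 * (math.Pow(10, mel/2595) - 1) }

// FilterbankCenters returns nFilters center frequencies in Hz, spaced
// uniformly on the mel scale between lowHz and highHz.
func FilterbankCenters(nFilters int, lowHz, highHz float64) []float64 {
	lowMel, highMel := HzToMel(lowHz), HzToMel(highHz)
	centers := make([]float64, nFilters)
	for i := 0; i < nFilters; i++ {
		mel := lowMel + (highMel-lowMel)*float64(i+1)/float64(nFilters+1)
		centers[i] = MelToHz(mel)
	}
	return centers
}
```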

### Testing Results

Deployed and tested on **Hetzner CAX31** (16GB RAM, ARM):

- Successfully integrated with **Qwen2-Audio-7B-Instruct** GGUF
- Audio data flows correctly through the pipeline
- Model receives and processes audio input
- ⚠️ Quantized model shows limited transcription accuracy
- ⚠️ Full model requires >16GB RAM (32GB recommended)

### Code Example

```python
import requests, base64

with open('audio.wav', 'rb') as f:
    audio_data = f.read()

response = requests.post('http://localhost:11434/api/chat', json={
    'model': 'qwen2-audio',
    'messages': [{
        'role': 'user',
        'content': 'Transcribe this audio.',
        'audio': [base64.b64encode(audio_data).decode('utf-8')]
    }]
})
```

### Fork Available

🔗 **Fork with audio support**: https://github.com/ebowwa/ollama/tree/audio-support

### Known Limitations

- GGUF quantization impacts audio capabilities
- Large models need 32GB+ RAM for optimal performance
- Currently supports WAV format (other formats planned)

### Next Steps

- Test with Whisper and smaller audio models
- Add support for MP3, OGG, FLAC formats
- Implement audio streaming
- Optimize for resource-constrained environments

This is a working proof-of-concept that successfully integrates audio processing into Ollama's architecture. Further optimization and testing with different models will improve transcription accuracy.

Happy to collaborate on getting this merged or improving the implementation! 🚀


@Vinay-Umrethe commented on GitHub (Aug 10, 2025):

I already posted a related request for this, specifically to be able to use omni-modal capabilities: audio and video inputs, and audio outputs.

No response so far; I think that's simply because Ollama, which is llama.cpp-based, doesn't support it.

I've tried many models like MiniCPM-o, Janus, and Qwen2.5-Omni 3B, which are more or less any-to-any models as categorized by Hugging Face.

llama.cpp is only capable of image input, so the issue is with llama.cpp.

What we actually need is simply a better engine than Torch, Transformers, or whatever: one that dynamically runs models on GPU as well as CPU without much setup, with an easy-to-integrate API, so we have all of a model's capabilities in one place. Like an omni-engine.


@ewb-git commented on GitHub (Oct 26, 2025):

Needge.


@johnnyq commented on GitHub (Jan 19, 2026):

Yes, Whisper-like support would be great, especially to let us use it with FusionPBX to transcribe voicemails.


@benevide commented on GitHub (Apr 5, 2026):

I am also interested in this


@milindmore22 commented on GitHub (Apr 10, 2026):

Given that Gemma 4 now supports audio input, are there any plans for the Ollama team to implement this feature in the near future?


@Vinay-Umrethe commented on GitHub (Apr 10, 2026):

I'd advise moving to `llama.cpp` (which is also the core of Ollama), specifically `llama-server`: it provides a WebUI and supports direct audio, image, and PDF inputs if the model supports them...


@IvanAliaga commented on GitHub (Apr 13, 2026):

While a proper `audio` field in the API would be the right long-term fix, I got a working workaround using the `images` field that might be useful in the meantime.

Ollama identifies audio by checking for RIFF/WAVE magic bytes (a sketch of that check follows the list), so passing a properly encoded WAV through `images` works. The key requirements:

- Must be WAV with a full RIFF header (raw PCM fails silently)
- 16kHz mono
- `num_ctx` capped at 8192 to avoid memory overflow with audio embeddings
- Audio in the `images` field, placed before the text prompt
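
The RIFF/WAVE check described above amounts to something like this (a generic sketch of the container layout, not Ollama's actual detection code):

```go
package audio

// IsWAV reports whether data begins with a RIFF/WAVE header: bytes
// 0-3 spell "RIFF" and bytes 8-11 spell "WAVE". Raw PCM lacks this
// header, which is why it fails silently.
func IsWAV(data []byte) bool {
	return len(data) >= 12 &&
		string(data[0:4]) == "RIFF" &&
		string(data[8:12]) == "WAVE"
}
```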

The conversion pipeline from browser recording:

```
browser (WebM/Ogg) → ffmpeg → WAV 16kHz mono → base64 → images[]
```
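
A minimal Go sketch automating the same pipeline, assuming `ffmpeg` is on PATH and Ollama is listening on the default port; the file names and model name are illustrative:

```go
package main

import (
	"bytes"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"os/exec"
)

func main() {
	// Browser recording (WebM/Ogg) → 16 kHz mono WAV with a full RIFF header.
	if err := exec.Command("ffmpeg", "-y", "-i", "recording.webm",
		"-ar", "16000", "-ac", "1", "out.wav").Run(); err != nil {
		log.Fatal(err)
	}

	wav, err := os.ReadFile("out.wav")
	if err != nil {
		log.Fatal(err)
	}

	// Ship the WAV through the existing images field.
	body, _ := json.Marshal(map[string]any{
		"model":   "gemma4", // illustrative model name
		"prompt":  "Transcribe this audio.",
		"images":  []string{base64.StdEncoding.EncodeToString(wav)},
		"stream":  false,
		"options": map[string]any{"num_ctx": 8192},
	})

	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```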

I built a small Zig proxy that automates this whole pipeline including retry logic for the intermittent crashes (see #15333): https://github.com/IvanAliaga/gemma4visualizer

Hopefully still useful until the API gets a native audio field.


@s-sergio commented on GitHub (May 2, 2026):

Adding my support for this feature request.


Reference: github-starred/ollama#69885