Multi-head Efficient Decoding for Transformer-based ASR

Yael Segal-Feldman, Aviv Shamsian, Aviv Navon, Gill Hetz, Joseph Keshet

aiOla Research

ICASSP 2025


Abstract

Large transformer-based models have significant potential for speech transcription and translation. Their self-attention mechanisms and parallel processing enable them to capture complex patterns and dependencies in audio sequences. However, this potential comes with challenges, as these large and computationally intensive models lead to slow inference speeds. Various optimization strategies have been proposed to improve performance, including efficient hardware utilization and algorithmic enhancements. In this paper, we introduce Whisper-Medusa, a novel approach designed to enhance processing speed with minimal impact on Word Error Rate (WER). The proposed model extends OpenAI's Whisper architecture by predicting multiple tokens per iteration, resulting in a 50% reduction in latency. We showcase the effectiveness of Whisper-Medusa across different learning setups and datasets.
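To illustrate the multi-token idea, the sketch below shows how Medusa-style prediction heads could sit on top of a Whisper-like decoder: each extra head maps the decoder's hidden states to vocabulary logits for a token further ahead of the base head. This is a minimal sketch under assumptions, not the released implementation; the class name MedusaHeads, the residual block design, and the head count of 10 are illustrative choices.

```python
# Minimal sketch (not the authors' code): Medusa-style heads on top of a
# Whisper-like decoder. Head k predicts the token k+1 positions ahead of
# the base language-model head, so one decoder pass proposes several tokens.
import torch
import torch.nn as nn


class MedusaHeads(nn.Module):
    def __init__(self, d_model: int, vocab_size: int, num_heads: int = 10):
        super().__init__()
        # One small MLP + vocabulary projection per extra head (assumed design).
        self.heads = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_model),
                nn.SiLU(),
                nn.Linear(d_model, vocab_size, bias=False),
            )
            for _ in range(num_heads)
        )

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, d_model) decoder hidden states.
        # returns: (num_heads, batch, seq_len, vocab_size); head k's logits at
        # position t are a prediction for the token at position t + k + 1.
        return torch.stack([head(hidden) for head in self.heads], dim=0)
```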

Medusa Decoding in Action

Model Architecture

A visualization of Whisper-Medusa decoding. Left: decoding with Whisper-Medusa using 10 heads; right: vanilla Whisper. Whisper-Medusa predicts multiple future tokens in parallel using several decoding heads, allowing for efficient and fast transcription.
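As a rough, hedged sketch of how such parallel proposals can be turned into a transcript, the loop below drafts several future tokens from the extra heads and then keeps the longest prefix that the base head agrees with, so greedy outputs match vanilla decoding. The interface model(encoder_out, tokens) returning (base_logits, head_logits) is hypothetical, and the loop omits the tree-style verification and logit reuse that Medusa-family decoders typically use to avoid the separate verification pass shown here.

```python
# Simplified greedy multi-token decoding with Medusa-style heads (a sketch,
# not the released Whisper-Medusa decoder).
# Assumed hypothetical interface:
#   base_logits, head_logits = model(encoder_out, tokens)
#   base_logits: (batch, seq_len, vocab); head_logits: (K, batch, seq_len, vocab)
import torch


@torch.no_grad()
def medusa_greedy_decode(model, encoder_out, bos_ids, eos_id, max_len=448):
    tokens = bos_ids  # (1, t0) decoder prefix (language/task tokens, etc.)
    while tokens.shape[1] < max_len:
        # One decoder pass: the base head gives the exact next token, and the
        # K extra heads draft the tokens after it from the last position.
        base_logits, head_logits = model(encoder_out, tokens)
        next_tok = base_logits[:, -1:].argmax(-1)        # token t+1 (always kept)
        draft = head_logits[:, :, -1].argmax(-1).T       # drafted tokens t+2..t+1+K
        candidate = torch.cat([tokens, next_tok, draft], dim=1)

        # Verify the draft: score the candidate once and accept the longest
        # prefix of drafted tokens that the base head would also have chosen.
        ver_logits, _ = model(encoder_out, candidate)
        preds = ver_logits[:, tokens.shape[1]:-1].argmax(-1)
        drafted = candidate[:, tokens.shape[1] + 1:]
        agree = (preds == drafted).long().cumprod(dim=1).sum().item()
        tokens = candidate[:, : tokens.shape[1] + 1 + agree]

        if (tokens == eos_id).any():  # simplification: no trimming past EOS
            break
    return tokens
```

Because several tokens can be accepted per iteration while the accepted text stays identical to greedy decoding with the base head, the number of sequential decoder passes, and hence latency, drops.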