NVIDIA has announced the release of BigVGAN v2, a groundbreaking generative AI model for zero-shot waveform audio generation, according to the NVIDIA Technical Blog. The new model represents a significant improvement in speed and quality, establishing it as the state-of-the-art solution in audio generation AI.
BigVGAN: A universal neural vocoder
BigVGAN is a general-purpose neural vocoder designed to synthesize audio waveforms from Mel spectrograms. The model uses a fully convolutional architecture with multiple upsampling blocks and residual dilated convolution layers. Its main feature is an anti-aliased multi-periodicity composition (AMP) module, designed to generate the high-frequency and periodic components of sound waves while reducing artifacts.
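To make the vocoder's input concrete, here is a minimal, self-contained sketch of computing a log-Mel spectrogram from a raw waveform with NumPy. The parameters (1024-point FFT, 256-sample hop, 80 Mel bands) are typical values for neural vocoders, not necessarily BigVGAN's exact configuration, and the helper names are illustrative.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels, fmin=0.0, fmax=None):
    """Triangular filters that pool linear FFT bins into Mel bands."""
    fmax = fmax or sr / 2
    mel_pts = np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):
            fb[i, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i, k] = (r - k) / max(r - c, 1)
    return fb

def mel_spectrogram(x, sr=22050, n_fft=1024, hop=256, n_mels=80):
    """Log-Mel spectrogram: the representation a neural vocoder inverts."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    mel = power @ mel_filterbank(sr, n_fft, n_mels).T
    return np.log(np.clip(mel, 1e-5, None))

# One second of a 440 Hz tone as a toy input signal.
sr = 22050
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)
M = mel_spectrogram(x, sr)
print(M.shape)  # (83, 80): 83 frames, 80 Mel bands
```

A vocoder such as BigVGAN learns the inverse mapping: from a matrix like `M` back to the raw waveform `x`.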
Improvements in BigVGAN v2
BigVGAN v2 introduces several improvements over its predecessors.
- Cutting-edge audio quality across a variety of metrics and audio types.
- Up to 3x faster synthesis speed through optimized CUDA kernels.
- Pre-trained checkpoints for a variety of audio configurations.
- Support for sampling rates up to 44kHz, covering the highest frequencies that humans can hear.
Generate all the sounds in the world
Waveform audio generation is essential to virtual worlds and has been a major focus of research. BigVGAN v2 overcomes previous limitations by providing high-quality audio with improved fine details. Trained using NVIDIA A100 Tensor Core GPUs and a dataset 100x larger than its predecessor, BigVGAN v2 can generate high-quality sound waves in a variety of domains, including speech, environmental sounds, and music.
Reaching the highest frequency sound that the human ear can detect
Previous models were limited to sampling rates between 22kHz and 24kHz. BigVGAN v2 extends this range to 44kHz, capturing the entire human auditory spectrum. This allows the model to reproduce a wide range of soundscapes, from powerful drum hits to crisp cymbals.
Faster synthesis using custom CUDA kernels
BigVGAN v2 also provides accelerated synthesis speeds, achieving up to 3x faster inference than the original BigVGAN using custom CUDA kernels. These kernels enable audio waveform generation up to 240x faster than real-time on a single NVIDIA A100 GPU.
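A quick back-of-the-envelope calculation shows what a 240x real-time factor implies for throughput. The 44.1kHz rate here is the common CD-quality rate chosen for illustration, not a measured configuration.

```python
# Real-time factor (RTF): duration of generated audio per unit of compute time.
# "240x faster than real-time" means 1 second of GPU time yields 240 seconds
# of audio.
sr = 44_100        # output samples per second of audio (illustrative rate)
speedup = 240      # reported real-time factor on a single A100

samples_per_gpu_second = sr * speedup
audio_minutes_per_gpu_minute = speedup

print(samples_per_gpu_second)        # 10,584,000 samples per GPU-second
print(audio_minutes_per_gpu_minute)  # 240 minutes of audio per compute-minute
```

In other words, a single GPU can generate four hours of 44.1kHz audio in about a minute of compute at that rate.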
Audio quality results
BigVGAN v2 demonstrates superior audio quality for speech and general audio compared to previous models, and achieves similar results to Descript Audio Codec at 44kHz sampling rate. This demonstrates the model’s ability to generate high-quality waveforms for a wide range of audio types.
Conclusion
NVIDIA’s BigVGAN v2 sets a new standard for audio synthesis, achieving state-of-the-art quality across all audio types and covering the full range of human hearing. The model’s synthesis speed is now up to 3x faster, making it highly efficient for a wide range of audio configurations.
For more details, see the BigVGAN v2 model card on GitHub.