Audio Super Resolution using Neural Networks

2 Aug 2017 · Volodymyr Kuleshov, S. Zayd Enam, Stefano Ermon

We introduce a new audio processing technique that increases the sampling rate of signals such as speech or music using deep convolutional neural networks. Our model is trained on pairs of low and high-quality audio examples; at test-time, it predicts missing samples within a low-resolution signal in an interpolation process similar to image super-resolution. Our method is simple and does not involve specialized audio processing techniques; in our experiments, it outperforms baselines on standard speech and music benchmarks at upscaling ratios of 2x, 4x, and 6x. The method has practical applications in telephony, compression, and text-to-speech generation; it demonstrates the effectiveness of feed-forward convolutional architectures on an audio generation task.
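To make the feed-forward setup concrete, below is a minimal PyTorch sketch of a 1-D convolutional audio super-resolution model: it interpolates the low-resolution waveform to the target length and predicts a residual correction. This is an illustration only, not the paper's actual U-Net; the layer counts, filter widths, and interpolation mode are placeholder choices.

```python
import torch
import torch.nn as nn

class AudioSRNet(nn.Module):
    """Minimal 1-D convolutional super-resolution sketch.

    NOTE: not the authors' U-Net architecture; layers and widths are
    placeholders chosen for illustration.
    """
    def __init__(self, channels=64, kernel=9):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(1, channels, kernel, padding=kernel // 2),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel, padding=kernel // 2),
            nn.ReLU(),
            nn.Conv1d(channels, 1, kernel, padding=kernel // 2),
        )

    def forward(self, low_res, scale=4):
        # Upsample the low-resolution waveform to the target length, then
        # predict a residual correction on top of the interpolation.
        x = nn.functional.interpolate(
            low_res, scale_factor=scale, mode="linear", align_corners=False
        )
        return x + self.body(x)

# Usage: inputs have shape (batch, 1, samples); training would fit the output
# to the matching high-resolution waveform, e.g. with an L2 loss.
model = AudioSRNet()
high_res_pred = model(torch.randn(8, 1, 2048), scale=4)
```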




Results from the Paper


Task                     Dataset                    Model   Metric                  Metric Value   Global Rank
Audio Super-Resolution   Piano                      U-Net   Log-Spectral Distance   3.4            #3
Audio Super-Resolution   VCTK Multi-Speaker         U-Net   Log-Spectral Distance   3.1            #7
Audio Super-Resolution   Voice Bank corpus (VCTK)   U-Net   Log-Spectral Distance   3.2            #3
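The metric reported above is log-spectral distance (LSD), computed between the reference high-resolution signal and the reconstruction. A common way to compute it is sketched below; frame length, hop size, and log base vary between papers, so this is an illustrative definition rather than the benchmark's exact evaluation script.

```python
import numpy as np
from scipy.signal import stft

def log_spectral_distance(reference, estimate, n_fft=2048, hop=512):
    """Generic log-spectral distance (LSD) between two waveforms.

    NOTE: STFT parameters and the log base are assumptions; the exact
    settings used by the benchmark may differ.
    """
    _, _, ref_spec = stft(reference, nperseg=n_fft, noverlap=n_fft - hop)
    _, _, est_spec = stft(estimate, nperseg=n_fft, noverlap=n_fft - hop)
    # Log power spectra; a small epsilon avoids log of zero.
    ref_log = np.log10(np.abs(ref_spec) ** 2 + 1e-10)
    est_log = np.log10(np.abs(est_spec) ** 2 + 1e-10)
    # Root-mean-square difference over frequency, averaged over frames.
    lsd_per_frame = np.sqrt(np.mean((ref_log - est_log) ** 2, axis=0))
    return float(np.mean(lsd_per_frame))
```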
