Weighted Speech Distortion Losses for Neural-network-based Real-time Speech Enhancement

IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2020  ·  Yangyang Xia, Sebastian Braun, Chandan K. A. Reddy, Harishchandra Dubey, Ross Cutler, Ivan Tashev

This paper investigates several aspects of training an RNN (recurrent neural network) that impact the objective and subjective quality of enhanced speech for real-time single-channel speech enhancement. Specifically, we focus on an RNN that enhances short-time speech spectra on a single-frame-in, single-frame-out basis, a framework adopted by most classical signal-processing methods. We propose two novel mean-squared-error-based learning objectives that enable separate control over the importance of speech distortion versus noise reduction. The proposed loss functions are evaluated with widely accepted objective quality and intelligibility measures and compared to other competitive online methods. In addition, we study the impact of feature normalization and varying batch sequence lengths on the objective quality of enhanced speech. Finally, we present subjective ratings for the proposed approach and a state-of-the-art real-time RNN-based method.
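The core idea of separating speech distortion from noise reduction can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact formulation: it assumes a multiplicative spectral mask, magnitude spectra for the speech and noise components, and a single weight `alpha` (the "Proposed (0.35)" entry in the results table suggests a weight of 0.35) trading the speech-distortion term against the residual-noise term.

```python
import numpy as np

def weighted_sd_loss(mask, speech, noise, alpha=0.35):
    """Illustrative weighted speech-distortion loss (a sketch, not the
    paper's exact objective).

    The enhanced spectrum is mask * (speech + noise). Splitting the MSE
    into a speech-distortion term (how much the mask attenuates clean
    speech) and a residual-noise term (how much noise passes through)
    lets `alpha` control the trade-off between the two.
    """
    speech_distortion = np.mean((mask * speech - speech) ** 2)
    residual_noise = np.mean((mask * noise) ** 2)
    return alpha * speech_distortion + (1.0 - alpha) * residual_noise

# An all-pass mask incurs no speech distortion but keeps all the noise;
# an all-reject mask removes all noise but maximally distorts speech.
mask_pass = np.ones(4)
mask_reject = np.zeros(4)
speech = np.ones(4)
noise = np.full(4, 2.0)
loss_pass = weighted_sd_loss(mask_pass, speech, noise)    # (1-alpha)*mean(noise^2)
loss_reject = weighted_sd_loss(mask_reject, speech, noise)  # alpha*mean(speech^2)
```

Raising `alpha` penalizes speech distortion more heavily, biasing the trained mask toward preserving speech at the cost of leaving more residual noise.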


Datasets


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Speech Enhancement | Deep Noise Suppression (DNS) Challenge | Proposed (0.35) | PESQ-NB | 2.65 | # 8 |
| Speech Enhancement | Deep Noise Suppression (DNS) Challenge | Proposed (0.35) | PESQ-WB | 2.65 | # 13 |

Methods


No methods listed for this paper.