Perceptual Contrast Stretching on Target Feature for Speech Enhancement

31 Mar 2022 · Rong Chao, Cheng Yu, Szu-Wei Fu, Xugang Lu, Yu Tsao

Speech enhancement (SE) performance has improved considerably owing to the use of deep learning models as a base function. Herein, we propose a perceptual contrast stretching (PCS) approach to further improve SE performance. PCS is derived from the critical band importance function and is applied to modify the training targets of the SE model. Specifically, the contrast of the target features is stretched based on perceptual importance, thereby improving overall SE performance. Compared with post-processing-based implementations, incorporating PCS into the training phase preserves performance while reducing online computation. Notably, PCS can be combined with different SE model architectures and training criteria. Furthermore, PCS does not affect the causality or convergence of SE model training. Experimental results on the VoiceBank-DEMAND dataset show that the proposed method achieves state-of-the-art performance on both causal (PESQ score = 3.07) and noncausal (PESQ score = 3.35) SE tasks.
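
The abstract does not spell out the implementation, but one common way to realize PCS is to apply band-wise gains, derived from a critical band importance function, to a dynamic-range-compressed magnitude spectrogram of the training target. The sketch below illustrates that idea only; the function name `pcs_target`, the `log1p` compression, and the `BAND_WEIGHTS` values are illustrative assumptions, not the paper's exact recipe or coefficients.

```python
import numpy as np

# Illustrative critical-band edges (Bark-like) and placeholder importance
# weights. These are assumptions for the sketch, not the paper's values.
BAND_EDGES_HZ = [0, 100, 200, 300, 400, 510, 630, 770, 920, 1080, 1270,
                 1480, 1720, 2000, 2320, 2700, 3150, 3700, 4400, 5300,
                 6400, 7700, 9500, 12000, 15500, 24000]
BAND_WEIGHTS = np.linspace(1.0, 1.4, len(BAND_EDGES_HZ) - 1)

def pcs_target(magnitude: np.ndarray, sample_rate: int = 16000) -> np.ndarray:
    """Stretch the contrast of a magnitude-spectrogram training target.

    magnitude: (freq_bins, frames) non-negative STFT magnitudes.
    Returns the compressed, band-wise stretched target.
    """
    freq_bins = magnitude.shape[0]
    freqs = np.linspace(0, sample_rate / 2, freq_bins)

    # Map each FFT bin to its critical band and look up the band weight.
    band_idx = np.clip(
        np.searchsorted(BAND_EDGES_HZ, freqs, side="right") - 1,
        0, len(BAND_WEIGHTS) - 1)
    weights = BAND_WEIGHTS[band_idx][:, None]  # (freq_bins, 1)

    # Compress the dynamic range, then stretch contrast band-wise:
    # perceptually more important bands receive larger gains.
    compressed = np.log1p(magnitude)
    return compressed * weights
```

In this reading, the stretched spectrogram replaces the original clean target during training, which is why no extra computation is needed at inference time, in line with the abstract's claim that training-phase PCS reduces online computation relative to a post-processing implementation.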


Datasets

VoiceBank + DEMAND
Task: Speech Enhancement
Dataset: VoiceBank + DEMAND
Model: PCS

Metric   Value   Global Rank
PESQ     3.35    #5
CSIG     4.43    #8
COVL     3.92    #4
STOI     95      #5
