Deep Residual-Dense Lattice Network for Speech Enhancement

27 Feb 2020 · Mohammad Nikzad, Aaron Nicolson, Yongsheng Gao, Jun Zhou, Kuldip K. Paliwal, Fanhua Shang

Convolutional neural networks (CNNs) with residual links (ResNets) and causal dilated convolutional units have been the network of choice for deep learning approaches to speech enhancement. While residual links improve gradient flow during training, feature diminution of shallow layer outputs can occur due to repetitive summations with deeper layer outputs. One strategy to improve feature re-usage is to fuse both ResNets and densely connected CNNs (DenseNets). DenseNets, however, over-allocate parameters for feature re-usage. Motivated by this, we propose the residual-dense lattice network (RDL-Net), which is a new CNN for speech enhancement that employs both residual and dense aggregations without over-allocating parameters for feature re-usage. This is managed through the topology of the RDL blocks, which limit the number of outputs used for dense aggregations. Our extensive experimental investigation shows that RDL-Nets are able to achieve a higher speech enhancement performance than CNNs that employ residual and/or dense aggregations. RDL-Nets also use substantially fewer parameters and have a lower computational requirement. Furthermore, we demonstrate that RDL-Nets outperform many state-of-the-art deep learning approaches to speech enhancement.
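
As a rough illustration of the idea described in the abstract, the sketch below combines a residual connection around a block with dense aggregations that are limited to a fixed number of preceding outputs. It is a minimal sketch only, not the paper's actual RDL block or lattice topology: the class name LimitedDenseResidualBlock and the depth and max_dense parameters are hypothetical, and plain same-padded 1-D convolutions stand in for the causal dilated units mentioned in the abstract.

```python
import torch
import torch.nn as nn


class LimitedDenseResidualBlock(nn.Module):
    """Illustrative only: each layer concatenates at most `max_dense` of the
    preceding outputs (limited dense aggregation), and a residual connection
    spans the whole block."""

    def __init__(self, channels, depth=4, max_dense=2, kernel_size=3):
        super().__init__()
        self.max_dense = max_dense
        self.layers = nn.ModuleList()
        for i in range(depth):
            # Input width grows only until `max_dense` outputs are retained,
            # unlike a DenseNet block, where it grows with every layer.
            in_channels = channels * min(i + 1, max_dense)
            self.layers.append(nn.Sequential(
                nn.Conv1d(in_channels, channels, kernel_size,
                          padding=kernel_size // 2),
                nn.BatchNorm1d(channels),
                nn.ReLU(),
            ))

    def forward(self, x):
        outputs = [x]  # x: (batch, channels, time)
        for layer in self.layers:
            # Dense aggregation limited to the most recent outputs.
            dense_input = torch.cat(outputs[-self.max_dense:], dim=1)
            outputs.append(layer(dense_input))
        # Residual aggregation: block input added to the final layer output.
        return x + outputs[-1]


# Example usage on a dummy feature map (batch=2, channels=16, frames=100).
block = LimitedDenseResidualBlock(channels=16)
y = block(torch.randn(2, 16, 100))
print(y.shape)  # torch.Size([2, 16, 100])
```

The point of the min(i + 1, max_dense) cap is that each layer's input width stops growing once max_dense outputs are retained, which is where the saving over a DenseNet-style block comes from: feature re-use is kept without allocating parameters for every earlier output.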

Datasets

VoiceBank + DEMAND

Results

Task: Speech Enhancement · Benchmark: VoiceBank + DEMAND (global rank in parentheses)

Model                                 PESQ          CSIG          CBAK          COVL
RDL-Net 3.91M (Deep Xi - MMSE-LSA)    3.02 (# 16)   4.38 (# 10)   3.43 (# 10)   3.72 (# 12)
RDL-Net 1.87M (Deep Xi - MMSE-LSA)    2.93 (# 19)   4.29 (# 13)   3.32 (# 12)   3.62 (# 17)
RDL-Net 3.91M (Deep Xi - SRWF)        2.94 (# 18)   4.36 (# 11)   3.35 (# 11)   3.67 (# 13)
RDL-Net 1.87M (Deep Xi - SRWF)        2.84 (# 22)   4.27 (# 14)   3.23 (# 15)   3.56 (# 18)
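
The model labels in the table refer to the gain function used within the Deep Xi framework: either the MMSE log-spectral amplitude (MMSE-LSA) estimator or the square-root Wiener filter (SRWF). For reference, below is a minimal sketch of these two standard gain functions, assuming estimates of the a priori SNR xi and the a posteriori SNR gamma are already available; the function names are illustrative, and how Deep Xi produces the SNR estimates is not shown.

```python
import numpy as np
from scipy.special import exp1  # exponential integral E1


def srwf_gain(xi):
    """Square-root Wiener filter gain from the a priori SNR `xi`."""
    return np.sqrt(xi / (1.0 + xi))


def mmse_lsa_gain(xi, gamma):
    """Ephraim-Malah MMSE log-spectral amplitude gain from the a priori
    SNR `xi` and the a posteriori SNR `gamma`."""
    nu = (xi / (1.0 + xi)) * gamma
    return (xi / (1.0 + xi)) * np.exp(0.5 * exp1(nu))


# Either gain is applied elementwise to the noisy magnitude spectrum,
# e.g. enhanced_mag = mmse_lsa_gain(xi, gamma) * noisy_mag.
```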

Methods


No methods listed for this paper.