Improving Noise Robustness In Speaker Identification Using A Two-Stage Attention Model

24 Sep 2019  ·  Yanpei Shi, Qiang Huang, Thomas Hain ·

While the use of deep neural networks has significantly boosted speaker recognition performance, it remains challenging to distinguish speakers in poor acoustic environments. To improve the noise robustness of speaker recognition systems, a novel two-stage attention mechanism is proposed that can be used in existing architectures such as Time Delay Neural Networks (TDNNs) and Convolutional Neural Networks (CNNs). Noise is known to often mask important information in both the time and frequency domains; the proposed mechanism allows the models to concentrate on reliable time/frequency components of the signal. The proposed approach is evaluated on the VoxCeleb1 dataset, which is designed to assess speaker recognition in real-world situations. In addition, three types of noise at different signal-to-noise ratios (SNRs) were added for this work. The proposed mechanism is compared with three strong baselines: X-vector, Attentive X-vector, and ResNet-34. Results on both identification and verification tasks show that the two-stage attention mechanism consistently improves upon all three baselines in all noise conditions.
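The abstract describes attention applied in two stages, first over frequency components and then over time, so that unreliable (noise-masked) regions are down-weighted before pooling. The paper's actual mechanism is built into TDNN/CNN architectures with learned parameters; the following is only an illustrative NumPy sketch of the general idea (the function names and the parameter-free scoring are assumptions, not the authors' implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def two_stage_attention(feats):
    """Toy two-stage attention over a (T, F) time-frequency feature map.

    Stage 1 re-weights frequency bins within each frame; stage 2 scores
    whole frames and pools them into one utterance-level vector. In the
    paper the attention scores come from learned layers; here we score
    directly from feature magnitude purely for illustration.
    """
    T, F = feats.shape

    # Stage 1: frequency attention -- per frame, weight the F bins so that
    # low-scoring (hypothetically noise-masked) bins contribute less.
    freq_weights = softmax(feats, axis=1)            # (T, F), each row sums to 1
    reweighted = feats * freq_weights * F            # rescale to preserve magnitude

    # Stage 2: temporal attention -- score each frame by its reweighted
    # content, then pool frames into a single embedding-like vector.
    frame_scores = reweighted.sum(axis=1)            # (T,)
    time_weights = softmax(frame_scores, axis=0)     # (T,), sums to 1
    embedding = (time_weights[:, None] * reweighted).sum(axis=0)  # (F,)
    return embedding, freq_weights, time_weights

# Example: 100 frames of 40-dimensional filterbank-like features.
rng = np.random.default_rng(0)
emb, fw, tw = two_stage_attention(rng.standard_normal((100, 40)))
```

In a trained model the two attention stages would be parameterized (e.g. small projection layers producing the scores) and optimized jointly with the speaker classifier, rather than derived from raw magnitudes as above.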
