Fully Supervised Speaker Diarization

10 Oct 2018 · Aonan Zhang, Quan Wang, Zhenyao Zhu, John Paisley, Chong Wang

In this paper, we propose a fully supervised speaker diarization approach, named unbounded interleaved-state recurrent neural networks (UIS-RNN). Given extracted speaker-discriminative embeddings (a.k.a. d-vectors) from input utterances, each individual speaker is modeled by a parameter-sharing RNN, while the RNN states for different speakers interleave in the time domain. This RNN is naturally integrated with a distance-dependent Chinese restaurant process (ddCRP) to accommodate an unknown number of speakers. Our system is fully supervised and is able to learn from examples where time-stamped speaker labels are annotated. We achieved a 7.6% diarization error rate on NIST SRE 2000 CALLHOME, which is better than the state-of-the-art method using spectral clustering. Moreover, our method decodes in an online fashion while most state-of-the-art systems rely on offline clustering.
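
The abstract describes two ideas worth making concrete: a single parameter-sharing RNN whose per-speaker states interleave over time, and a Chinese-restaurant-process-style prior that lets the number of speakers grow with the data. Below is a minimal, illustrative sketch of greedy online decoding under those assumptions; it is not the authors' implementation (their open-source code is at github.com/google/uis-rnn). The dimensions, the `observation_head` scoring layer, the `ALPHA` concentration, and the function names are all invented here for illustration.

```python
# Minimal sketch of the interleaved-state idea behind UIS-RNN (illustrative only).
# Assumptions: one shared GRUCell models every speaker; each speaker keeps its own
# hidden state; a simple CRP-style prior (ALPHA) governs opening a new speaker.
import torch
import torch.nn as nn

D_VECTOR_DIM = 256   # assumed d-vector dimensionality
HIDDEN_DIM = 512     # assumed RNN state size
ALPHA = 1.0          # assumed CRP concentration: higher -> more new speakers

shared_rnn = nn.GRUCell(D_VECTOR_DIM, HIDDEN_DIM)        # parameters shared by all speakers
observation_head = nn.Linear(HIDDEN_DIM, D_VECTOR_DIM)   # predicts the next d-vector mean

def log_likelihood(state, x):
    """Gaussian-style score of d-vector x under a speaker's current RNN state."""
    mean = observation_head(state)
    return -0.5 * torch.sum((x - mean) ** 2)

def online_decode(d_vectors):
    """Greedy online assignment: for each segment, pick an existing speaker
    or open a new one, then advance only that speaker's RNN state."""
    states, counts, labels = [], [], []
    for x in d_vectors:
        scores = []
        # Existing speakers: data likelihood + CRP prior proportional to their counts.
        for state, count in zip(states, counts):
            scores.append(log_likelihood(state, x) + torch.log(torch.tensor(float(count))))
        # New speaker: likelihood from a fresh (zero) state + concentration prior.
        zero_state = torch.zeros(HIDDEN_DIM)
        scores.append(log_likelihood(zero_state, x) + torch.log(torch.tensor(ALPHA)))
        k = int(torch.argmax(torch.stack(scores)))
        if k == len(states):                              # open a new speaker
            states.append(zero_state)
            counts.append(0)
        # Interleaving: only speaker k's state is updated at this time step.
        states[k] = shared_rnn(x.unsqueeze(0), states[k].unsqueeze(0)).squeeze(0)
        counts[k] += 1
        labels.append(k)
    return labels

# Toy usage with random "d-vectors" standing in for real speaker embeddings.
segments = torch.randn(10, D_VECTOR_DIM)
print(online_decode(segments))
```

Because each segment only advances one speaker's state, decoding is sequential and online, which is the property the abstract contrasts with offline clustering systems.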

Task                  Dataset            Model     Metric   Value   Global Rank
Speaker Diarization   Hub5'00 CallHome   UIS-RNN   DER (%)  10.6    #1
