New Insights on Target Speaker Extraction

Speaker extraction (SE) aims to segregate the speech of a target speaker from a mixture of interfering speakers with the help of auxiliary information. Several forms of auxiliary information have been employed in single-channel SE, such as a speech snippet enrolled from the target speaker or visual information corresponding to the spoken utterance. The effectiveness of the auxiliary information in SE is typically evaluated by comparing the extraction performance of SE with that of uninformed speaker separation (SS) methods. Following this evaluation protocol, many SE studies have reported performance improvements over SS and attributed them to the auxiliary information. However, such studies have been conducted on only a few datasets and have not considered recent deep neural network architectures for SS that have shown impressive separation performance. In this paper, we examine the role of the auxiliary information in SE for different input scenarios and over multiple datasets. Specifically, we compare the performance of two SE systems (audio-based and video-based) with SS using a common framework that utilizes the recently proposed dual-path recurrent neural network as the main learning machine. Experimental evaluation on various datasets demonstrates that the use of auxiliary information in the considered SE systems does not always lead to better extraction performance compared to the uninformed SS system. Furthermore, we offer insights into the behavior of the SE systems when provided with different and distorted auxiliary information for the same mixture input.
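The common framework mentioned above builds on the dual-path recurrent neural network (DPRNN), whose core idea is to split a long feature sequence into overlapping chunks and alternate processing along the intra-chunk (within a chunk) and inter-chunk (across chunks) axes. The sketch below illustrates only that segmentation and overlap-add machinery with identity placeholders standing in for the actual intra- and inter-chunk RNNs; all function names, shapes, and chunk sizes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def segment(x, chunk_len, hop):
    """Split a [T, F] feature sequence into overlapping [num_chunks, chunk_len, F] chunks."""
    T, F = x.shape
    num_chunks = int(np.ceil(max(T - chunk_len, 0) / hop)) + 1
    pad = (num_chunks - 1) * hop + chunk_len - T  # zero-pad so chunks tile the sequence
    xp = np.pad(x, ((0, pad), (0, 0)))
    return np.stack([xp[i * hop: i * hop + chunk_len] for i in range(num_chunks)])

def overlap_add(chunks, hop, T):
    """Invert segment(): overlap-add the chunks and average the overlapping regions."""
    num_chunks, chunk_len, F = chunks.shape
    out = np.zeros(((num_chunks - 1) * hop + chunk_len, F))
    norm = np.zeros_like(out)
    for i, c in enumerate(chunks):
        out[i * hop: i * hop + chunk_len] += c
        norm[i * hop: i * hop + chunk_len] += 1.0
    return (out / norm)[:T]

# One dual-path pass: a transform along the intra-chunk axis, then one along the
# inter-chunk axis. Identity stand-ins here; a real DPRNN uses RNNs on each axis.
x = np.random.randn(100, 8)                  # [T=100 frames, F=8 features]
chunks = segment(x, chunk_len=20, hop=10)    # [num_chunks, chunk_len, F]
intra = chunks                               # stand-in for the intra-chunk RNN
inter = intra.transpose(1, 0, 2)             # view the chunk (inter) axis as the sequence axis
inter = inter.transpose(1, 0, 2)             # stand-in for the inter-chunk RNN
y = overlap_add(inter, hop=10, T=100)        # back to [T, F]
```

With identity transforms the round trip reconstructs the input exactly, which is a convenient sanity check for the segmentation scheme; the paper's SE variants would additionally condition this backbone on the audio or visual auxiliary information.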
