LiMuSE: Lightweight Multi-modal Speaker Extraction

7 Nov 2021  ·  Qinghua Liu, Yating Huang, Yunzhe Hao, Jiaming Xu, Bo Xu

Multi-modal cues, such as spatial information, facial expressions, and voiceprints, have been introduced into speech separation and speaker extraction tasks as complementary information to improve performance. However, introducing these cues increases the number of parameters and the model complexity, making such models harder to deploy on resource-constrained devices. In this paper, we alleviate this problem by proposing a Lightweight Multi-modal framework for Speaker Extraction (LiMuSE). LiMuSE employs a GC-equipped TCN, which combines Group Communication (GC) with a Temporal Convolutional Network (TCN), in the Context Codec module, the audio block, and the fusion block. Experiments on the MC_GRID dataset demonstrate that LiMuSE achieves on-par or better performance with far fewer parameters and lower model complexity. We further investigate the impact of quantizing LiMuSE. Our code and dataset are provided.
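The abstract does not spell out how GC and the TCN are combined, so the following is a minimal PyTorch sketch of one plausible GC-equipped TCN block: the feature channels are split into groups, a small shared module exchanges information across groups (a transform-average-concatenate style exchange), and a shared dilated depthwise TCN unit then processes each group independently. All module names, layer sizes, and the specific communication scheme are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class GroupComm(nn.Module):
    """Group Communication (GC): split channels into groups and let a small
    shared module exchange information across the groups (assumed
    transform-average-concatenate style exchange, not the paper's exact design)."""

    def __init__(self, channels, num_groups):
        super().__init__()
        assert channels % num_groups == 0
        self.num_groups = num_groups
        self.group_size = channels // num_groups
        self.transform = nn.Sequential(nn.Linear(self.group_size, self.group_size), nn.PReLU())
        self.merge = nn.Sequential(nn.Linear(self.group_size * 2, self.group_size), nn.PReLU())

    def forward(self, x):
        # x: (batch, channels, time)
        b, c, t = x.shape
        g = x.view(b, self.num_groups, self.group_size, t).permute(0, 1, 3, 2)  # (B, G, T, gs)
        h = self.transform(g)                                # per-group transform
        avg = h.mean(dim=1, keepdim=True).expand_as(h)       # average across groups
        out = self.merge(torch.cat([h, avg], dim=-1))        # concatenate and merge
        out = out.permute(0, 1, 3, 2).reshape(b, c, t)
        return x + out                                       # residual connection


class DepthwiseTCNBlock(nn.Module):
    """A single dilated depthwise-separable 1-D conv block: the small TCN unit
    that GC allows to be shared across all groups."""

    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        pad = (kernel_size - 1) * dilation // 2
        self.net = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size,
                      padding=pad, dilation=dilation, groups=channels),
            nn.PReLU(),
            nn.Conv1d(channels, channels, 1),
        )

    def forward(self, x):
        return x + self.net(x)


class GCTCN(nn.Module):
    """GC-equipped TCN: alternate group communication with a shared small TCN
    block applied to every group in parallel."""

    def __init__(self, channels=128, num_groups=4, num_layers=4):
        super().__init__()
        self.num_groups = num_groups
        self.group_size = channels // num_groups
        self.gc_layers = nn.ModuleList(GroupComm(channels, num_groups) for _ in range(num_layers))
        self.tcn_layers = nn.ModuleList(
            DepthwiseTCNBlock(self.group_size, dilation=2 ** i) for i in range(num_layers))

    def forward(self, x):
        b, c, t = x.shape
        for gc, tcn in zip(self.gc_layers, self.tcn_layers):
            x = gc(x)
            # fold the groups into the batch dimension so one small TCN
            # processes all groups with shared weights
            x = x.view(b * self.num_groups, self.group_size, t)
            x = tcn(x)
            x = x.view(b, c, t)
        return x


if __name__ == "__main__":
    feats = torch.randn(2, 128, 200)     # (batch, channels, frames)
    model = GCTCN(channels=128, num_groups=4, num_layers=4)
    print(model(feats).shape)            # torch.Size([2, 128, 200])
```

The parameter saving comes from the TCN operating on group-sized feature slices with shared weights: its width scales with channels divided by the number of groups, while the lightweight communication module preserves information flow across groups.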


Datasets


Introduced in the Paper:

MC_GRID

