
Group Communication with Context Codec for Lightweight Source Separation

Despite recent progress in neural network architectures for speech separation, balancing model size, complexity, and performance remains an important and challenging problem for deploying such models on low-resource platforms. In this paper, we propose two simple modules, group communication and context codec, that can be applied to a wide range of architectures to jointly decrease model size and complexity without sacrificing performance. A group communication module splits a high-dimensional feature into groups of low-dimensional features and captures the inter-group dependencies. A separation module with a significantly smaller model size can then be shared across all groups. A context codec module, containing a context encoder and a context decoder, is designed as a learnable downsampling and upsampling pair that decreases the length of the sequential feature processed by the separation module. The combination of the group communication and context codec modules is referred to as the GC3 design. Experimental results show that applying GC3 to multiple speech separation architectures achieves on-par or better performance with as little as 2.5% of the original model size and 17.6% of the original model complexity.
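To make the group communication idea concrete, here is a minimal PyTorch-style sketch, assuming an LSTM running across the group axis as the inter-group module; the class and parameter names are hypothetical, and this is an illustration rather than the authors' implementation:

```python
import torch
import torch.nn as nn

class GroupComm(nn.Module):
    """Sketch of group communication: split an N-dim feature into K
    groups of N // K dims and model the dependency across groups."""
    def __init__(self, feature_dim: int, num_groups: int):
        super().__init__()
        assert feature_dim % num_groups == 0
        self.num_groups = num_groups
        self.group_dim = feature_dim // num_groups
        # Small module applied across the group axis; an LSTM is one
        # plausible choice (an assumption, not necessarily the paper's).
        self.inter_group = nn.LSTM(self.group_dim, self.group_dim,
                                   batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, feature_dim)
        B, T, N = x.shape
        g = x.reshape(B * T, self.num_groups, self.group_dim)
        g, _ = self.inter_group(g)   # communicate across the K groups
        return g.reshape(B, T, N)
```

A separation module sized for `group_dim` inputs, rather than for the full feature dimension, can then be applied to every group with shared weights, which is where the reduction in model size comes from.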
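The context codec can be sketched in the same spirit: a learnable downsampler compresses every block of `context` frames into one summary frame before the separator, and a matching upsampler restores the original length afterwards. The strided convolution / transposed-convolution pair below is an assumed stand-in for whatever learnable codec is used in practice; all names are hypothetical:

```python
import torch
import torch.nn as nn

class ContextCodec(nn.Module):
    """Sketch of a context codec: encode every `context` frames into a
    single summary frame, run a separator on the shortened sequence,
    then decode back to the original length."""
    def __init__(self, dim: int, context: int, separator: nn.Module):
        super().__init__()
        # Strided conv as the context encoder (assumption: the design
        # only requires a learnable downsampler, not this exact choice).
        self.encode = nn.Conv1d(dim, dim, kernel_size=context,
                                stride=context)
        self.separator = separator   # operates on the short sequence
        self.decode = nn.ConvTranspose1d(dim, dim, kernel_size=context,
                                         stride=context)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim, time); time assumed divisible by `context`
        h = self.encode(x)       # (batch, dim, time // context)
        h = self.separator(h)    # far fewer steps for the separator
        return self.decode(h)    # back to (batch, dim, time)

# Quick shape check with an identity "separator":
codec = ContextCodec(dim=64, context=16, separator=nn.Identity())
out = codec(torch.randn(2, 64, 320))   # -> torch.Size([2, 64, 320])
```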
