Self-supervised Reflective Learning through Self-distillation and Online Clustering for Speaker Representation Learning

3 Jan 2024 · Danwei Cai, Zexin Cai, Ming Li

Speaker representation learning is critical for modern voice recognition systems. While supervised learning techniques require extensive labeled data, unsupervised methodologies can leverage vast unlabeled corpora, offering a scalable solution. This paper introduces self-supervised reflective learning (SSRL), a novel paradigm that streamlines existing iterative unsupervised frameworks. SSRL integrates self-supervised knowledge distillation with online clustering to refine pseudo labels and train the model without iterative bottlenecks. Specifically, a teacher model continually refines pseudo labels through online clustering, providing dynamic supervision signals to train the student model. The student model undergoes noisy student training with input and model noise to boost its modeling capacity. The teacher model is updated via an exponential moving average of the student, acting as an ensemble of past iterations. Further, a pseudo label queue retains historical labels for consistency, and noisy label modeling directs learning towards clean samples. Experiments on VoxCeleb show SSRL's superiority over current iterative approaches, surpassing the performance of a 5-round method in just a single training round. Ablation studies validate the contributions of key components like noisy label modeling and pseudo label queues. Moreover, consistent improvements in pseudo labeling and the convergence of cluster counts demonstrate SSRL's effectiveness in deciphering unlabeled data. This work marks an important advancement in efficient and accurate speaker representation learning through the novel reflective learning paradigm.
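The abstract describes a teacher-student loop: the teacher assigns pseudo labels via online clustering, the student is trained on noisy inputs against those labels, and the teacher is updated as an exponential moving average (EMA) of the student. Below is a minimal, illustrative sketch of that loop. All names (SpeakerEncoder, online_cluster, add_noise), dimensions, and hyperparameters are assumptions for illustration, not the authors' implementation; the pseudo label queue and noisy label modeling are omitted for brevity.

```python
# Illustrative sketch of the SSRL-style teacher-student loop (not the authors' code).
import copy
import torch
import torch.nn.functional as F


class SpeakerEncoder(torch.nn.Module):
    """Placeholder speaker embedding network (the paper likely uses a stronger
    ResNet/ECAPA-style encoder; this stands in for illustration only)."""

    def __init__(self, feat_dim=80, emb_dim=256, num_clusters=5000):
        super().__init__()
        self.backbone = torch.nn.Sequential(
            torch.nn.Linear(feat_dim, emb_dim), torch.nn.ReLU(),
            torch.nn.Linear(emb_dim, emb_dim),
        )
        self.classifier = torch.nn.Linear(emb_dim, num_clusters)

    def forward(self, x):
        emb = self.backbone(x.mean(dim=1))  # crude average pooling over time frames
        return emb, self.classifier(emb)


def ema_update(teacher, student, momentum=0.999):
    """Teacher weights track an exponential moving average of the student."""
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)


def online_cluster(embeddings, centroids):
    """Assign each teacher embedding to its nearest centroid (a stand-in for the
    paper's online clustering step); returns pseudo labels."""
    sims = F.normalize(embeddings, dim=-1) @ F.normalize(centroids, dim=-1).T
    return sims.argmax(dim=-1)


def add_noise(feats):
    """Input noise for noisy student training (real augmentation is only sketched)."""
    return feats + 0.05 * torch.randn_like(feats)


# --- one illustrative training step ---------------------------------------
student = SpeakerEncoder()
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)

centroids = torch.randn(5000, 256)            # assumed cluster centroids
optimizer = torch.optim.SGD(student.parameters(), lr=0.1)

batch = torch.randn(8, 200, 80)               # 8 utterances, 200 frames, 80-dim features

with torch.no_grad():
    t_emb, _ = teacher(batch)                 # clean input to the teacher
    pseudo_labels = online_cluster(t_emb, centroids)

_, logits = student(add_noise(batch))         # noisy input to the student
loss = F.cross_entropy(logits, pseudo_labels) # noisy-label modeling omitted here

optimizer.zero_grad()
loss.backward()
optimizer.step()
ema_update(teacher, student)                  # teacher acts as an ensemble of past students
```

In a full training run, the clustering step would also refresh the centroids and the pseudo label queue would retain past assignments for consistency, as the abstract notes.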
