Search Results for author: Junyi Peng

Found 6 papers, 0 papers with code

Target Speech Extraction with Pre-trained Self-supervised Learning Models

no code implementations • 17 Feb 2024 • Junyi Peng, Marc Delcroix, Tsubasa Ochiai, Oldřich Plchot, Shoko Araki, Jan Černocký

We then extend a powerful TSE architecture by incorporating two SSL-based modules: an Adaptive Input Enhancer (AIE) and a speaker encoder.

Self-Supervised Learning · Speech Extraction

Probing Self-supervised Learning Models with Target Speech Extraction

no code implementations • 17 Feb 2024 • Junyi Peng, Marc Delcroix, Tsubasa Ochiai, Oldřich Plchot, Takanori Ashihara, Shoko Araki, Jan Černocký

TSE uniquely requires both speaker identification and speech separation, distinguishing it from other tasks in the Speech processing Universal PERformance Benchmark (SUPERB) evaluation.

Self-Supervised Learning · Speaker Identification · +2

Improving Speaker Verification with Self-Pretrained Transformer Models

no code implementations • 17 May 2023 • Junyi Peng, Oldřich Plchot, Themos Stafylakis, Ladislav Mošner, Lukáš Burget, Jan Černocký

Recently, fine-tuning large pre-trained Transformer models on downstream datasets has attracted increasing interest.

Speaker Verification

Probing Deep Speaker Embeddings for Speaker-related Tasks

no code implementations • 14 Dec 2022 • Zifeng Zhao, Ding Pan, Junyi Peng, Rongzhi Gu

Results show that all deep embeddings encode channel and content information in addition to speaker identity, but to varying extents, and their performance on speaker-related tasks can differ tremendously: ECAPA-TDNN dominates the discriminative tasks, d-vector leads the guiding tasks, and the regulating task is less sensitive to the choice of speaker representation.

Speaker Recognition · Speaker Verification

Parameter-efficient transfer learning of pre-trained Transformer models for speaker verification using adapters

no code implementations • 28 Oct 2022 • Junyi Peng, Themos Stafylakis, Rongzhi Gu, Oldřich Plchot, Ladislav Mošner, Lukáš Burget, Jan Černocký

Recently, pre-trained Transformer models have attracted increasing interest in the field of speech processing thanks to their success on various downstream tasks.

Speaker Verification · Transfer Learning
