Search Results for author: Jesus Villalba

Found 5 papers, 2 papers with code

The JHU submission to VoxSRC-21: Track 3

no code implementations • 28 Sep 2021 • Jaejin Cho, Jesus Villalba, Najim Dehak

This technical report describes the Johns Hopkins University speaker recognition system submitted to the VoxCeleb Speaker Recognition Challenge 2021 (VoxSRC-21) Track 3: self-supervised speaker verification (closed).

Tags: Contrastive Learning, Speaker Recognition, +1
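The entry above is tagged Contrastive Learning; purely as an illustration of that family of objectives (not the authors' actual recipe), a generic NT-Xent contrastive loss over two augmented views of the same utterances' speaker embeddings could be sketched as follows, with the function name, tensor shapes, and temperature chosen here as assumptions.

```python
import torch
import torch.nn.functional as F

def ntxent_loss(emb_a, emb_b, temperature=0.1):
    """Generic NT-Xent contrastive loss: emb_a and emb_b are (N, D) speaker
    embeddings from two augmentations of the same N utterances."""
    n = emb_a.size(0)
    z = F.normalize(torch.cat([emb_a, emb_b], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / temperature                             # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                         # exclude self-similarity
    # The positive for view i is the other view of the same utterance (i +/- N).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)
```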

Adversarial Attacks and Defenses for Speech Recognition Systems

no code implementations • 31 Mar 2021 • Piotr Żelasko, Sonal Joshi, Yiwen Shao, Jesus Villalba, Jan Trmal, Najim Dehak, Sanjeev Khudanpur

We investigate two threat models: a denial-of-service scenario where fast gradient-sign method (FGSM) or weak projected gradient descent (PGD) attacks are used to degrade the model's word error rate (WER); and a targeted scenario where a more potent imperceptible attack forces the system to recognize a specific phrase.

Tags: Adversarial Robustness, Automatic Speech Recognition, +1
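As a rough sketch of the denial-of-service threat model described above, an FGSM perturbation of an input waveform against a CTC-based recognizer might look like the code below; the model interface, output shapes, and use of the CTC loss are assumptions for illustration, not the paper's actual attack pipeline.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, waveform, transcript_ids, epsilon=1e-3):
    """One FGSM step that increases the CTC loss w.r.t. the true transcript,
    degrading the recognizer's WER (untargeted, denial-of-service style)."""
    waveform = waveform.clone().detach().requires_grad_(True)
    log_probs = model(waveform)  # assumed to return (T, N, C) log-probabilities
    input_lengths = torch.full((log_probs.size(1),), log_probs.size(0), dtype=torch.long)
    target_lengths = torch.tensor([transcript_ids.numel()])
    loss = F.ctc_loss(log_probs, transcript_ids.unsqueeze(0),
                      input_lengths, target_lengths)
    loss.backward()
    # Step in the direction of the loss gradient's sign, then keep a valid audio range.
    adv_waveform = waveform + epsilon * waveform.grad.sign()
    return adv_waveform.clamp(-1.0, 1.0).detach()
```

A PGD variant would simply repeat this signed-gradient step several times, projecting back into an epsilon-ball around the original waveform after each step.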

Learning Speaker Embedding from Text-to-Speech

1 code implementation • 21 Oct 2020 • Jaejin Cho, Piotr Zelasko, Jesus Villalba, Shinji Watanabe, Najim Dehak

TTS with a speaker classification loss improved EER by 0.28% and 0.73% absolute over a model using only the speaker classification loss, on LibriTTS and VoxCeleb1 respectively.

Tags: Classification, General Classification, +2
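To illustrate the joint objective implied by "TTS with speaker classification loss", a minimal multi-task loss might add a cross-entropy speaker term on the TTS model's speaker embedding to a mel-reconstruction term; the model interface, loss weighting, and names below are assumptions, not the paper's implementation.

```python
import torch.nn.functional as F

def joint_tts_speaker_loss(tts_model, speaker_head, text, mel_target, speaker_ids, alpha=1.0):
    """Multi-task loss: TTS reconstruction plus speaker classification on the
    learned speaker embedding (hypothetical tts_model returning both)."""
    mel_pred, spk_emb = tts_model(text, mel_target)   # assumed model interface
    recon = F.l1_loss(mel_pred, mel_target)           # TTS reconstruction term
    spk_ce = F.cross_entropy(speaker_head(spk_emb), speaker_ids)  # speaker term
    return recon + alpha * spk_ce
```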
