1 code implementation • 19 May 2023 • Mustafa Safa Ozdayi, Charith Peris, Jack FitzGerald, Christophe Dupuy, Jimit Majmudar, Haidar Khan, Rahil Parikh, Rahul Gupta
We present a novel approach that uses prompt-tuning to control the extraction rates of memorized content in LLMs.
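The paper's own implementation is not reproduced here; as a rough illustration of the prompt-tuning mechanism it builds on, the sketch below prepends trainable soft-prompt embeddings to a frozen, HuggingFace-style language model. All names, dimensions, and the `inputs_embeds` calling convention are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    """Hypothetical sketch: prepend trainable soft-prompt embeddings to a
    frozen LM. Only the prompt receives gradients, so it can be tuned to
    raise or lower extraction rates while the model itself stays fixed."""

    def __init__(self, model, embed_layer, n_prompt_tokens=20):
        super().__init__()
        self.model = model
        self.embed = embed_layer  # the LM's token-embedding module
        d_model = embed_layer.embedding_dim
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, d_model) * 0.02)
        for p in self.model.parameters():  # freeze the underlying LM
            p.requires_grad = False

    def forward(self, input_ids):
        tok = self.embed(input_ids)                              # (B, T, d)
        prompt = self.prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        inputs = torch.cat([prompt, tok], dim=1)                 # (B, P+T, d)
        return self.model(inputs_embeds=inputs)                  # HF-style call
```

Because the model is frozen, only `n_prompt_tokens * d_model` parameters are optimized, which is what makes tuning separate "attack" and "defense" prompts cheap.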
no code implementations • 27 Jun 2022 • Rahil Parikh, Harshavardhan Sundar, Ming Sun, Chao Wang, Spyros Matsoukas
We conclude that this improvement in ASC performance comes from the regularization effect of AET rather than from an improved ability of the network to distinguish between acoustic events.
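For context, acoustic event tagging (AET) is typically attached as an auxiliary head on a shared encoder next to the acoustic scene classification (ASC) head, so the event loss regularizes the shared representation. The layer sizes, label counts, and loss weighting below are illustrative assumptions, not the paper's architecture.

```python
import torch.nn as nn

class MultiTaskASC(nn.Module):
    """Illustrative shared encoder with an ASC head and an auxiliary AET head."""

    def __init__(self, d_feat=64, d_hidden=128, n_scenes=10, n_events=25):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(d_feat, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(),
        )
        self.asc_head = nn.Linear(d_hidden, n_scenes)  # scene classification
        self.aet_head = nn.Linear(d_hidden, n_events)  # multi-label event tagging

    def forward(self, x):
        h = self.encoder(x)
        return self.asc_head(h), self.aet_head(h)

# Training combines the two objectives, e.g.
#   loss = ce(asc_logits, scene_labels) + lam * bce(aet_logits, event_tags)
# so the AET term constrains the shared encoder even when its predictions
# are never used at inference time.
```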
no code implementations • 20 Jun 2022 • Rahil Parikh, Gaspar Rochette, Carol Espy-Wilson, Shihab Shamma
Since harmonicity is a critical cue these networks use to group sources, in this work we thoroughly investigate ConvTasnet and DPT-Net to analyze how they perform harmonic analysis of the input mixture.
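One simple way to probe such grouping behavior (a hypothetical protocol, not necessarily the paper's exact method) is to measure how much of each partial's energy a separator assigns to each output channel, using narrowband energies around the harmonics of each source's fundamental:

```python
import numpy as np

def harmonic_energy(signal, f0, n_harmonics=10, sr=8000, bw=20.0):
    """Energy of `signal` in a +/- bw Hz band around each harmonic of f0."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    energies = []
    for k in range(1, n_harmonics + 1):
        band = (freqs > k * f0 - bw) & (freqs < k * f0 + bw)
        energies.append(spec[band].sum())
    return np.array(energies)

# For each estimated source from a trained separator, comparing
# harmonic_energy(est, f0_a) against harmonic_energy(est, f0_b) shows
# which source's partials the network grouped into that channel.
```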
no code implementations • ACL 2022 • Rahil Parikh, Christophe Dupuy, Rahul Gupta
In this work, we present a version of such an attack by extracting canaries inserted into NLU training data.
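As background on the canary methodology (in the spirit of Carlini et al.'s secret-sharer setup; the snippet below is an illustrative sketch, not the paper's code), random canary utterances are inserted into the training data, and extraction is measured by how readily the trained model reproduces or ranks their secret portion:

```python
import random
import string

def make_canary(prefix="my passcode is", n_digits=6):
    """Create a random canary utterance to plant in NLU training data."""
    secret = "".join(random.choices(string.digits, k=n_digits))
    return f"{prefix} {secret}", secret

canary_text, secret = make_canary()
# train_data.append({"text": canary_text, "intent": "out_of_domain"})  # hypothetical insertion
# After training, extraction success can be assessed by prompting the model
# with the prefix and checking whether the secret is completed, or ranked
# above random candidates of the same format.
```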
no code implementations • 11 Mar 2022 • Rahil Parikh, Nadee Seneviratne, Ganesh Sivaraman, Shihab Shamma, Carol Espy-Wilson
We used the University of Wisconsin X-ray Microbeam (XRMB) database of clean speech signals to train a feed-forward deep neural network (DNN) to estimate articulatory trajectories of six tract variables.
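A minimal sketch of such a speech-inversion network is below: acoustic features in, six tract-variable trajectories out, trained with a regression loss. The input dimensionality, layer widths, and names are assumptions for illustration, not the trained XRMB model.

```python
import torch.nn as nn

class SpeechInversionDNN(nn.Module):
    """Feed-forward net mapping acoustic features to six tract variables."""

    def __init__(self, d_in=120, d_hidden=512, n_tract_vars=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, n_tract_vars),  # continuous regression outputs
        )

    def forward(self, x):  # x: (batch, d_in) frame-level acoustic features
        return self.net(x)

# Trained with an MSE loss against XRMB-derived tract-variable trajectories.
```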
no code implementations • 8 Mar 2022 • Rahil Parikh, Ilya Kavalerov, Carol Espy-Wilson, Shihab Shamma
We evaluate their performance on mixtures of natural speech versus inharmonic speech in which the harmonics are slightly frequency-jittered (a manipulation sketched below).
Ranked #1 on Adversarial Attack on WSJ0-2mix
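Frequency jittering can be illustrated on a synthetic harmonic complex (a sketch of the general idea; the paper's actual manipulation of speech harmonics may differ): each partial's frequency is perturbed by a small random factor, breaking the exact integer ratios that define harmonicity.

```python
import numpy as np

def jittered_complex(f0, n_harmonics=10, jitter=0.03, dur=1.0, sr=8000):
    """Inharmonic complex: each partial of f0 is jittered by up to +/- jitter."""
    t = np.arange(int(dur * sr)) / sr
    sig = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        f_k = k * f0 * (1 + np.random.uniform(-jitter, jitter))
        sig += np.sin(2 * np.pi * f_k * t)
    return sig

harmonic = jittered_complex(150.0, jitter=0.0)   # exact harmonics
inharmonic = jittered_complex(150.0, jitter=0.03)  # ~3% jitter per partial
```

A jitter of a few percent is barely audible as mistuning yet, per the paper's premise, can sharply degrade separators that rely on harmonicity to group sources.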