no code implementations • 16 Jan 2025 • Ibtihel Amara, Ahmed Imtiaz Humayun, Ivana Kajic, Zarana Parekh, Natalie Harris, Sarah Young, Chirag Nagpal, Najoung Kim, Junfeng He, Cristina Nader Vasconcelos, Deepak Ramachandran, Golnoosh Farnadi, Katherine Heller, Mohammad Havaei, Negar Rostamzadeh
This highlights a reliability gap in current concept erasure techniques.
no code implementations • 15 Aug 2024 • Ahmed Imtiaz Humayun, Ibtihel Amara, Candice Schumann, Golnoosh Farnadi, Negar Rostamzadeh, Mohammad Havaei
Deep generative models learn continuous representations of complex data manifolds using a finite number of samples during training.
no code implementations • 12 Dec 2023 • Ibtihel Amara, Vinija Jain, Aman Chadha
We tackle aggressive fine-tuning, a challenge that arises when transferring pre-trained language models (PLMs) to downstream tasks with limited labeled data.
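For context, a common way to temper aggressive fine-tuning in low-data regimes is to freeze most of the encoder and train only the top layers with a small learning rate. A minimal sketch follows; the checkpoint name, layer count, and hyperparameters are illustrative assumptions, not the paper's method:

```python
import torch
from transformers import AutoModelForSequenceClassification

# Illustrative checkpoint; any BERT-style PLM works the same way.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Freeze the embeddings and all but the last two encoder blocks so
# limited downstream data cannot overwrite pre-trained features.
for param in model.bert.embeddings.parameters():
    param.requires_grad = False
for layer in model.bert.encoder.layer[:-2]:
    for param in layer.parameters():
        param.requires_grad = False

# A small learning rate further limits how far fine-tuning drifts.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-5
)
```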
no code implementations • 14 Oct 2023 • Ankitha Sudarshan, Vinay Samuel, Parth Patwa, Ibtihel Amara, Aman Chadha
Automatic Speech Recognition (ASR) has attracted substantial research interest.
no code implementations • 25 Dec 2022 • Ibtihel Amara, Nazanin Sepahvand, Brett H. Meyer, Warren J. Gross, James J. Clark
We address the challenge of producing trustworthy and accurate compact models for edge devices.
no code implementations • 15 Sep 2022 • Ibtihel Amara, Maryam Ziaeefard, Brett H. Meyer, Warren Gross, James J. Clark
Knowledge distillation (KD) is an effective tool for compressing deep classification models for edge devices.
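For context, knowledge distillation typically trains a compact student to match a larger teacher's softened output distribution alongside the ground-truth labels. A minimal PyTorch sketch follows; the temperature and weighting values are illustrative assumptions, not the paper's setup:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Standard KD loss: weighted sum of soft-target KL and hard-label CE.

    `temperature` and `alpha` are illustrative defaults, not values
    taken from the paper.
    """
    # Soften both distributions; the KL term transfers the teacher's
    # inter-class similarity structure to the student.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)  # rescale so gradients match CE magnitude
    # Ordinary cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```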