no code implementations • 3 Mar 2024 • Tiantian Feng, Anil Ramakrishna, Jimit Majmudar, Charith Peris, Jixuan Wang, Clement Chung, Richard Zemel, Morteza Ziyadi, Rahul Gupta
Federated Learning (FL) is a popular approach for training machine learning models on user data that is constrained to edge devices (for example, mobile phones) due to privacy concerns.
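The core of most FL systems is federated averaging (FedAvg): each client trains locally and the server averages the resulting weights, weighted by local dataset size. A minimal sketch of that aggregation step (the function name and toy inputs are hypothetical, not this paper's method):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Average client model weights, weighted by local dataset size."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)       # (num_clients, num_params)
    coeffs = np.array(client_sizes) / total  # per-client weighting
    return coeffs @ stacked                  # weighted average of weights
```

For example, a client with weights `[1, 1]` and 10 examples averaged with a client with weights `[3, 3]` and 30 examples yields `[2.5, 2.5]`, since the larger client contributes 3/4 of the weight.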
no code implementations • 23 Oct 2023 • Jack Good, Jimit Majmudar, Christophe Dupuy, Jixuan Wang, Charith Peris, Clement Chung, Richard Zemel, Rahul Gupta
Continual Federated Learning (CFL) combines Federated Learning (FL), in which a central model is learned in a decentralized way across client devices that cannot share their data, with Continual Learning (CL), in which a model is learned from a continual stream of data without retaining the entire history.
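The CFL setting can be pictured as an FL loop nested inside a continual stream of tasks: for each arriving task, clients update locally and the server averages, and earlier tasks' data is never revisited. A minimal toy sketch of that structure (all names and the one-step "local training" are hypothetical stand-ins, not the paper's algorithm):

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    # Toy "local training": one step pulling the weights toward the
    # client's data mean (a stand-in for real local SGD).
    return weights - lr * (weights - data.mean(axis=0))

def fl_round(server_weights, client_datasets):
    # One FL round: every client updates locally, the server averages.
    updates = [local_update(server_weights, d) for d in client_datasets]
    return np.mean(updates, axis=0)

def continual_federated_learning(task_stream, dim=2, rounds=5):
    # Outer continual loop: each task's data is seen once, then discarded.
    w = np.zeros(dim)
    for client_datasets in task_stream:  # one entry per task/time step
        for _ in range(rounds):
            w = fl_round(w, client_datasets)
    return w
```

With two clients whose data means are `[1, 1]` and `[3, 3]`, the server weights converge geometrically toward the overall mean `[2, 2]` within a task; the continual challenge (which CFL methods address) is that nothing in this loop prevents later tasks from overwriting what earlier tasks taught the model.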
no code implementations • 4 May 2023 • Jixuan Wang, Martin Radfar, Kai Wei, Clement Chung
In spoken language understanding (SLU), extracting semantic meaning directly from audio signals is challenging due to the lack of textual information.
Automatic Speech Recognition (ASR) +3
no code implementations • NAACL 2022 • Rahul Sharma, Anil Ramakrishna, Ansel MacLaughlin, Anna Rumshisky, Jimit Majmudar, Clement Chung, Salman Avestimehr, Rahul Gupta
Federated learning (FL) has recently emerged as a method for training ML models on edge devices using sensitive user data and is seen as a way to mitigate concerns over data privacy.
no code implementations • NAACL 2022 • Peyman Passban, Tanya Roosta, Rahul Gupta, Ankit Chadha, Clement Chung
Training mixed-domain translation models is a complex task that demands tailored architectures and costly data preparation techniques.
no code implementations • 8 Feb 2022 • Christophe Dupuy, Tanya G. Roosta, Leo Long, Clement Chung, Rahul Gupta, Salman Avestimehr
In this study, we evaluate the impact of such idiosyncrasies on Natural Language Understanding (NLU) models trained using FL.
no code implementations • 21 Dec 2020 • Jixuan Wang, Kai Wei, Martin Radfar, Weiwei Zhang, Clement Chung
We propose a novel Transformer encoder-based architecture that encodes syntactic knowledge for intent detection and slot filling.