no code implementations • 3 Mar 2024 • Tiantian Feng, Anil Ramakrishna, Jimit Majmudar, Charith Peris, Jixuan Wang, Clement Chung, Richard Zemel, Morteza Ziyadi, Rahul Gupta
Federated Learning (FL) is a popular paradigm for training machine learning models on user data that, due to privacy concerns, is constrained to edge devices (for example, mobile phones).
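The paper's own method is not reproduced here; as a minimal sketch of the FL setup the abstract describes, the standard FedAvg pattern looks roughly like this (all data shapes and hyperparameters below are illustrative assumptions):

```python
# Minimal FedAvg sketch (illustrative; not this paper's method).
# Each client fits a local model on its private data; the server only
# averages the resulting weights, so raw data never leaves the device.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client_data(n=50):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

clients = [make_client_data() for _ in range(5)]
global_w = np.zeros(2)

for rnd in range(10):                      # communication rounds
    local_ws = []
    for X, y in clients:                   # each client trains locally
        w = global_w.copy()
        for _ in range(5):                 # a few local SGD steps
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        local_ws.append(w)
    global_w = np.mean(local_ws, axis=0)   # server averages client weights

print("learned:", global_w, "true:", true_w)
```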
no code implementations • 23 Oct 2023 • Jack Good, Jimit Majmudar, Christophe Dupuy, Jixuan Wang, Charith Peris, Clement Chung, Richard Zemel, Rahul Gupta
Continual Federated Learning (CFL) combines Federated Learning (FL), in which a central model is learned across many client devices that cannot share their data, with Continual Learning (CL), in which a model is learned from a continual stream of data without keeping the entire history.
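To make the setting concrete, here is a schematic CFL loop (an assumption-laden sketch, not the paper's algorithm): an outer loop over a non-stationary data stream, with one round of federated averaging per step and no access to past data, which is the CL constraint.

```python
# Schematic Continual Federated Learning loop (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

def local_step(w, X, y, lr=0.05, epochs=5):
    for _ in range(epochs):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

global_w = np.zeros(2)
for t in range(3):                          # continual stream: tasks arrive over time
    task_w = rng.normal(size=2)             # the data distribution drifts each step
    for rnd in range(5):                    # federated rounds on *current* data only
        local_ws = []
        for _ in range(4):                  # clients
            X = rng.normal(size=(40, 2))
            y = X @ task_w + rng.normal(scale=0.1, size=40)
            local_ws.append(local_step(global_w.copy(), X, y))
        global_w = np.mean(local_ws, axis=0)
    print(f"after task {t}: w = {global_w.round(2)}, task target = {task_w.round(2)}")
```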
no code implementations • 4 May 2023 • Jixuan Wang, Martin Radfar, Kai Wei, Clement Chung
In spoken language understanding (SLU), it is challenging to extract semantic meaning directly from audio signals due to the lack of textual information.
Automatic Speech Recognition (ASR) +3
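As a hypothetical skeleton of the end-to-end setting the abstract describes (not the paper's architecture; layer sizes and feature dimensions are assumed), one can classify intent directly from acoustic features with no ASR transcript:

```python
# Hypothetical end-to-end SLU skeleton: acoustic frames in, intent out.
import torch
import torch.nn as nn

class AudioIntentClassifier(nn.Module):
    def __init__(self, n_mels=80, d_model=128, n_intents=10):
        super().__init__()
        self.proj = nn.Linear(n_mels, d_model)          # frame-level projection
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_intents)       # utterance-level intent

    def forward(self, mel):                             # mel: (batch, frames, n_mels)
        h = self.encoder(self.proj(mel))
        return self.head(h.mean(dim=1))                 # pool over time, then classify

# Dummy batch: 2 utterances, 200 frames of 80-dim log-mel features.
logits = AudioIntentClassifier()(torch.randn(2, 200, 80))
print(logits.shape)  # torch.Size([2, 10])
```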
no code implementations • 6 May 2022 • Jixuan Wang, Deli Qiao
This paper studies the minimization of the weighted-sum average age of information (AoI) in a multi-source status-update communication system.
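The objective can be made concrete with a toy discrete-time simulation (the scheduling policy, channel model, and weights below are simplifying assumptions, not the paper's system model): the age of source i grows by one each slot and resets when a fresh update from i is delivered, and the metric is the time average of the weighted instantaneous ages.

```python
# Toy weighted-sum average AoI simulation (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
weights = np.array([0.7, 0.3])          # per-source importance weights
age = np.ones(2)                        # current AoI of each source
total = 0.0
T = 10_000

for t in range(T):
    i = int(np.argmax(weights * age))   # toy policy: serve the costliest source
    delivered = rng.random() < 0.8      # unreliable channel
    age += 1                            # all ages grow by one slot
    if delivered:
        age[i] = 1                      # fresh update: age resets
    total += weights @ age              # accumulate weighted instantaneous AoI

print("weighted-sum average AoI:", total / T)
```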
1 code implementation • Findings (ACL) 2022 • Zining Zhu, Jixuan Wang, Bai Li, Frank Rudzicz
As ever larger and more powerful neural language models are developed, researchers have become increasingly interested in building diagnostic tools to probe them.
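For readers unfamiliar with probing, the standard paradigm (sketched below with synthetic data standing in for frozen LM embeddings; this is not the paper's specific diagnostic) freezes a model's representations and trains a small classifier on them to test what information they encode:

```python
# Generic probing-classifier sketch on synthetic "frozen embeddings".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# Stand-in for frozen LM embeddings of 1000 tokens (768-dim, BERT-sized),
# with a linguistic label (e.g., part of speech) partly encoded in them.
labels = rng.integers(0, 2, size=1000)
emb = rng.normal(size=(1000, 768))
emb[:, 0] += 2.0 * labels            # plant a recoverable signal

X_tr, X_te, y_tr, y_te = train_test_split(emb, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probe accuracy:", probe.score(X_te, y_te))  # high => linearly decodable
```

High probe accuracy indicates the property is linearly decodable from the representations, which is exactly the kind of evidence such diagnostic tools weigh.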
1 code implementation • NeurIPS 2021 • Jixuan Wang, Kuan-Chieh Wang, Frank Rudzicz, Michael Brudno
Large pretrained language models (LMs) like BERT have improved performance in many disparate natural language processing (NLP) tasks.
no code implementations • 6 Feb 2021 • Jixuan Wang, Xiong Xiao, Jian Wu, Ranjani Ramamurthy, Frank Rudzicz, Michael Brudno
Speaker attribution is required in many real-world applications, such as meeting transcription, where speaker identity is assigned to each utterance according to speaker voice profiles.
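A minimal sketch of the attribution step the abstract describes (illustrative; embedding dimension, names, and noise level are assumptions): assign each utterance embedding to the enrolled speaker profile with the highest cosine similarity.

```python
# Minimal speaker-attribution sketch against enrolled voice profiles.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(4)
profiles = {name: rng.normal(size=192) for name in ["alice", "bob"]}  # enrolled profiles
utterances = [profiles["alice"] + 0.3 * rng.normal(size=192),         # noisy test embeddings
              profiles["bob"] + 0.3 * rng.normal(size=192)]

for i, u in enumerate(utterances):
    speaker = max(profiles, key=lambda n: cosine(u, profiles[n]))
    print(f"utterance {i} -> {speaker}")
```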
no code implementations • 21 Dec 2020 • Jixuan Wang, Kai Wei, Martin Radfar, Weiwei Zhang, Clement Chung
We propose a novel Transformer encoder-based architecture that encodes syntactic knowledge for intent detection and slot filling.
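The joint task structure can be sketched as follows (the paper's syntactic-knowledge injection is omitted; vocabulary and label counts are assumed): one shared Transformer encoder feeding an utterance-level intent head and a token-level slot head.

```python
# Joint intent-detection / slot-filling skeleton (illustrative only).
import torch
import torch.nn as nn

class JointNLU(nn.Module):
    def __init__(self, vocab=5000, d_model=128, n_intents=7, n_slots=20):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.intent_head = nn.Linear(d_model, n_intents)  # one label per utterance
        self.slot_head = nn.Linear(d_model, n_slots)      # one label per token

    def forward(self, tokens):                            # tokens: (batch, seq_len)
        h = self.encoder(self.embed(tokens))
        return self.intent_head(h.mean(dim=1)), self.slot_head(h)

intent_logits, slot_logits = JointNLU()(torch.randint(0, 5000, (2, 16)))
print(intent_logits.shape, slot_logits.shape)  # (2, 7) and (2, 16, 20)
```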
no code implementations • 22 May 2020 • Jixuan Wang, Xiong Xiao, Jian Wu, Ranjani Ramamurthy, Frank Rudzicz, Michael Brudno
Deep speaker embedding models have been commonly used as a building block for speaker diarization systems; however, the speaker embedding model is usually trained with a global loss defined on the training data, which can be sub-optimal for distinguishing speakers locally within a specific meeting session.
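For context, the standard embedding-clustering diarization baseline the abstract refers to looks roughly like this (a sketch with synthetic embeddings, not the paper's proposed method): embed short windows of a session, then cluster windows into speakers.

```python
# Embedding-clustering diarization baseline (illustrative only).
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(5)
# Stand-in for per-window speaker embeddings from one meeting session:
# two underlying speakers, 30 windows each.
spk = rng.normal(size=(2, 64))
windows = np.vstack([spk[i] + 0.2 * rng.normal(size=(30, 64)) for i in range(2)])

# Euclidean/ward linkage for simplicity; cosine distance is common in practice.
labels = AgglomerativeClustering(n_clusters=2).fit_predict(windows)
print(labels)  # window -> speaker assignments for this session
```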
no code implementations • 12 Dec 2019 • Marta Skreta, Aryan Arbabi, Jixuan Wang, Michael Brudno
Abbreviation disambiguation is important for automated clinical note processing due to the frequent use of abbreviations in clinical settings.
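A toy version of the task (illustrative; the paper's approach and training data are not reproduced, and the example sentences below are invented) is to expand an ambiguous abbreviation such as "RA" from the surrounding words of the note:

```python
# Toy context-based abbreviation disambiguation for clinical text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

contexts = ["joint pain and swelling consistent with RA flare",
            "RA pressure elevated on cardiac catheterization",
            "RA treated with methotrexate",
            "catheter advanced into the RA"]
senses = ["rheumatoid arthritis", "right atrium",
          "rheumatoid arthritis", "right atrium"]

# Bag-of-words over the context window, then a linear classifier per sense.
model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(contexts, senses)
print(model.predict(["patient with RA on methotrexate"]))
```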
no code implementations • 6 Feb 2019 • Jixuan Wang, Kuan-Chieh Wang, Marc Law, Frank Rudzicz, Michael Brudno
Speaker embedding models, which use neural networks to map utterances into a space where distances reflect speaker similarity, have driven recent progress in speaker recognition.
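In the spirit of the metric learning the abstract describes (a sketch under assumed feature dimensions, not the paper's exact model), one can train an embedding network with a triplet loss so that same-speaker utterances land closer together than different-speaker ones:

```python
# Triplet-loss speaker embedding sketch (illustrative only).
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, 64))
loss_fn = nn.TripletMarginLoss(margin=1.0)
opt = torch.optim.Adam(embed.parameters(), lr=1e-3)

# Dummy 40-dim utterance features: anchor/positive share a speaker,
# negative comes from a different speaker.
anchor, positive, negative = (torch.randn(8, 40) for _ in range(3))
loss = loss_fn(embed(anchor), embed(positive), embed(negative))
loss.backward()   # pull anchor toward positive, push away from negative
opt.step()
print("triplet loss:", loss.item())
```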