1 code implementation • NeurIPS 2021 • Jixuan Wang, Kuan-Chieh Wang, Frank Rudzicz, Michael Brudno
Large pretrained language models (LMs) like BERT have improved performance in many disparate natural language processing (NLP) tasks.
no code implementations • 6 Feb 2021 • Jixuan Wang, Xiong Xiao, Jian Wu, Ranjani Ramamurthy, Frank Rudzicz, Michael Brudno
Speaker attribution is required in many real-world applications, such as meeting transcription, where a speaker identity is assigned to each utterance based on enrolled speaker voice profiles.
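As an illustration of that attribution step (a minimal sketch of the general idea, not this paper's method): given one embedding per utterance and one enrolled profile per known speaker, each utterance can be assigned to the profile with the highest cosine similarity. The `utterance_embs` and `profiles` arrays below are hypothetical placeholders.

```python
import numpy as np

def attribute_speakers(utterance_embs, profiles):
    """Assign each utterance to the enrolled profile with the
    highest cosine similarity (illustrative sketch only)."""
    # L2-normalize so the dot product equals cosine similarity.
    u = utterance_embs / np.linalg.norm(utterance_embs, axis=1, keepdims=True)
    p = profiles / np.linalg.norm(profiles, axis=1, keepdims=True)
    sims = u @ p.T                      # shape: (n_utterances, n_speakers)
    return sims.argmax(axis=1)          # index of best-matching profile

# Hypothetical data: 3 utterances, 2 enrolled speakers, 128-dim embeddings.
rng = np.random.default_rng(0)
utts, profs = rng.normal(size=(3, 128)), rng.normal(size=(2, 128))
print(attribute_speakers(utts, profs))  # one speaker index per utterance
```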
no code implementations • 22 May 2020 • Jixuan Wang, Xiong Xiao, Jian Wu, Ranjani Ramamurthy, Frank Rudzicz, Michael Brudno
Deep speaker embedding models are commonly used as building blocks for speaker diarization systems; however, the embedding model is usually trained with a global loss defined over the training data, which can be sub-optimal for distinguishing the particular speakers in a given meeting session.
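A sketch of that standard pipeline, where a globally trained embedding model is applied locally: extract one embedding per utterance in a session, then cluster the embeddings by cosine distance to obtain per-session speaker labels. The embeddings and the distance threshold below are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def diarize_session(utterance_embs, threshold=0.5):
    """Cluster utterance embeddings within one session using
    agglomerative clustering on cosine distance (sketch only)."""
    dists = pdist(utterance_embs, metric="cosine")  # condensed distance matrix
    tree = linkage(dists, method="average")         # average-linkage hierarchy
    # Cut the tree at a cosine-distance threshold: one label per utterance.
    return fcluster(tree, t=threshold, criterion="distance")

# Hypothetical session: 6 utterance embeddings, 64-dim.
rng = np.random.default_rng(1)
embs = rng.normal(size=(6, 64))
print(diarize_session(embs))  # one cluster label per utterance
```

A single threshold tuned on global training data may not suit every session, which is the global-versus-local mismatch this abstract points to.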
no code implementations • 12 Dec 2019 • Marta Skreta, Aryan Arbabi, Jixuan Wang, Michael Brudno
Abbreviation disambiguation is important for automated clinical note processing due to the frequent use of abbreviations in clinical settings.
no code implementations • 6 Feb 2019 • Jixuan Wang, Kuan-Chieh Wang, Marc Law, Frank Rudzicz, Michael Brudno
Speaker embedding models, which use neural networks to map utterances into a space where distances reflect similarity between speakers, have driven recent progress in speaker recognition.
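One common way to obtain such a space (a sketch of the general idea, not necessarily this paper's objective) is a triplet loss: pull an utterance's embedding toward another from the same speaker and push it away from one by a different speaker. All names and data below are hypothetical.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss encouraging d(anchor, positive) + margin < d(anchor, negative),
    where d is cosine distance (sketch only)."""
    def cos_dist(a, b):
        return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(0.0, cos_dist(anchor, positive) - cos_dist(anchor, negative) + margin)

# Hypothetical embeddings for an anchor, same-speaker, and other-speaker utterance.
rng = np.random.default_rng(2)
a, p, n = rng.normal(size=(3, 128))
print(triplet_loss(a, p, n))
```

Minimizing this loss over many triplets shapes the space so that same-speaker utterances land closer together than different-speaker ones.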
no code implementations • NeurIPS 2008 • Gerald Quon, Yee W. Teh, Esther Chan, Timothy Hughes, Michael Brudno, Quaid D. Morris
We address the challenge of assessing conservation of gene expression in complex, non-homogeneous datasets.