no code implementations • 9 Dec 2024 • Neel Jain, Aditya Shrivastava, Chenyang Zhu, Daben Liu, Alfy Samuel, Ashwinee Panda, Anoop Kumar, Micah Goldblum, Tom Goldstein
A key component of building safe and reliable language models is enabling the models to appropriately refuse to follow certain instructions or answer certain questions.
no code implementations • 9 Sep 2024 • Ernest Pusateri, Anmol Walia, Anirudh Kashi, Bortik Bandyopadhyay, Nadia Hyder, Sayantan Mahinder, Raviteja Anantha, Daben Liu, Sashank Gondala
In recent years, end-to-end automatic speech recognition (ASR) systems have become remarkably accurate and performant, but they still show a significant error rate on entity names that appear infrequently in their training data.
Automatic Speech Recognition (ASR)
no code implementations • 27 Aug 2021 • Zhen Huang, Xiaodan Zhuang, Daben Liu, Xiaoqiang Xiao, Yuchen Zhang, Sabato Marco Siniscalchi
To achieve such an ambitious goal, new mechanisms for foreign pronunciation generation and language model (LM) enrichment have been devised.
no code implementations • 7 Dec 2020 • Xinwei Li, Yuanyuan Zhang, Xiaodan Zhuang, Daben Liu
We demonstrate that f-SpecAugment is more effective than utterance-level SpecAugment for deep CNN-based hybrid models.
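For context, standard SpecAugment masks random frequency bands and time spans of a log-mel spectrogram as a data augmentation during acoustic model training; the f-SpecAugment variant studied here is compared against that utterance-level baseline. The sketch below illustrates only the standard utterance-level masking, not the paper's f-SpecAugment; the function name, mask counts, and mask widths are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def spec_augment(log_mel, num_freq_masks=2, freq_mask_width=15,
                 num_time_masks=2, time_mask_width=50, rng=None):
    """Apply SpecAugment-style frequency and time masking to a
    log-mel spectrogram of shape (num_frames, num_mel_bins).
    Illustrative parameters only; not the paper's configuration."""
    rng = rng or np.random.default_rng()
    out = log_mel.copy()
    num_frames, num_bins = out.shape

    # Frequency masking: zero out randomly chosen bands of mel bins.
    for _ in range(num_freq_masks):
        width = int(rng.integers(0, freq_mask_width + 1))
        start = int(rng.integers(0, max(1, num_bins - width)))
        out[:, start:start + width] = 0.0

    # Time masking: zero out randomly chosen spans of frames.
    for _ in range(num_time_masks):
        width = int(rng.integers(0, time_mask_width + 1))
        start = int(rng.integers(0, max(1, num_frames - width)))
        out[start:start + width, :] = 0.0

    return out
```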
no code implementations • 4 Oct 2019 • Zhen Huang, Tim Ng, Leo Liu, Henry Mason, Xiaodan Zhuang, Daben Liu
The most popular way to train very deep CNNs is to use shortcut connections (SC) together with batch normalization (BN).
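The shortcut-connection-plus-batch-normalization recipe referenced here is the standard residual-block pattern. Below is a minimal PyTorch sketch of such a block; the class name, channel counts, and layer sizes are illustrative assumptions and do not describe the paper's architecture.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A basic CNN block combining a shortcut connection (SC)
    with batch normalization (BN), in the ResNet style."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3,
                               padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                          # shortcut connection
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + identity                  # add shortcut before final activation
        return self.relu(out)
```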