Search Results for author: Shuangyu Chang

Found 9 papers, 0 papers with code

Streaming Punctuation: A Novel Punctuation Technique Leveraging Bidirectional Context for Continuous Speech Recognition

no code implementations • 10 Jan 2023 • Piyush Behre, Sharman Tan, Padma Varadharajan, Shuangyu Chang

While speech recognition Word Error Rate (WER) has reached human parity for English, continuous speech recognition scenarios such as voice typing and meeting transcriptions still suffer from segmentation and punctuation problems, resulting from irregular pausing patterns or slow speakers.

Automatic Speech Recognition • Automatic Speech Recognition (ASR) +3

TRScore: A Novel GPT-based Readability Scorer for ASR Segmentation and Punctuation model evaluation and selection

no code implementations • 27 Oct 2022 • Piyush Behre, Sharman Tan, Amy Shah, Harini Kesavamoorthy, Shuangyu Chang, Fei Zuo, Chris Basoglu, Sayan Pathak

Punctuation and segmentation are key to readability in Automatic Speech Recognition (ASR); they are often evaluated with F1 scores, which require high-quality human transcripts and do not reflect readability well.
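
For context, the sketch below shows the kind of per-class F1 computation the snippet refers to, assuming reference and predicted punctuation labels are already aligned per token (real evaluations must first align the ASR output to the reference transcript); the function and label names are illustrative, not taken from the paper.

```python
# Minimal sketch of per-class punctuation F1, assuming per-token reference
# and hypothesis labels are already aligned (an assumption for brevity).
from typing import List


def punctuation_f1(ref: List[str], hyp: List[str], label: str) -> float:
    tp = sum(1 for r, h in zip(ref, hyp) if r == label and h == label)
    fp = sum(1 for r, h in zip(ref, hyp) if r != label and h == label)
    fn = sum(1 for r, h in zip(ref, hyp) if r == label and h != label)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


if __name__ == "__main__":
    ref = ["O", "COMMA", "O", "PERIOD"]  # per-token punctuation labels
    hyp = ["O", "O", "O", "PERIOD"]
    for lab in ("COMMA", "PERIOD"):
        print(lab, round(punctuation_f1(ref, hyp, lab), 2))
```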

Automatic Speech Recognition • Automatic Speech Recognition (ASR) +3

Streaming Punctuation for Long-form Dictation with Transformers

no code implementations • 11 Oct 2022 • Piyush Behre, Sharman Tan, Padma Varadharajan, Shuangyu Chang

While speech recognition Word Error Rate (WER) has reached human parity for English, long-form dictation scenarios still suffer from segmentation and punctuation problems resulting from irregular pausing patterns or slow speakers.

Automatic Speech Recognition • Automatic Speech Recognition (ASR) +3

Multilingual Transformer Language Model for Speech Recognition in Low-resource Languages

no code implementations • 8 Sep 2022 • Li Miao, Jian Wu, Piyush Behre, Shuangyu Chang, Sarangarajan Parthasarathy

It is challenging to train and deploy Transformer LMs for second-pass re-ranking in hybrid speech recognition for low-resource languages due to (1) data scarcity in low-resource languages, (2) the expensive computing cost of training and refreshing 100+ monolingual models, and (3) hosting inefficiency given sparse traffic.
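
For orientation, a minimal sketch of the second-pass n-best re-ranking setup is shown below; the names (Hypothesis, rescore_nbest, lm_log_prob) and the interpolation weight are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of hybrid-ASR second-pass n-best re-ranking with a
# neural LM score interpolated into the first-pass score.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Hypothesis:
    text: str                 # candidate transcript from the first pass
    first_pass_score: float   # combined acoustic + first-pass LM log-score


def rescore_nbest(
    nbest: List[Hypothesis],
    lm_log_prob: Callable[[str], float],
    lm_weight: float = 0.5,
) -> Hypothesis:
    """Re-rank an n-best list by interpolating first-pass and LM scores."""
    def combined(h: Hypothesis) -> float:
        return h.first_pass_score + lm_weight * lm_log_prob(h.text)

    return max(nbest, key=combined)


# Toy usage with a stand-in LM that simply prefers shorter hypotheses.
if __name__ == "__main__":
    nbest = [
        Hypothesis("i scream for ice cream", -12.3),
        Hypothesis("ice cream for ice cream", -12.1),
    ]
    best = rescore_nbest(nbest, lm_log_prob=lambda t: -len(t.split()))
    print(best.text)
```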

Language Modelling • Re-Ranking +2

LSTM-LM with Long-Term History for First-Pass Decoding in Conversational Speech Recognition

no code implementations • 21 Oct 2020 • Xie Chen, Sarangarajan Parthasarathy, William Gale, Shuangyu Chang, Michael Zeng

The context information is captured by the hidden states of the LSTM-LM across utterances and can be used to guide the first-pass search effectively.
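
As a rough illustration of this idea, the sketch below carries LSTM-LM hidden state across utterances so that earlier conversation context conditions scoring of the next utterance; the toy PyTorch model, its sizes, and the detach policy are assumptions for the example, not the authors' code.

```python
# Illustrative sketch of carrying LSTM-LM hidden state across utterances.
import torch
import torch.nn as nn


class LSTMLM(nn.Module):
    def __init__(self, vocab_size: int = 1000, emb_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, vocab_size)

    def forward(self, tokens, state=None):
        # tokens: (batch, time); state: carried (h, c) from previous utterances
        out, state = self.lstm(self.embed(tokens), state)
        return self.proj(out), state


if __name__ == "__main__":
    torch.manual_seed(0)
    lm = LSTMLM()
    state = None  # no history before the first utterance
    session = [torch.randint(0, 1000, (1, 12)),   # utterance 1 (token ids)
               torch.randint(0, 1000, (1, 8))]    # utterance 2
    for utt in session:
        logits, state = lm(utt, state)            # score with carried context
        state = tuple(s.detach() for s in state)  # keep history, drop the graph
        print(logits.shape)
```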

speech-recognition • Speech Recognition

Long-span language modeling for speech recognition

no code implementations • 11 Nov 2019 • Sarangarajan Parthasarathy, William Gale, Xie Chen, George Polovets, Shuangyu Chang

We conduct language modeling and speech recognition experiments on the publicly available LibriSpeech corpus.

Language Modelling • Re-Ranking +3