no code implementations • 14 Apr 2024 • Taehyeon Kim, Ananda Theertha Suresh, Kishore Papineni, Michael Riley, Sanjiv Kumar, Adrian Benton
Despite the remarkable strides made by autoregressive language models, their potential is often hampered by the slow inference speeds inherent in sequential token generation.
no code implementations • 16 Dec 2023 • ChenGuang Liu, Jianjun Chen, Yunfei Chen, Ryan Payton, Michael Riley, Shuang-Hua Yang
The performance of cooperative perception is investigated in different system settings.
no code implementations • 17 Nov 2023 • ChenGuang Liu, Yunfei Chen, Jianjun Chen, Ryan Payton, Michael Riley, Shuang-Hua Yang
A new late fusion scheme is proposed to leverage the robustness of intermediate features.
no code implementations • 13 Jun 2023 • Tongzhou Chen, Cyril Allauzen, Yinghui Huang, Daniel Park, David Rybach, W. Ronny Huang, Rodrigo Cabrera, Kartik Audhkhasi, Bhuvana Ramabhadran, Pedro J. Moreno, Michael Riley
In this work, we study the impact of Large-scale Language Models (LLM) on Automated Speech Recognition (ASR) of YouTube videos, which we use as a source for long-form ASR.
1 code implementation • 25 Apr 2023 • Ke Wu, Ehsan Variani, Tom Bagby, Michael Riley
We introduce LAST, a LAttice-based Speech Transducer library in JAX.
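The library defines its own APIs; purely as an illustration of the core quantity a lattice-based transducer computes, here is a minimal JAX-style forward pass that sums log-probability over all monotonic alignments in a time-by-label lattice (the function and array names are ours, not LAST's):

```python
import jax.numpy as jnp

def lattice_log_prob(blank_lp, label_lp):
    """Sum log-probability over all monotonic alignments in a (T, U) lattice.

    blank_lp: shape (T, U + 1); blank_lp[t, u] = log P(blank at frame t after u labels)
    label_lp: shape (T + 1, U); label_lp[t, u] = log P(label y_{u+1} at frame t)
    Returns log P(y | x). Illustrative sketch only -- not the LAST library's API.
    """
    T, U = blank_lp.shape[0], label_lp.shape[1]
    # alpha[t, u] = log-probability of reaching lattice node (t, u)
    alpha = jnp.full((T + 1, U + 1), -jnp.inf).at[0, 0].set(0.0)
    for t in range(T + 1):
        for u in range(U + 1):
            if t == 0 and u == 0:
                continue
            from_blank = alpha[t - 1, u] + blank_lp[t - 1, u] if t > 0 else -jnp.inf
            from_label = alpha[t, u - 1] + label_lp[t, u - 1] if u > 0 else -jnp.inf
            alpha = alpha.at[t, u].set(jnp.logaddexp(from_blank, from_label))
    return alpha[T, U]

# Example: T=3 frames, U=2 labels, uniform 0.5 probabilities everywhere.
blank_lp = jnp.log(jnp.full((3, 3), 0.5))
label_lp = jnp.log(jnp.full((4, 2), 0.5))
print(lattice_log_prob(blank_lp, label_lp))  # sums over all 10 alignments
```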
no code implementations • 22 Dec 2022 • Ehsan Variani, Ke Wu, David Rybach, Cyril Allauzen, Michael Riley
Existing training criteria in automatic speech recognition (ASR) permit the model to freely explore more than one time alignment between the feature and label sequences (a generic example of such a criterion is sketched below).
Automatic Speech Recognition (ASR) • +1
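As a concrete, generic example of a criterion that sums over alignments (our notation; CTC- and transducer-style losses both take this form), the model marginalizes over an alignment set rather than committing to a single alignment:

$$
P(\mathbf{y} \mid \mathbf{x}) \;=\; \sum_{\mathbf{a} \in \mathcal{B}^{-1}(\mathbf{y})} P(\mathbf{a} \mid \mathbf{x}),
$$

where $\mathcal{B}$ collapses a frame-level alignment $\mathbf{a}$ (labels interleaved with blanks) to the label sequence $\mathbf{y}$; the model is free to concentrate mass on any alignment in $\mathcal{B}^{-1}(\mathbf{y})$.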
1 code implementation • 26 May 2022 • Ehsan Variani, Ke Wu, Michael Riley, David Rybach, Matt Shannon, Cyril Allauzen
We introduce the Globally Normalized Autoregressive Transducer (GNAT) for addressing the label bias problem in streaming speech recognition.
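For context, one standard way to state the distinction (our notation, not necessarily the paper's): a locally normalized transducer factors the sequence probability into per-step softmaxes, while a globally normalized model scores whole sequences against a single partition function:

$$
P_{\text{local}}(\mathbf{y}\mid\mathbf{x}) = \prod_{i} \frac{\exp s(y_i \mid \mathbf{y}_{<i}, \mathbf{x})}{\sum_{y'} \exp s(y' \mid \mathbf{y}_{<i}, \mathbf{x})},
\qquad
P_{\text{global}}(\mathbf{y}\mid\mathbf{x}) = \frac{\exp s(\mathbf{x}, \mathbf{y})}{\sum_{\mathbf{y}'} \exp s(\mathbf{x}, \mathbf{y}')}.
$$

Because each local factor must normalize to one at every step, steps with low-entropy predictive distributions can dominate the score regardless of the acoustics; global normalization removes this constraint, which is the label bias issue being addressed.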
no code implementations • NeurIPS 2020 • Yuhan Liu, Ananda Theertha Suresh, Felix Yu, Sanjiv Kumar, Michael Riley
If each user has $m$ samples, we show that straightforward applications of the Laplace or Gaussian mechanism require the number of users to be $\mathcal{O}(k/(m\alpha^2) + k/(\epsilon\alpha))$ to achieve an $\ell_1$ distance of $\alpha$ between the true and estimated distributions, with the privacy-induced penalty $k/(\epsilon\alpha)$ independent of the number of samples per user $m$.
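A minimal sketch of the "straightforward" Laplace baseline being referenced, assuming a discrete domain of size $k$ and users with $m$ samples each (the calibration and names are ours for illustration; this is not the improved estimator the paper proposes):

```python
import numpy as np

def laplace_user_level_estimate(user_samples, k, epsilon, rng=None):
    """Naive user-level DP histogram estimate via the Laplace mechanism.

    user_samples: list of arrays, each holding m draws from {0, ..., k-1}.
    Replacing one user can shift the aggregate count vector by up to 2*m in
    L1, so each coordinate gets Laplace noise of scale 2*m / epsilon.
    Illustrative sketch only, not the paper's estimator.
    """
    rng = np.random.default_rng() if rng is None else rng
    m = len(user_samples[0])
    counts = np.zeros(k)
    for s in user_samples:
        counts += np.bincount(s, minlength=k)
    noisy = counts + rng.laplace(scale=2.0 * m / epsilon, size=k)
    est = np.clip(noisy, 0.0, None)
    total = est.sum()
    return est / total if total > 0 else np.full(k, 1.0 / k)
```

The $k/(\epsilon\alpha)$ term in the bound reflects the fact that this per-coordinate noise does not shrink as $m$ grows.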
no code implementations • 12 Mar 2020 • Ehsan Variani, David Rybach, Cyril Allauzen, Michael Riley
This paper proposes and evaluates the hybrid autoregressive transducer (HAT) model, a time-synchronous encoder-decoder model that preserves the modularity of conventional automatic speech recognition systems (the factorization is sketched below).
Automatic Speech Recognition (ASR) • +3
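A sketch of the factorization behind HAT-style models, in our notation: the blank/continue decision at each time-label lattice node $(t, u)$ is modeled as a separate Bernoulli, leaving the label distribution properly normalized,

$$
P(\langle\text{blank}\rangle \mid t, u) = b_{t,u},
\qquad
P(y \mid t, u) = (1 - b_{t,u})\,\tilde{P}(y \mid t, u),
$$

where $\tilde{P}(\cdot \mid t, u)$ is a distribution over the label vocabulary. Separating duration (blank) from label prediction is what makes it possible to estimate, and discount at decoding time, the model's internal language-model score when an external language model is added.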
no code implementations • CONLL 2019 • Mingqing Chen, Ananda Theertha Suresh, Rajiv Mathews, Adeline Wong, Cyril Allauzen, Françoise Beaufays, Michael Riley
The n-gram language models trained with federated learning are compared to n-grams trained with traditional server-based algorithms using A/B tests on tens of millions of users of a virtual keyboard.
no code implementations • WS 2019 • Marco Cognetta, Cyril Allauzen, Michael Riley
Indeed, a delicate balance between comprehensiveness, speed, and memory must be struck to conform to device requirements while providing a good user experience. In this paper, we describe a compression scheme for lexicons represented as finite-state transducers.
no code implementations • WS 2019 • Ananda Theertha Suresh, Brian Roark, Michael Riley, Vlad Schogol
Weighted finite automata (WFA) are often used to represent probabilistic models, such as n-gram language models, since they are efficient for recognition tasks in time and space.
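As a toy illustration of why WFAs suit this use (entirely illustrative; the numbers and the dict-based representation are ours, not tied to any particular WFA toolkit), a backoff bigram model can be read off a small automaton whose states are histories and whose failure arcs implement backoff:

```python
# Toy weighted automaton for a bigram model with backoff.
# States are histories; arc weights are -log probabilities.
ARCS = {                       # state -> {word: (cost, next_state)}
    "<s>": {"the": (0.7, "the")},
    "the": {"cat": (1.2, "cat")},
    "cat": {"</s>": (0.9, "</s>")},
    "":    {"the": (2.0, "the"), "cat": (2.5, "cat"), "</s>": (1.5, "</s>")},
}
BACKOFF = {"<s>": (1.0, ""), "the": (0.8, ""), "cat": (0.6, "")}  # state -> (cost, backoff state)

def score(words, state="<s>"):
    """Total -log probability of a word sequence under the toy automaton."""
    total = 0.0
    for w in words:
        while w not in ARCS.get(state, {}):
            bo_cost, state = BACKOFF[state]   # follow the failure/backoff arc
            total += bo_cost
        cost, state = ARCS[state][w]
        total += cost
    return total

print(score(["the", "cat", "</s>"]))  # -log probability of "the cat </s>"
```

Scoring a sentence is then a single walk through the automaton, linear in sentence length, which is the time- and space-efficiency the excerpt refers to.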
no code implementations • WS 2019 • Lawrence Wolf-Sonkin, Vlad Schogol, Brian Roark, Michael Riley
The use of the Latin script for text entry of South Asian languages is common, even though there is no standard orthography for these languages in the script.
no code implementations • CL (ACL) 2021 • Ananda Theertha Suresh, Brian Roark, Michael Riley, Vlad Schogol
Weighted finite automata (WFA) are often used to represent probabilistic models, such as $n$-gram language models, since they are efficient for recognition tasks in time and space.
no code implementations • 13 Apr 2017 • Tom Ouyang, David Rybach, Françoise Beaufays, Michael Riley
We describe the general framework of what we call, for short, the keyboard "FST decoder", as well as the implementation details that are new compared to a speech FST decoder.
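A toy sketch of the kind of combination such a decoder performs, assuming a per-tap spatial model and a word-level language-model cost (all names, probabilities, and the brute-force search below are invented for illustration; a real FST decoder composes transducers rather than enumerating the lexicon):

```python
import math

# Per-tap "spatial" model: P(intended character | key actually hit).
SPATIAL = {"q": {"q": 0.7, "w": 0.2, "a": 0.1},
           "w": {"w": 0.6, "q": 0.2, "e": 0.2},
           "e": {"e": 0.8, "w": 0.1, "r": 0.1}}
LEXICON = {"we": 2.0, "qe": 9.0}   # word -> -log prior (language-model cost)

def decode(taps):
    """Return the lowest-cost lexicon word explaining the tap sequence."""
    best_word, best_cost = None, math.inf
    for word, lm_cost in LEXICON.items():
        if len(word) != len(taps):
            continue
        cost = lm_cost
        for tap, ch in zip(taps, word):
            p = SPATIAL.get(tap, {}).get(ch, 0.0)
            if p == 0.0:
                cost = math.inf
                break
            cost += -math.log(p)
        if cost < best_cost:
            best_word, best_cost = word, cost
    return best_word

print(decode(["q", "e"]))  # "we": the spatial model allows the q->w confusion
```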