no code implementations • 20 Apr 2023 • Minghui Zhang, Alex Sokolov, Weixin Cai, Si-Qing Chen
Natural language generation (NLG) is one of the most impactful fields in NLP, and recent years have witnessed its evolution brought about by large language models (LLMs).
no code implementations • 17 Apr 2023 • Adrian de Wynter, Xun Wang, Alex Sokolov, Qilong Gu, Si-Qing Chen
We present an empirical evaluation of various outputs generated by nine of the most widely available large language models (LLMs).
no code implementations • 12 Feb 2022 • Bolaji Yusuf, Ankur Gandhe, Alex Sokolov
There has been a recent focus on training E2E ASR models that get the performance benefits of external text data without incurring the extra cost of evaluating an external language model at inference time.
no code implementations • 25 Jun 2020 • Alex Sokolov, Denis Filimonov
Training a spoken language understanding system, such as the one in Alexa, typically requires a large human-annotated corpus of data.
no code implementations • 25 Jun 2020 • Alex Sokolov, Tracy Rohlin, Ariya Rastrow
Grapheme-to-phoneme (G2P) models are a key component in Automatic Speech Recognition (ASR) systems, such as the ASR system in Alexa, as they are used to generate pronunciations for out-of-vocabulary words that do not exist in the pronunciation lexicons (mappings like "e c h o" to "E k oU").
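The lexicon-with-G2P-fallback setup described above can be sketched minimally as follows; the lexicon entries and function names here are illustrative, not taken from any actual ASR system:

```python
# Minimal sketch of pronunciation lookup with a G2P fallback for
# out-of-vocabulary (OOV) words. Entries and names are hypothetical.
PRONUNCIATION_LEXICON = {
    "echo": ["E", "k", "oU"],  # the "e c h o" -> "E k oU" mapping above
}

def pronounce(word, g2p_model=None):
    """Return phonemes from the lexicon; fall back to a G2P model for OOVs."""
    word = word.lower()
    if word in PRONUNCIATION_LEXICON:
        return PRONUNCIATION_LEXICON[word]
    if g2p_model is not None:
        # A trained G2P model predicts a phoneme sequence for unseen words.
        return g2p_model(word)
    raise KeyError(f"OOV word {word!r} and no G2P model supplied")
```

In a real ASR pipeline the fallback would be a trained sequence model rather than a rule, but the control flow — lexicon first, G2P only for OOV words — is the same.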