no code implementations • 27 Mar 2024 • Rricha Jalota, Lyan Verwimp, Markus Nussbaum-Thom, Amr Mousa, Arturo Argueta, Youssef Oualil
Based on this insight and leveraging the design of our production models, we introduce a new architecture for a World English NNLM that meets the accuracy, latency, and memory constraints of our single-dialect models.
no code implementations • 23 May 2023 • Jan Silovsky, Liuhui Deng, Arturo Argueta, Tresi Arvizo, Roger Hsiao, Sasha Kuznietsov, Yiu-Chang Lin, Xiaoqiang Xiao, Yuanyuan Zhang
Voice technology has recently become ubiquitous.
no code implementations • 18 Jul 2022 • MingBin Xu, Congzheng Song, Ye Tian, Neha Agrawal, Filip Granqvist, Rogier Van Dalen, Xiao Zhang, Arturo Argueta, Shiyi Han, Yaqiao Deng, Leo Liu, Anmol Walia, Alex Jin
Our goal is to train a large neural network language model (NNLM) on compute-constrained devices while preserving privacy using FL and DP.
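The combination of federated learning (FL) and differential privacy (DP) described above is commonly realized by clipping each client's model update to a fixed L2 norm and adding calibrated Gaussian noise before averaging. The following is a minimal sketch of that pattern, not the paper's implementation; the function names, the toy update vectors, and the parameter values are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_update(update, clip_norm):
    """Scale a client's update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / max(norm, 1e-12))

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=0.5):
    """One DP federated-averaging round: clip each client update,
    sum, add Gaussian noise scaled to the clip norm, then average."""
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)

# Hypothetical client updates (e.g., local NNLM gradient deltas).
updates = [rng.normal(size=4) for _ in range(8)]
avg = dp_federated_average(updates)
```

Clipping bounds each client's influence on the aggregate, which is what lets the added noise provide a formal DP guarantee; the noise multiplier trades privacy against the accuracy of the averaged model.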
no code implementations • ACL 2019 • Arturo Argueta, David Chiang
Operations on sparse structures are common at the input and output layers of natural language models, because these models operate on sequences over discrete alphabets.
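A concrete instance of such a sparse input-layer operation: multiplying a one-hot token vector by a dense embedding matrix is mathematically a dense matrix product, but because the input is sparse it reduces to a row gather. This is a generic illustrative sketch (toy sizes, hypothetical names), not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)
vocab_size, dim = 10, 4
embedding = rng.normal(size=(vocab_size, dim))

# A token sequence over a discrete alphabet, encoded as integer ids.
token_ids = np.array([3, 7, 3])

# Dense view: one-hot rows times the embedding matrix,
# costing O(len * vocab * dim) multiplications.
one_hot = np.eye(vocab_size)[token_ids]
dense_lookup = one_hot @ embedding

# Sparse view: the same result as an O(len * dim) row gather,
# since each one-hot row selects exactly one embedding row.
sparse_lookup = embedding[token_ids]
```

The two results are identical; exploiting the sparsity simply removes the multiplications by zero, which is why input and output layers over large vocabularies are handled with gather/scatter rather than dense products.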
no code implementations • ACL 2018 • Arturo Argueta, David Chiang
Weighted finite-state transducers (FSTs) are frequently used in language processing to handle tasks such as part-of-speech tagging and speech recognition.
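To make the weighted-FST setting concrete, here is a toy transducer in the tropical semiring (path weights add, alternatives take the minimum, as with negative log-probabilities) that maps words to part-of-speech tags via a Viterbi-style search. The automaton, its weights, and the function names are invented for illustration and are not from the paper.

```python
# Toy weighted FST. Transitions: state -> input word ->
# list of (next_state, output tag, arc weight).
fst = {
    0: {"the": [(1, "DET", 0.1)]},
    1: {"can": [(2, "NOUN", 1.2), (2, "AUX", 0.4)],
        "run": [(2, "VERB", 0.3)]},
    2: {"run": [(3, "VERB", 0.5), (3, "NOUN", 1.0)]},
}
final_states = {2, 3}

def best_path(fst, final_states, words, start=0):
    """Return (weight, output tags) of the min-weight accepting path,
    or None if the FST rejects the input (tropical semiring: min, +)."""
    beam = {start: (0.0, [])}           # state -> (weight, outputs so far)
    for word in words:
        nxt = {}
        for state, (w, outs) in beam.items():
            for (q, tag, arc_w) in fst.get(state, {}).get(word, []):
                cand = (w + arc_w, outs + [tag])
                if q not in nxt or cand[0] < nxt[q][0]:
                    nxt[q] = cand       # keep only the cheapest path per state
        beam = nxt
    finals = [v for q, v in beam.items() if q in final_states]
    return min(finals, key=lambda v: v[0]) if finals else None
```

On the input `["the", "can", "run"]` this picks the AUX reading of "can" because its arc is cheaper, yielding the tag sequence DET AUX VERB.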
no code implementations • EACL 2017 • Arturo Argueta, David Chiang
Weighted finite automata and transducers (including hidden Markov models and conditional random fields) are widely used in natural language processing (NLP) to perform tasks such as morphological analysis, part-of-speech tagging, chunking, named entity recognition, speech recognition, and others.
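As a small example of the weighted-automata view of an HMM: the total probability an HMM assigns to an observation sequence is the weight of that string in the probability semiring (sum over paths, product along a path), computed by the forward algorithm. The toy parameters below are hypothetical and chosen only so the sketch is self-contained; this is not the paper's model.

```python
import numpy as np

# Toy 2-state, 3-symbol HMM as a weighted automaton.
pi = np.array([0.6, 0.4])              # initial state probabilities
A = np.array([[0.7, 0.3],              # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],         # emission probabilities per state
              [0.1, 0.3, 0.6]])

def forward(obs):
    """Forward algorithm: probability of the observation sequence,
    summing over all hidden-state paths (probability semiring: +, *)."""
    alpha = pi * B[:, obs[0]]          # alpha[q] = P(obs[0], state q)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
    return alpha.sum()

p = forward([0, 1, 2])
```

Because the rows of `A` and `B` and the vector `pi` are proper distributions, the weights of all strings of a given length sum to one, which is a handy sanity check on any implementation.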