no code implementations • EMNLP 2020 • Andrew Drozdov, Subendhu Rongali, Yi-Pei Chen, Tim O'Gorman, Mohit Iyyer, Andrew McCallum
The deep inside-outside recursive autoencoder (DIORA; Drozdov et al. 2019) is a self-supervised neural model that learns to induce syntactic tree structures for input sentences *without access to labeled training data*.
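To make the chart-based computation concrete, here is a minimal sketch of an *inside pass* over a sentence in the spirit of DIORA, not the authors' implementation: the composition function, the uniform averaging over split points, and all dimensions are simplifying assumptions (DIORA instead learns a soft weighting over splits and pairs this with an outside pass).

```python
# Minimal inside-pass sketch for a chart-based encoder (illustrative only).
import numpy as np

def compose(left, right, W):
    # Compose two child span vectors into a parent vector (assumed MLP).
    return np.tanh(W @ np.concatenate([left, right]))

def inside_pass(leaf_vecs, W):
    n, d = leaf_vecs.shape
    chart = {}                              # (i, j) -> vector for span [i, j)
    for i in range(n):
        chart[(i, i + 1)] = leaf_vecs[i]
    for length in range(2, n + 1):          # build longer spans bottom-up
        for i in range(0, n - length + 1):
            j = i + length
            # Average over all binary splits; DIORA uses a learned soft
            # weighting over splits instead (an assumption here).
            parents = [compose(chart[(i, k)], chart[(k, j)], W)
                       for k in range(i + 1, j)]
            chart[(i, j)] = np.mean(parents, axis=0)
    return chart[(0, n)]                    # vector for the full sentence

rng = np.random.default_rng(0)
d = 8
W = rng.normal(size=(d, 2 * d))
sentence = rng.normal(size=(5, d))          # 5 word embeddings (hypothetical)
print(inside_pass(sentence, W).shape)       # (8,)
```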
no code implementations • 29 Apr 2022 • Subendhu Rongali, Konstantine Arkoudas, Melanie Rubino, Wael Hamza
Semantic parsing is an important NLP problem, particularly for voice assistants such as Alexa and Google Assistant.
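As a concrete illustration of the task (entirely hypothetical, not an example from the paper), a voice-assistant semantic parser maps an utterance to a structured frame; the intent and slot names below are illustrative assumptions.

```python
# What a task-oriented semantic parser produces (illustrative frame).
utterance = "play the latest album by the beatles"

parse = {
    "intent": "PlayMusic",          # hypothetical intent label
    "slots": {
        "media_type": "album",
        "sort": "latest",
        "artist": "the beatles",
    },
}
```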
1 code implementation • EMNLP 2021 • Zhiyang Xu, Andrew Drozdov, Jay Yoon Lee, Tim O'Gorman, Subendhu Rongali, Dylan Finkbeiner, Shilpa Suresh, Mohit Iyyer, Andrew McCallum
For over thirty years, researchers have developed and analyzed methods for latent tree induction as an approach for unsupervised syntactic parsing.
no code implementations • 15 Dec 2020 • Subendhu Rongali, Beiye Liu, Liwei Cai, Konstantine Arkoudas, Chengwei Su, Wael Hamza
Since our model can process both speech and text input sequences and learns to predict a target sequence, it also enables zero-shot E2E SLU: training on text-hypothesis data alone (without any speech) from a new domain, as in the sketch below.
Automatic Speech Recognition • Natural Language Understanding • +2
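Here is a minimal sketch (not the authors' code) of that idea: one seq2seq model whose encoder/decoder stack is shared between a speech front end and a text front end, so it can be trained on text-only data from a new domain and still run end-to-end on speech at test time. All module names, sizes, and the routing scheme below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultimodalSLU(nn.Module):
    def __init__(self, vocab_size=1000, d_model=256, n_mels=80):
        super().__init__()
        self.speech_proj = nn.Linear(n_mels, d_model)   # speech features -> shared space
        self.text_embed = nn.Embedding(vocab_size, d_model)
        enc = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=2)
        dec = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec, num_layers=2)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src, tgt_tokens, modality):
        # Modality-specific front end, then the *shared* encoder/decoder.
        if modality == "speech":
            x = self.speech_proj(src)        # (B, T, n_mels) -> (B, T, d)
        else:
            x = self.text_embed(src)         # (B, T) token ids -> (B, T, d)
        memory = self.encoder(x)
        y = self.text_embed(tgt_tokens)
        return self.out(self.decoder(y, memory))

model = MultimodalSLU()
speech = torch.randn(2, 50, 80)              # fake log-mel features
text = torch.randint(0, 1000, (2, 12))       # fake ASR hypothesis tokens
tgt = torch.randint(0, 1000, (2, 8))         # fake target parse tokens
print(model(speech, tgt, "speech").shape)    # torch.Size([2, 8, 1000])
print(model(text, tgt, "text").shape)        # torch.Size([2, 8, 1000])
```

Because both paths end in the same decoder, parameters learned from text-hypothesis pairs transfer directly to the speech path, which is what makes the zero-shot setting possible.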
no code implementations • Findings of the Association for Computational Linguistics 2020 • Prafull Prakash, Saurabh Kumar Shashidhar, Wenlong Zhao, Subendhu Rongali, Haidar Khan, Michael Kayser
The current state-of-the-art task-oriented semantic parsing models use BERT or RoBERTa as pretrained encoders; these models have huge memory footprints.
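A quick way to see the footprint the snippet refers to (illustrative, not from the paper) is to count the parameters of a BERT-base encoder with the Hugging Face transformers library:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters")       # ~110M
print(f"~{n_params * 4 / 1e6:.0f} MB at fp32")   # 4 bytes per parameter
```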
no code implementations • 5 Apr 2020 • Subendhu Rongali, Abhyuday Jagannatha, Bhanu Pratap Singh Rawat, Hong Yu
Pre-trained language models (LMs) such as BERT, DistilBERT, and RoBERTa can be tuned for different domains (domain-tuning) by continuing the pre-training phase on a new target-domain corpus.
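A minimal sketch of domain-tuning as described, using the Hugging Face Trainer to continue masked-LM pre-training; the corpus path, hyperparameters, and dataset handling are illustrative assumptions, not the paper's setup.

```python
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical target-domain corpus, one document per line.
corpus = load_dataset("text", data_files={"train": "domain_corpus.txt"})
tokenized = corpus["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-tuned-bert",
                           per_device_train_batch_size=16,
                           num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer,
                                                  mlm_probability=0.15),
)
trainer.train()  # continues MLM pre-training on the new domain
```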
no code implementations • 30 Jan 2020 • Subendhu Rongali, Luca Soldaini, Emilio Monti, Wael Hamza
Virtual assistants such as Amazon Alexa, Apple Siri, and Google Assistant often rely on a semantic parsing component to understand which action(s) to execute for an utterance spoken by their users.
no code implementations • 1 Dec 2015 • Amrita Saha, Sathish Indurthi, Shantanu Godbole, Subendhu Rongali, Vikas C. Raykar
We describe the problem of aggregating the label predictions of diverse classifiers using a class taxonomy.
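An illustrative sketch of the problem setting (the taxonomy, the voting rule, and the threshold are assumptions of mine, not the paper's method): credit each classifier's predicted label to all of its ancestors in the taxonomy, then pick the deepest node supported by a majority.

```python
parent = {                      # hypothetical taxonomy: child -> parent
    "sports": "news", "politics": "news",
    "soccer": "sports", "tennis": "sports",
}

def ancestors(label):
    chain = [label]
    while chain[-1] in parent:
        chain.append(parent[chain[-1]])
    return chain                # the label plus all ancestors up to the root

def aggregate(predictions):
    votes = {}
    for label in predictions:   # one predicted label per classifier
        for node in ancestors(label):
            votes[node] = votes.get(node, 0) + 1
    majority = len(predictions) / 2
    winners = [n for n, v in votes.items() if v > majority]
    # Deepest node that a majority of classifiers agree on.
    return max(winners, key=lambda n: len(ancestors(n)))

print(aggregate(["soccer", "tennis", "soccer"]))    # soccer
print(aggregate(["soccer", "tennis", "politics"]))  # sports
```

The second call shows the appeal of the taxonomy: the classifiers disagree at the leaf level, yet a confident coarser-grained answer ("sports") can still be recovered.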