Search Results for author: Subendhu Rongali

Found 9 papers, 2 papers with code

Unsupervised Parsing with S-DIORA: Single Tree Encoding for Deep Inside-Outside Recursive Autoencoders

no code implementations · EMNLP 2020 · Andrew Drozdov, Subendhu Rongali, Yi-Pei Chen, Tim O'Gorman, Mohit Iyyer, Andrew McCallum

The deep inside-outside recursive autoencoder (DIORA; Drozdov et al. 2019) is a self-supervised neural model that learns to induce syntactic tree structures for input sentences *without access to labeled training data*.
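
The following is a toy numpy sketch, not the paper's model, of the single-tree idea that distinguishes S-DIORA from the original DIORA: during the inside pass, each span keeps only the vector and score of its single best split instead of a soft mixture over all splits. The composition function, scoring function, and random embeddings are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16

def compose(left, right, W):
    # Placeholder composition: combine two child span vectors into a parent vector.
    return np.tanh(W @ np.concatenate([left, right]))

def span_score(vec, u):
    # Placeholder compatibility score for a composed span vector.
    return float(u @ vec)

def inside_pass(token_vecs, W, u):
    # Hard (single-tree) inside pass: each chart cell stores the vector and
    # cumulative score of its highest-scoring split only.
    n = len(token_vecs)
    chart = {(i, i + 1): (token_vecs[i], 0.0) for i in range(n)}
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            best = None
            for k in range(i + 1, j):                    # candidate split points
                lv, ls = chart[(i, k)]
                rv, rs = chart[(k, j)]
                v = compose(lv, rv, W)
                s = ls + rs + span_score(v, u)
                if best is None or s > best[1]:
                    best = (v, s)
            chart[(i, j)] = best
    return chart

tokens = [rng.standard_normal(DIM) for _ in range(5)]    # stand-in word embeddings
W = 0.1 * rng.standard_normal((DIM, 2 * DIM))
u = 0.1 * rng.standard_normal(DIM)
chart = inside_pass(tokens, W, u)
print(chart[(0, 5)][1])                                  # score of the full-sentence span
```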

Constituency Grammar Induction · Sentence

Low-Resource Compositional Semantic Parsing with Concept Pretraining

no code implementations · 24 Jan 2023 · Subendhu Rongali, Mukund Sridhar, Haidar Khan, Konstantine Arkoudas, Wael Hamza

In this work, we present an architecture to perform such domain adaptation automatically, with only a small amount of metadata about the new domain and without any new training data (zero-shot) or with very few examples (few-shot).
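
As a loose illustration of the zero-shot setting only (not the paper's concept-pretraining architecture), the sketch below scores an utterance against plain-text descriptions of a new domain's intents, so a label can be predicted from domain metadata alone. The encoder name, intent names, and descriptions are made-up placeholders, and the example relies on the sentence-transformers package.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")          # placeholder encoder

# Hypothetical metadata for an unseen domain: intent names and short descriptions.
new_domain_intents = {
    "BookFlight": "reserve an airplane ticket between two cities",
    "CancelReservation": "cancel an existing booking",
    "CheckFlightStatus": "look up whether a flight is delayed or on time",
}

utterance = "is my flight to denver still on time"

names = list(new_domain_intents)
desc_emb = model.encode(list(new_domain_intents.values()), convert_to_tensor=True)
utt_emb = model.encode(utterance, convert_to_tensor=True)

# Predict the intent whose description is most similar to the utterance.
scores = util.cos_sim(utt_emb, desc_emb)[0]
print(names[int(scores.argmax())])                        # expected: CheckFlightStatus
```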

Domain Adaptation · Semantic Parsing

Training Naturalized Semantic Parsers with Very Little Data

1 code implementation · 29 Apr 2022 · Subendhu Rongali, Konstantine Arkoudas, Melanie Rubino, Wael Hamza

Semantic parsing is an important NLP problem, particularly for voice assistants such as Alexa and Google Assistant.

Semantic Parsing

Exploring Transfer Learning For End-to-End Spoken Language Understanding

no code implementations · 15 Dec 2020 · Subendhu Rongali, Beiye Liu, Liwei Cai, Konstantine Arkoudas, Chengwei Su, Wael Hamza

Since our model can process both speech and text input sequences and learn to predict a target sequence, it also allows us to do zero-shot E2E SLU by training on only text-hypothesis data (without any speech) from a new domain.
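
A minimal PyTorch sketch of that idea, following the paper's architecture only loosely: a speech encoder and a text encoder feed the same decoder, so the decoder can also be trained on text-only (ASR-hypothesis) data from a new domain. Layer sizes, module choices, and the fake inputs are placeholders.

```python
import torch
import torch.nn as nn

HID, VOCAB, N_FEATS = 256, 1000, 80

class SharedDecoderSLU(nn.Module):
    def __init__(self):
        super().__init__()
        # Speech branch: acoustic frames -> shared hidden state.
        self.speech_enc = nn.GRU(N_FEATS, HID, batch_first=True)
        # Text branch: token ids (e.g. ASR hypotheses) -> the same hidden space.
        self.emb = nn.Embedding(VOCAB, HID)
        self.text_enc = nn.GRU(HID, HID, batch_first=True)
        # One decoder predicts the target sequence regardless of input modality.
        self.decoder = nn.GRU(HID, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, tgt_ids, speech=None, text_ids=None):
        if speech is not None:
            _, h = self.speech_enc(speech)
        else:
            _, h = self.text_enc(self.emb(text_ids))
        dec_out, _ = self.decoder(self.emb(tgt_ids), h)   # teacher forcing
        return self.out(dec_out)

model = SharedDecoderSLU()
speech = torch.randn(2, 50, N_FEATS)                      # fake acoustic features
text_ids = torch.randint(0, VOCAB, (2, 12))               # fake ASR-hypothesis tokens
tgt_ids = torch.randint(0, VOCAB, (2, 7))                 # fake target tokens
print(model(tgt_ids, speech=speech).shape)                # torch.Size([2, 7, 1000])
print(model(tgt_ids, text_ids=text_ids).shape)            # text-only path, same decoder
```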

Automatic Speech Recognition (ASR) +4

Continual Domain-Tuning for Pretrained Language Models

no code implementations · 5 Apr 2020 · Subendhu Rongali, Abhyuday Jagannatha, Bhanu Pratap Singh Rawat, Hong Yu

Pre-trained language models (LMs) such as BERT, DistilBERT, and RoBERTa can be tuned for different domains (domain-tuning) by continuing the pre-training phase on a new target-domain corpus.
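
Domain-tuning in this sense is continued masked-language-model pre-training; the sketch below shows a standard Hugging Face setup for it, with the base model name, corpus path, and hyperparameters as placeholders rather than the paper's exact configuration.

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

name = "roberta-base"                                     # placeholder base LM
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)

# Plain-text corpus from the target domain, one document per line (placeholder path).
corpus = load_dataset("text", data_files={"train": "target_domain.txt"})
corpus = corpus.map(lambda b: tok(b["text"], truncation=True, max_length=128),
                    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-tuned-lm",
                           num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=corpus["train"],
    # Continue the same masked-LM objective on the new corpus.
    data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
)
trainer.train()
```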

Continual Learning

Don't Parse, Generate! A Sequence to Sequence Architecture for Task-Oriented Semantic Parsing

no code implementations · 30 Jan 2020 · Subendhu Rongali, Luca Soldaini, Emilio Monti, Wael Hamza

Virtual assistants such as Amazon Alexa, Apple Siri, and Google Assistant often rely on a semantic parsing component to understand which action(s) to execute for an utterance spoken by their users.
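
The paper frames parsing as sequence generation. A minimal sketch of that framing, omitting the paper's pointer mechanism over source tokens and using an off-the-shelf BART model as a stand-in, fine-tunes on a (utterance, linearized parse) pair; the utterance and the TOP-style target annotation below are illustrative.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

utterance = "what's the weather in boston tomorrow"
target = "[IN:GET_WEATHER [SL:LOCATION boston ] [SL:DATE_TIME tomorrow ] ]"

# One teacher-forced training step on the (utterance, linearized parse) pair.
inputs = tok(utterance, return_tensors="pt")
labels = tok(target, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss
loss.backward()

# At inference time the parse is produced by ordinary sequence decoding.
pred = model.generate(**inputs, max_length=64)
print(tok.decode(pred[0], skip_special_tokens=True))
```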

Semantic Parsing · slot-filling +1
