Search Results for author: Shrey Desai

Found 14 papers, 6 papers with code

Contemporary NLP Modeling in Six Comprehensive Programming Assignments

no code implementations NAACL (TeachingNLP) 2021 Greg Durrett, Jifan Chen, Shrey Desai, Tanya Goyal, Lucas Kabela, Yasumasa Onoe, Jiacheng Xu

We present a series of programming assignments, adaptable to a range of experience levels from advanced undergraduate to PhD, to teach students design and implementation of modern NLP systems.

Retrieve-and-Fill for Scenario-based Task-Oriented Semantic Parsing

no code implementations 2 Feb 2022 Akshat Shrivastava, Shrey Desai, Anchit Gupta, Ali Elkahky, Aleksandr Livshits, Alexander Zotov, Ahmed Aly

We tackle this problem by introducing scenario-based semantic parsing: a variant of the original task which first requires disambiguating an utterance's "scenario" (an intent-slot template with variable leaf spans) before generating its frame, complete with ontology and utterance tokens.
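The two-step idea, disambiguate a scenario, then fill its variable leaf spans from the utterance, can be sketched with a toy example. Everything below (the TOP-style bracket tokens, the `LEAF` placeholder, the function name) is illustrative, not the paper's code:

```python
# Hypothetical sketch of scenario-based parsing: a scenario is an
# intent-slot template whose LEAF placeholders get filled with spans
# copied from the utterance. Names and the frame format are assumptions.

def fill_scenario(template, utterance_tokens, leaf_spans):
    """Replace each LEAF placeholder with the corresponding utterance span."""
    frame, spans = [], iter(leaf_spans)
    for token in template:
        if token == "LEAF":
            start, end = next(spans)
            frame.extend(utterance_tokens[start:end])
        else:
            frame.append(token)
    return frame

utterance = "call mom on her cell".split()
scenario = ["[IN:CREATE_CALL", "[SL:CONTACT", "LEAF", "]",
            "[SL:TYPE_CONTENT", "LEAF", "]", "]"]
frame = fill_scenario(scenario, utterance, [(1, 2), (3, 5)])
print(" ".join(frame))
# → [IN:CREATE_CALL [SL:CONTACT mom ] [SL:TYPE_CONTENT her cell ] ]
```

Note that the generation step only decides which utterance spans fill the leaves; the ontology structure comes from the retrieved scenario.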

Frame Semantic Parsing

Assessing Data Efficiency in Task-Oriented Semantic Parsing

no code implementations 10 Jul 2021 Shrey Desai, Akshat Shrivastava, Justin Rill, Brian Moran, Safiyyah Saleem, Alexander Zotov, Ahmed Aly

Data efficiency, despite being an attractive characteristic, is often challenging to measure and optimize for in task-oriented semantic parsing; unlike exact match, it can require both model- and domain-specific setups, which have, historically, varied widely across experiments.

Semantic Parsing

Diagnosing Transformers in Task-Oriented Semantic Parsing

no code implementations Findings (ACL) 2021 Shrey Desai, Ahmed Aly

Modern task-oriented semantic parsing approaches typically use seq2seq transformers to map textual utterances to semantic frames comprised of intents and slots.

Frame Semantic Parsing

Low-Resource Task-Oriented Semantic Parsing via Intrinsic Modeling

no code implementations 15 Apr 2021 Shrey Desai, Akshat Shrivastava, Alexander Zotov, Ahmed Aly

Task-oriented semantic parsing models typically have high resource requirements: to support new ontologies (i.e., intents and slots), practitioners crowdsource thousands of samples for supervised fine-tuning.

Semantic Parsing

Span Pointer Networks for Non-Autoregressive Task-Oriented Semantic Parsing

no code implementations Findings (EMNLP) 2021 Akshat Shrivastava, Pierce Chuang, Arun Babu, Shrey Desai, Abhinav Arora, Alexander Zotov, Ahmed Aly

An effective recipe for building seq2seq, non-autoregressive, task-oriented parsers to map utterances to semantic frames proceeds in three steps: encoding an utterance $x$, predicting a frame's length |y|, and decoding a |y|-sized frame with utterance and ontology tokens.
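The "span pointer" part of the recipe can be illustrated with a minimal sketch: instead of decoding raw utterance tokens, the frame contains only ontology tokens and `(start, end)` index pairs into the utterance. The example below is a toy resolution step under that assumed representation, not the paper's implementation:

```python
# Toy illustration (not the paper's code): a span-pointer frame mixes
# ontology tokens with (start, end) pointers into the utterance; a final
# pass resolves the pointers back into text.

def resolve_pointers(frame, utterance_tokens):
    out = []
    for item in frame:
        if isinstance(item, tuple):        # span pointer into the utterance
            start, end = item
            out.extend(utterance_tokens[start:end])
        else:                              # ontology token
            out.append(item)
    return out

utterance = "play jazz in the kitchen".split()
# |y| = 8: the decoder fills a fixed-length frame non-autoregressively
frame = ["[IN:PLAY_MUSIC", "[SL:GENRE", (1, 2), "]",
         "[SL:ROOM", (3, 5), "]", "]"]
print(" ".join(resolve_pointers(frame, utterance)))
# → [IN:PLAY_MUSIC [SL:GENRE jazz ] [SL:ROOM the kitchen ] ]
```

Because every leaf is an index pair rather than free text, frame length is easier to predict and the output vocabulary shrinks to ontology tokens plus positions.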

Cross-Lingual Transfer · Frame (+3 more)

Understanding Neural Abstractive Summarization Models via Uncertainty

1 code implementation EMNLP 2020 Jiacheng Xu, Shrey Desai, Greg Durrett

An advantage of seq2seq abstractive summarization models is that they generate text in a free-form manner, but this flexibility makes it difficult to interpret model behavior.
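One standard way to quantify decoder uncertainty, sketched here as a generic illustration rather than the paper's exact analysis, is the entropy of the next-token distribution: a peaked distribution suggests the model is confidently copying, a flat one that it is composing more freely.

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Illustrative distributions: peaked ("copy-like") vs flat ("generate-like")
copy_like = [0.97, 0.01, 0.01, 0.01]
generate_like = [0.25, 0.25, 0.25, 0.25]
print(entropy(copy_like), entropy(generate_like))
```

The flat distribution attains the maximum entropy log(4); the peaked one is far lower.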

Abstractive Text Summarization · Text Generation

Compressive Summarization with Plausibility and Salience Modeling

1 code implementation EMNLP 2020 Shrey Desai, Jiacheng Xu, Greg Durrett

Compressive summarization systems typically rely on a crafted set of syntactic rules to determine what spans of possible summary sentences can be deleted, then learn a model of what to actually delete by optimizing for content selection (ROUGE).
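The core operation, deleting rule-licensed spans based on a learned score, can be sketched in a few lines. The threshold, the candidate span, and its salience value below are invented for illustration; the actual system learns these from data:

```python
# Minimal sketch (assumptions, not the paper's system): candidate spans
# are deletable under syntactic rules; we drop those whose learned
# salience score falls below a threshold.

def compress(tokens, candidate_spans, salience, threshold=0.5):
    """candidate_spans: list of (start, end); salience: parallel scores in [0, 1]."""
    drop = set()
    for (start, end), score in zip(candidate_spans, salience):
        if score < threshold:
            drop.update(range(start, end))
    return [t for i, t in enumerate(tokens) if i not in drop]

sentence = "the committee , meeting on Tuesday , approved the new budget".split()
spans = [(2, 7)]   # ", meeting on Tuesday ," — e.g. a parenthetical the rules allow deleting
print(" ".join(compress(sentence, spans, [0.2])))
# → the committee approved the new budget
```

The paper's contribution is modeling not just salience but also plausibility, i.e., whether the compressed sentence remains grammatical and faithful.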

Accelerating Natural Language Understanding in Task-Oriented Dialog

1 code implementation WS 2020 Ojas Ahuja, Shrey Desai

Task-oriented dialog models typically leverage complex neural architectures and large-scale, pre-trained Transformers to achieve state-of-the-art performance on popular natural language understanding benchmarks.

Natural Language Understanding

Detecting Perceived Emotions in Hurricane Disasters

1 code implementation ACL 2020 Shrey Desai, Cornelia Caragea, Junyi Jessy Li

Natural disasters (e.g., hurricanes) affect millions of people each year, causing widespread destruction in their wake.

Calibration of Pre-trained Transformers

1 code implementation EMNLP 2020 Shrey Desai, Greg Durrett

Pre-trained Transformers are now ubiquitous in natural language processing, but despite their high end-task performance, little is known empirically about whether they are calibrated.
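Calibration asks whether a model's confidence matches its accuracy; a standard summary metric is expected calibration error (ECE). The binned computation below is a generic sketch of that metric, not the paper's code:

```python
# Generic binned expected calibration error (ECE): bucket predictions by
# confidence, then average |confidence - accuracy| weighted by bin size.

def expected_calibration_error(confidences, correct, n_bins=10):
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    total, ece = len(confidences), 0.0
    for b in bins:
        if b:
            avg_conf = sum(c for c, _ in b) / len(b)
            acc = sum(ok for _, ok in b) / len(b)
            ece += (len(b) / total) * abs(avg_conf - acc)
    return ece

# A model that is 90% confident but only 50% accurate is badly calibrated.
print(expected_calibration_error([0.9, 0.9, 0.9, 0.9], [1, 0, 1, 0]))
```

Here the ECE is 0.4: average confidence 0.9 against accuracy 0.5 in a single bin.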

Natural Language Inference

Lightweight Convolutional Representations for On-Device Natural Language Processing

no code implementations 4 Feb 2020 Shrey Desai, Geoffrey Goh, Arun Babu, Ahmed Aly

The increasing computational and memory complexities of deep neural networks have made it difficult to deploy them on low-resource electronic devices (e.g., mobile phones, tablets, wearables).

Model Compression

Evaluating Lottery Tickets Under Distributional Shifts

no code implementations WS 2019 Shrey Desai, Hongyuan Zhan, Ahmed Aly

The Lottery Ticket Hypothesis suggests large, over-parameterized neural networks consist of small, sparse subnetworks that can be trained in isolation to reach a similar (or better) test accuracy.
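The hypothesis is usually tested with iterative magnitude pruning: zero out the smallest-magnitude trained weights, rewind the survivors to their initial values, and retrain. One such step is sketched below with plain lists (no framework); the weights are invented for illustration:

```python
# Generic sketch of one magnitude-pruning step from lottery-ticket
# experiments: keep the largest-|w| trained weights, rewind survivors
# to their initialization to form the "winning ticket".

def magnitude_prune(weights, init_weights, sparsity):
    """Zero out the smallest-magnitude weights; rewind survivors to init."""
    n_prune = int(len(weights) * sparsity)
    ranked = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = set(ranked[:n_prune])
    mask = [0 if i in pruned else 1 for i in range(len(weights))]
    ticket = [m * w0 for m, w0 in zip(mask, init_weights)]
    return mask, ticket

trained = [0.8, -0.05, 0.3, 0.01, -0.6]
init = [0.1, 0.2, -0.3, 0.4, 0.5]
mask, ticket = magnitude_prune(trained, init, sparsity=0.4)
print(mask, ticket)
```

The paper's question is then whether tickets found on one data distribution still train well after a distributional shift.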

Adaptive Ensembling: Unsupervised Domain Adaptation for Political Document Analysis

1 code implementation IJCNLP 2019 Shrey Desai, Barea Sinno, Alex Rosenfeld, Junyi Jessy Li

Insightful findings in political science often require researchers to analyze documents of a certain subject or type, yet these documents are usually contained in large corpora that do not distinguish between pertinent and non-pertinent documents.

Text Classification · Unsupervised Domain Adaptation
