no code implementations • NAACL (TeachingNLP) 2021 • Greg Durrett, Jifan Chen, Shrey Desai, Tanya Goyal, Lucas Kabela, Yasumasa Onoe, Jiacheng Xu
We present a series of programming assignments, adaptable to a range of experience levels from advanced undergraduate to PhD, to teach students design and implementation of modern NLP systems.
no code implementations • 2 Feb 2022 • Akshat Shrivastava, Shrey Desai, Anchit Gupta, Ali Elkahky, Aleksandr Livshits, Alexander Zotov, Ahmed Aly
We tackle this problem by introducing scenario-based semantic parsing: a variant of the original task which first requires disambiguating an utterance's "scenario" (an intent-slot template with variable leaf spans) before generating its frame, complete with ontology and utterance tokens.
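For intuition, here is a minimal sketch of the scenario abstraction, assuming TOP-style bracketed frames; the example ontology and function name are illustrative, not taken from the paper's code.

```python
def frame_to_scenario(frame: str) -> str:
    """Strip leaf utterance spans, keeping only the intent-slot template."""
    tokens = frame.split()
    kept = [t for t in tokens
            if t.startswith("[IN:") or t.startswith("[SL:") or t == "]"]
    return " ".join(kept)

# Hypothetical TOP-style frame for illustration only.
frame = "[IN:CREATE_ALARM [SL:DATE_TIME for 8 am tomorrow ] ]"

# The scenario is the intent-slot template with variable leaf spans removed;
# scenario-based parsing first predicts this template, then fills in the spans.
print(frame_to_scenario(frame))  # -> [IN:CREATE_ALARM [SL:DATE_TIME ] ]
```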
no code implementations • 10 Jul 2021 • Shrey Desai, Akshat Shrivastava, Justin Rill, Brian Moran, Safiyyah Saleem, Alexander Zotov, Ahmed Aly
Data efficiency, despite being an attractive characteristic, is often challenging to measure and optimize for in task-oriented semantic parsing; unlike exact match, measuring it can require both model- and domain-specific setups, which have historically varied widely across experiments.
no code implementations • Findings (ACL) 2021 • Shrey Desai, Ahmed Aly
Modern task-oriented semantic parsing approaches typically use seq2seq transformers to map textual utterances to semantic frames composed of intents and slots.
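To make the target side concrete, the sketch below builds a toy frame target in which utterance tokens become copy pointers into the source; the frame format and pointer notation are common in this line of work but are assumptions here, not the paper's exact representation.

```python
# Toy utterance and frame; the ontology labels are made up for illustration.
utterance = "remind me to call mom at 6 pm".split()
frame = "[IN:CREATE_REMINDER [SL:TODO call mom ] [SL:DATE_TIME at 6 pm ] ]".split()

def to_pointer_target(frame_tokens, source_tokens):
    """Replace utterance tokens in the frame with @ptr_i copy indices."""
    target = []
    for tok in frame_tokens:
        if tok.startswith("[IN:") or tok.startswith("[SL:") or tok == "]":
            target.append(tok)  # ontology token: keep as-is
        else:
            target.append(f"@ptr_{source_tokens.index(tok)}")  # copy from source
    return target

print(to_pointer_target(frame, utterance))
```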
no code implementations • Findings (EMNLP) 2021 • Akshat Shrivastava, Pierce Chuang, Arun Babu, Shrey Desai, Abhinav Arora, Alexander Zotov, Ahmed Aly
An effective recipe for building seq2seq, non-autoregressive, task-oriented parsers to map utterances to semantic frames proceeds in three steps: encoding an utterance $x$, predicting a frame's length $|y|$, and decoding a $|y|$-sized frame with utterance and ontology tokens.
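The sketch below mirrors those three steps with placeholder modules; the architecture, dimensions, and names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class NonAutoregressiveParser(nn.Module):
    def __init__(self, vocab_size=1000, hidden=64, max_len=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.length_head = nn.Linear(hidden, max_len)       # step 2: predict |y|
        self.decoder_head = nn.Linear(hidden, vocab_size)   # step 3: fill each position
        self.max_len = max_len

    def forward(self, x):
        enc, _ = self.encoder(self.embed(x))                # step 1: encode utterance x
        pooled = enc.mean(dim=1)
        length_logits = self.length_head(pooled)            # distribution over frame lengths
        pred_len = length_logits.argmax(dim=-1).clamp(min=1)
        # Decode all positions in parallel from the pooled encoder state
        # (a real model would attend per position; mean-pooling is a simplification).
        frame_logits = self.decoder_head(pooled).unsqueeze(1).expand(-1, self.max_len, -1)
        return pred_len, frame_logits

parser = NonAutoregressiveParser()
utterance_ids = torch.randint(0, 1000, (1, 7))              # toy batch of one utterance
pred_len, frame_logits = parser(utterance_ids)
frame_ids = frame_logits.argmax(dim=-1)[0, : pred_len.item()]  # |y|-sized frame
```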
no code implementations • 15 Apr 2021 • Shrey Desai, Akshat Shrivastava, Alexander Zotov, Ahmed Aly
Task-oriented semantic parsing models typically have high resource requirements: to support new ontologies (i.e., intents and slots), practitioners crowdsource thousands of samples for supervised fine-tuning.
1 code implementation • EMNLP 2020 • Jiacheng Xu, Shrey Desai, Greg Durrett
An advantage of seq2seq abstractive summarization models is that they generate text in a free-form manner, but this flexibility makes it difficult to interpret model behavior.
1 code implementation • EMNLP 2020 • Shrey Desai, Jiacheng Xu, Greg Durrett
Compressive summarization systems typically rely on a crafted set of syntactic rules to determine which spans of candidate summary sentences can be deleted, then learn which spans to actually delete by optimizing for content selection (ROUGE).
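As a rough illustration of the two-stage setup, the sketch below uses a single hand-written deletable span and a unigram-F1 proxy in place of real syntactic rules and ROUGE; both simplifications are assumptions for exposition.

```python
from collections import Counter

def unigram_f1(candidate: str, reference: str) -> float:
    """Crude ROUGE-1-style proxy: unigram overlap F1 against the reference."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    p, r = overlap / sum(cand.values()), overlap / sum(ref.values())
    return 2 * p * r / (p + r)

sentence = "The storm , which formed over the Gulf , made landfall on Friday"
reference = "The storm made landfall on Friday"
# A crafted syntactic rule might mark this appositive clause as deletable:
deletable_span = ", which formed over the Gulf ,"

kept = sentence.replace(deletable_span, "").replace("  ", " ").strip()
# Oracle label: delete the span only if doing so does not hurt content selection.
delete = unigram_f1(kept, reference) >= unigram_f1(sentence, reference)
print(delete, kept)
```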
1 code implementation • WS 2020 • Ojas Ahuja, Shrey Desai
Task-oriented dialog models typically leverage complex neural architectures and large-scale, pre-trained Transformers to achieve state-of-the-art performance on popular natural language understanding benchmarks.
1 code implementation • ACL 2020 • Shrey Desai, Cornelia Caragea, Junyi Jessy Li
Natural disasters (e.g., hurricanes) affect millions of people each year, causing widespread destruction in their wake.
1 code implementation • EMNLP 2020 • Shrey Desai, Greg Durrett
Pre-trained Transformers are now ubiquitous in natural language processing, but despite their high end-task performance, little is known empirically about whether they are calibrated.
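Calibration here refers to how well a model's confidence tracks its accuracy, commonly summarized with expected calibration error (ECE); the sketch below is a generic ECE computation over toy predictions, not the paper's evaluation code.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, average the |accuracy - confidence| gaps."""
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap   # weight each bin by its share of examples
    return ece

# Toy model outputs: max softmax probability and whether the prediction was right.
conf = [0.95, 0.9, 0.85, 0.7, 0.6, 0.99]
hit = [1, 1, 0, 1, 0, 1]
print(expected_calibration_error(conf, hit))
```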
no code implementations • 4 Feb 2020 • Shrey Desai, Geoffrey Goh, Arun Babu, Ahmed Aly
The increasing computational and memory complexities of deep neural networks have made it difficult to deploy them on low-resource electronic devices (e.g., mobile phones, tablets, wearables).
no code implementations • WS 2019 • Shrey Desai, Hongyuan Zhan, Ahmed Aly
The Lottery Ticket Hypothesis suggests that large, over-parameterized neural networks contain small, sparse subnetworks that can be trained in isolation to reach a similar (or better) test accuracy.
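The sketch below illustrates the core pruning-and-rewinding step behind this hypothesis with arbitrary toy weights; it is a generic magnitude-pruning example, not the procedure used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
w_init = rng.normal(size=(4, 4))                            # weights at initialization
w_trained = w_init + rng.normal(scale=0.5, size=(4, 4))     # stand-in for trained weights

sparsity = 0.75                                             # prune 75% of weights
threshold = np.quantile(np.abs(w_trained), sparsity)
mask = (np.abs(w_trained) >= threshold).astype(float)

# The "winning ticket": the surviving sparse subnetwork, rewound to its
# initialization, which the hypothesis says can be retrained in isolation.
w_ticket = mask * w_init
print(f"kept {mask.mean():.0%} of weights")
```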
1 code implementation • IJCNLP 2019 • Shrey Desai, Barea Sinno, Alex Rosenfeld, Junyi Jessy Li
Insightful findings in political science often require researchers to analyze documents of a certain subject or type, yet these documents are usually contained in large corpora that do not distinguish between pertinent and non-pertinent documents.