Search Results for author: Rushin Shah

Found 11 papers, 4 papers with code

PRESTO: A Multilingual Dataset for Parsing Realistic Task-Oriented Dialogs

1 code implementation • 15 Mar 2023 • Rahul Goel, Waleed Ammar, Aditya Gupta, Siddharth Vashishtha, Motoki Sano, Faiz Surani, Max Chang, HyunJeong Choe, David Greene, Kyle He, Rattima Nitisaroj, Anna Trukhina, Shachi Paul, Pararth Shah, Rushin Shah, Zhou Yu

Research interest in task-oriented dialogs has increased as systems such as Google Assistant, Alexa and Siri have become ubiquitous in everyday life.

DAMP: Doubly Aligned Multilingual Parser for Task-Oriented Dialogue

1 code implementation • 15 Dec 2022 • William Held, Christopher Hidey, Fei Liu, Eric Zhu, Rahul Goel, Diyi Yang, Rushin Shah

Modern virtual assistants use internal semantic parsing engines to convert user utterances to actionable commands.

Semantic Parsing XLM-R

Improving Top-K Decoding for Non-Autoregressive Semantic Parsing via Intent Conditioning

no code implementations • COLING 2022 • Geunseob Oh, Rahul Goel, Chris Hidey, Shachi Paul, Aditya Gupta, Pararth Shah, Rushin Shah

As the top-level intent largely governs the syntax and semantics of a parse, the intent conditioning allows the model to better control beam search and improves the quality and diversity of top-k outputs.
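
A minimal sketch of the idea described above (illustrative only, with hypothetical intent labels and a stand-in decoder, not the paper's model): the decoder first selects the k highest-scoring top-level intents, then produces the rest of each parse conditioned on its intent, so the top-k outputs differ structurally rather than only in surface tokens.

```python
# Illustrative sketch of intent-conditioned top-k decoding.
# Intent names, scores, and the template "decoder" are hypothetical.

def topk_intents(scores, k):
    """Pick the k highest-scoring top-level intents."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

def decode_slots(utterance, intent):
    """Stand-in for a decoder that fills slots conditioned on the intent."""
    templates = {
        "IN:GET_WEATHER": "[SL:LOCATION {}]",
        "IN:GET_EVENT": "[SL:CATEGORY_EVENT {}]",
    }
    return templates.get(intent, "{}").format(utterance)

intent_scores = {"IN:GET_WEATHER": 0.7, "IN:GET_EVENT": 0.2, "IN:UNSUPPORTED": 0.1}
parses = [
    f"[{intent} {decode_slots('Seattle', intent)}]"
    for intent in topk_intents(intent_scores, 2)
]
# Each of the k parses is governed by a distinct top-level intent.
```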

Semantic Parsing

Overcoming Conflicting Data when Updating a Neural Semantic Parser

1 code implementation • EMNLP (NLP4ConvAI) 2021 • David Gaddy, Alex Kouzemtchenko, Pavankumar Reddy Muddireddy, Prateek Kolhar, Rushin Shah

In this paper, we explore how to use a small amount of new data to update a task-oriented semantic parsing model when the desired output for some examples has changed.

Semantic Parsing

Update Frequently, Update Fast: Retraining Semantic Parsing Systems in a Fraction of Time

no code implementations • 15 Oct 2020 • Vladislav Lialin, Rahul Goel, Andrey Simanovsky, Anna Rumshisky, Rushin Shah

To reduce training time, one can fine-tune the previously trained model on each patch, but naive fine-tuning exhibits catastrophic forgetting: degradation of the model's performance on data not represented in the patch.
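
The forgetting effect is easy to see in a toy setting (this is an illustration of the general phenomenon, not the paper's method): fine-tune a one-parameter model on a patch whose labels conflict with the original data, and its loss on the original data climbs.

```python
# Toy demonstration of catastrophic forgetting under naive fine-tuning.
# Model: scalar weight w, prediction w * x, squared loss.

def loss(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def sgd(w, data, lr=0.1, steps=50):
    """Plain gradient descent on the given data only."""
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient of squared loss
    return w

old_data = [(1.0, 1.0)]   # original training data (optimum: w = 1)
patch = [(1.0, 3.0)]      # update patch with a conflicting label (optimum: w = 3)

w = sgd(0.0, old_data)             # initial training: w converges near 1
old_loss = loss(w, old_data)       # near zero

w = sgd(w, patch)                  # naive fine-tuning on the patch alone
new_old_loss = loss(w, old_data)   # loss on the old data has degraded
```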

Continual Learning Goal-Oriented Dialogue Systems +1

Improving Robustness of Task Oriented Dialog Systems

no code implementations • 12 Nov 2019 • Arash Einolghozati, Sonal Gupta, Mrinal Mohit, Rushin Shah

However, evaluating a model's robustness to these changes is harder for language, since words are discrete and an automated change (e.g., adding "noise") to a query can change its meaning and thus its labels.

Adversarial Attack Data Augmentation +5

Span-based Hierarchical Semantic Parsing for Task-Oriented Dialog

no code implementations • IJCNLP 2019 • Panupong Pasupat, Sonal Gupta, Karishma Mandyam, Rushin Shah, Mike Lewis, Luke Zettlemoyer

We propose a semantic parser for parsing compositional utterances into Task Oriented Parse (TOP), a tree representation that has intents and slots as labels of nesting tree nodes.
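To make the representation concrete, here is a small illustration (the query and label names are hypothetical examples in the TOP style of Gupta et al. 2018, not drawn from the paper): intents (`IN:`) and slots (`SL:`) label nested bracketed spans, so slots can themselves contain intents.

```python
import re

# A hypothetical utterance annotated in the bracketed TOP format:
# intents (IN:) and slots (SL:) label the nodes of a nested tree.
top = ("[IN:GET_WEATHER what's the weather "
      "[SL:LOCATION in [IN:GET_LOCATION [SL:POINT_ON_MAP Seattle]]]]")

# Extract the node labels to show the hierarchical intent/slot structure.
labels = re.findall(r"\[(IN|SL):([A-Z_]+)", top)
# Nesting depth is visible in the bracket structure itself: the
# SL:LOCATION slot contains a nested IN:GET_LOCATION intent.
```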

Semantic Parsing

Improving Semantic Parsing for Task Oriented Dialog

no code implementations • 15 Feb 2019 • Arash Einolghozati, Panupong Pasupat, Sonal Gupta, Rushin Shah, Mrinal Mohit, Mike Lewis, Luke Zettlemoyer

Semantic parsing using hierarchical representations has recently been proposed for task oriented dialog with promising results [Gupta et al., 2018].

Language Modelling Re-Ranking +1

Cross-Lingual Transfer Learning for Multilingual Task Oriented Dialog

no code implementations • NAACL 2019 • Sebastian Schuster, Sonal Gupta, Rushin Shah, Mike Lewis

We use this data set to evaluate three different cross-lingual transfer methods: (1) translating the training data, (2) using cross-lingual pre-trained embeddings, and (3) a novel method of using a multilingual machine translation encoder as contextual word representations.

Cross-Lingual Transfer Machine Translation +1
