Search Results for author: Pararth Shah

Found 15 papers, 4 papers with code

Multi-Action Dialog Policy Learning with Interactive Human Teaching

no code implementations SIGDIAL (ACL) 2020 Megha Jhunjhunwala, Caleb Bryant, Pararth Shah

We present a novel multi-domain, multi-action dialog policy architecture trained on MultiWOZ, and show that small amounts of online supervision can lead to significant improvement in model performance.

Imitation Learning, Transfer Learning

Improving Top-K Decoding for Non-Autoregressive Semantic Parsing via Intent Conditioning

no code implementations 14 Apr 2022 Geunseob Oh, Rahul Goel, Chris Hidey, Shachi Paul, Aditya Gupta, Pararth Shah, Rushin Shah

As the top-level intent largely governs the syntax and semantics of a parse, the intent conditioning allows the model to better control beam search and improves the quality and diversity of top-k outputs.

Semantic Parsing
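The intent-conditioning idea can be illustrated with a toy sketch. All scores, intent names, and parse strings below are invented for illustration, and this is a simplification, not the paper's non-autoregressive model: the point is that decoding the best parse per top-level intent guarantees intent diversity in the top-k list, whereas a single ranked list can be crowded out by one dominant intent.

```python
# Toy illustration of intent-conditioned top-k decoding (hypothetical
# scores; not the paper's actual parser). Plain top-k ranks joint scores
# and may return k parses that all share one intent; conditioning first
# picks the k best top-level intents, then the best parse for each.

intent_scores = {"play_music": 0.6, "set_alarm": 0.3, "get_weather": 0.1}
parse_scores = {
    ("play_music", "[IN:PLAY_MUSIC [SL:SONG x]]"): 0.50,
    ("play_music", "[IN:PLAY_MUSIC [SL:ARTIST x]]"): 0.48,
    ("set_alarm", "[IN:SET_ALARM [SL:TIME x]]"): 0.90,
    ("get_weather", "[IN:GET_WEATHER]"): 0.80,
}

def topk_unconditioned(k):
    """Plain top-k: rank joint scores; one intent can crowd out the rest."""
    joint = {p: intent_scores[i] * s for (i, p), s in parse_scores.items()}
    return sorted(joint, key=joint.get, reverse=True)[:k]

def topk_intent_conditioned(k):
    """Condition on the k best intents, then take the best parse for each."""
    intents = sorted(intent_scores, key=intent_scores.get, reverse=True)[:k]
    out = []
    for i in intents:
        cands = {p: s for (i2, p), s in parse_scores.items() if i2 == i}
        out.append(max(cands, key=cands.get))
    return out
```

With these toy numbers, the unconditioned top-2 list contains two `PLAY_MUSIC` parses, while the conditioned list covers two distinct intents.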

User Memory Reasoning for Conversational Recommendation

no code implementations COLING 2020 Hu Xu, Seungwhan Moon, Honglei Liu, Pararth Shah, Bing Liu, Philip S. Yu

We study a conversational recommendation model which dynamically manages users' past (offline) preferences and current (online) requests through a structured and cumulative user memory knowledge graph, to allow for natural interactions and accurate recommendations.

Memory Graph Networks for Explainable Memory-grounded Question Answering

no code implementations CoNLL 2019 Seungwhan Moon, Pararth Shah, Anuj Kumar, Rajen Subba

We introduce Episodic Memory QA, the task of answering personal user questions grounded on memory graph (MG), where episodic memories and related entity nodes are connected via relational edges.

Question Answering
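The memory-graph setting can be made concrete with a minimal sketch. The graph schema, node names, and slot values below are invented, and the one-hop lookup stands in for the paper's learned graph walk: answering a personal question amounts to locating the memory node that matches a cue and reading off the asked attribute along a relational edge.

```python
# Toy sketch of the Episodic Memory QA setting (data and schema invented
# for illustration): episodic memory nodes connected to entity values by
# relational edges, queried with a simple one-hop walk rather than the
# paper's learned Memory Graph Network.

memory_graph = {
    "memory_1": {"activity": "dinner", "with": "Alice", "place": "Thai Palace"},
    "memory_2": {"activity": "hike", "with": "Bob", "place": "Bear Mountain"},
}

def answer(question_slot, cue_slot, cue_value):
    """Find the memory node matching the cue, then read the asked slot."""
    for node, edges in memory_graph.items():
        if edges.get(cue_slot) == cue_value:
            return edges.get(question_slot)
    return None
```

For example, "Where did I have dinner with Alice?" becomes `answer("place", "with", "Alice")`.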

Memory Grounded Conversational Reasoning

no code implementations IJCNLP 2019 Seungwhan Moon, Pararth Shah, Rajen Subba, Anuj Kumar

To implement such a system, we collect a new corpus of memory grounded conversations, which comprises human-to-human role-playing dialogs given synthetic memory graphs with simulated attributes.

Recommendation as a Communication Game: Self-Supervised Bot-Play for Goal-oriented Dialogue

1 code implementation IJCNLP 2019 Dongyeop Kang, Anusha Balakrishnan, Pararth Shah, Paul Crook, Y-Lan Boureau, Jason Weston

These issues can be alleviated by treating recommendation as an interactive dialogue task instead, where an expert recommender can sequentially ask about someone's preferences, react to their requests, and recommend more appropriate items.

Recommendation Systems

OpenDialKG: Explainable Conversational Reasoning with Attention-based Walks over Knowledge Graphs

no code implementations ACL 2019 Seungwhan Moon, Pararth Shah, Anuj Kumar, Rajen Subba

We study a conversational reasoning model that strategically traverses through a large-scale common fact knowledge graph (KG) to introduce engaging and contextually diverse entities and attributes.

Knowledge Graphs

User Modeling for Task Oriented Dialogues

no code implementations11 Nov 2018 Izzeddin Gur, Dilek Hakkani-Tur, Gokhan Tur, Pararth Shah

We further develop several variants: a latent variable model that injects random variations into user responses to promote diversity, and a novel goal regularization mechanism that penalizes divergence of user responses from the initial user goal.

Dialogue State Tracking, Task-Oriented Dialogue Systems
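The goal-regularization idea admits a simple sketch. The penalty below is an invented stand-in for the paper's mechanism: it scores how far a sampled user turn drifts from the initial goal, so that diversity injected by latent sampling does not produce off-goal responses.

```python
# Hedged sketch of goal regularization in user simulation (not the
# paper's exact formulation): penalize simulated user turns whose slot
# values contradict the initial user goal.

def goal_divergence_penalty(response_slots, goal_slots, weight=1.0):
    """Fraction of response slots that contradict the goal, times a weight."""
    if not response_slots:
        return 0.0
    off_goal = [k for k, v in response_slots.items()
                if k in goal_slots and goal_slots[k] != v]
    return weight * len(off_goal) / len(response_slots)
```

A response consistent with a goal like `{"cuisine": "thai", "area": "center"}` incurs zero penalty; one that swaps in `cuisine=italian` is penalized in proportion to how much of the turn is off-goal.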

Bootstrapping a Neural Conversational Agent with Dialogue Self-Play, Crowdsourcing and On-Line Reinforcement Learning

no code implementations NAACL 2018 Pararth Shah, Dilek Hakkani-Tür, Bing Liu, Gokhan Tür

End-to-end neural models show great promise towards building conversational agents that are trained from data and on-line experience using supervised and reinforcement learning.

Reinforcement Learning

Dialogue Learning with Human Teaching and Feedback in End-to-End Trainable Task-Oriented Dialogue Systems

1 code implementation NAACL 2018 Bing Liu, Gokhan Tur, Dilek Hakkani-Tur, Pararth Shah, Larry Heck

To address this challenge, we propose a hybrid imitation and reinforcement learning method, with which a dialogue agent can effectively learn from its interaction with users by learning from human teaching and feedback.

Dialogue State Tracking, Imitation Learning +2

Building a Conversational Agent Overnight with Dialogue Self-Play

3 code implementations 15 Jan 2018 Pararth Shah, Dilek Hakkani-Tür, Gokhan Tür, Abhinav Rastogi, Ankur Bapna, Neha Nayak, Larry Heck

We propose Machines Talking To Machines (M2M), a framework combining automation and crowdsourcing to rapidly bootstrap end-to-end dialogue agents for goal-oriented dialogues in arbitrary domains.
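The self-play half of M2M can be illustrated with a toy sketch. The domain schema, dialogue acts, and templated turns below are invented: two scripted agents exchange dialogue acts to enumerate outlines for every goal in a small schema, and in the full framework crowd workers would then paraphrase these templated turns into natural language.

```python
import itertools

# Illustrative sketch of dialogue self-play in the M2M spirit (schema and
# act templates invented): a user simulator and system agent produce a
# templated dialogue outline for every goal combination in the domain.

schema = {"movie": ["Inside Out", "Up"], "time": ["6pm", "8pm"]}

def self_play_outlines():
    """Enumerate one four-turn outline per (movie, time) goal."""
    outlines = []
    for movie, time in itertools.product(schema["movie"], schema["time"]):
        outlines.append([
            ("user", f"inform(movie={movie})"),
            ("system", "request(time)"),
            ("user", f"inform(time={time})"),
            ("system", f"confirm(movie={movie}, time={time})"),
        ])
    return outlines
```

Enumerating the schema exhaustively is what lets the framework cover a new domain "overnight" before any human data is collected.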

Federated Control with Hierarchical Multi-Agent Deep Reinforcement Learning

1 code implementation 22 Dec 2017 Saurabh Kumar, Pararth Shah, Dilek Hakkani-Tur, Larry Heck

We present a framework combining hierarchical and multi-agent deep reinforcement learning approaches to solve coordination problems among a multitude of agents using a semi-decentralized model.

Efficient Exploration, Reinforcement Learning

End-to-End Optimization of Task-Oriented Dialogue Model with Deep Reinforcement Learning

no code implementations 29 Nov 2017 Bing Liu, Gokhan Tur, Dilek Hakkani-Tur, Pararth Shah, Larry Heck

We show that deep RL based optimization leads to a significant improvement in task success rate and a reduction in dialogue length compared to the supervised training model.

Reinforcement Learning
