1 code implementation • 8 Feb 2014 • Zachary Chase Lipton, Charles Elkan, Balakrishnan Narayanaswamy
As another special case, if the classifier is completely uninformative, then the optimal behavior is to classify all examples as positive.
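This special case can be checked numerically. Below is an illustrative sketch (not the paper's code): with an uninformative classifier, predicting every example positive gives recall 1 and precision equal to the base rate p, so F1 = 2p / (1 + p), which is the best an uninformative classifier can achieve.

```python
def f1_all_positive(labels):
    """F1 score when every example is predicted positive."""
    tp = sum(labels)          # every true positive is recovered
    fp = len(labels) - tp     # every negative becomes a false positive
    precision = tp / (tp + fp)
    recall = 1.0              # no false negatives when all are positive
    return 2 * precision * recall / (precision + recall)

labels = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]  # base rate p = 0.2
print(f1_all_positive(labels))            # 2 * 0.2 / 1.2 = 1/3
```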
5 code implementations • 29 Nov 2017 • Ruofeng Wen, Kari Torkkola, Balakrishnan Narayanaswamy, Dhruv Madeka
We propose a framework for general probabilistic multi-step time series regression.
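Probabilistic multi-step forecasters of this kind are commonly trained with a quantile (pinball) loss summed over horizons and quantile levels. The sketch below is a hedged illustration of that training objective; the function names, shapes, and toy numbers are assumptions, not the paper's actual API.

```python
def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss for one forecast at quantile level q."""
    diff = y_true - y_pred
    # Under-forecasts are penalized by q, over-forecasts by (1 - q).
    return max(q * diff, (q - 1) * diff)

def multi_horizon_loss(actuals, forecasts, quantiles):
    """Sum pinball loss over all horizons and quantile levels.

    forecasts[h][i] is the forecast for horizon h at level quantiles[i].
    """
    total = 0.0
    for h, y in enumerate(actuals):
        for i, q in enumerate(quantiles):
            total += pinball_loss(y, forecasts[h][i], q)
    return total

actuals = [10.0, 12.0]
forecasts = [[8.0, 10.0, 13.0],   # horizon 1 forecasts at q=0.1, 0.5, 0.9
             [9.0, 12.0, 15.0]]   # horizon 2
print(multi_horizon_loss(actuals, forecasts, [0.1, 0.5, 0.9]))
```

Minimizing this loss jointly over all horizons and quantiles yields a full predictive distribution per future step, rather than a single point forecast.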
no code implementations • 11 Apr 2018 • Rashmi Gangadharaiah, Balakrishnan Narayanaswamy, Charles Elkan
We show how to combine nearest neighbor and Seq2Seq methods in a hybrid model, where nearest neighbor is used to generate fluent responses and Seq2Seq type models ensure dialog coherency and generate accurate external actions.
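The retrieval half of such a hybrid can be sketched minimally: return the stored response whose context is nearest to the current query. The bag-of-words vectors and toy corpus below are illustrative stand-ins, not the paper's model.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def nearest_neighbor_response(query, corpus):
    """Return the response whose stored context best matches the query."""
    qv = Counter(query.lower().split())
    best = max(corpus,
               key=lambda pair: cosine(qv, Counter(pair[0].lower().split())))
    return best[1]

corpus = [("book a table for two", "Sure, for what time?"),
          ("what is the weather", "It looks sunny today.")]
print(nearest_neighbor_response("please book a table", corpus))
```

In the hybrid described above, a Seq2Seq-style model would then rerank or condition on such retrieved candidates to keep the dialog coherent and produce correct external actions.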
no code implementations • NAACL 2018 • Rashmi Gangadharaiah, Balakrishnan Narayanaswamy, Charles Elkan
In task-oriented dialog, agents need to generate both fluent natural language responses and correct external actions like database queries and updates.
no code implementations • NAACL 2019 • Rashmi Gangadharaiah, Balakrishnan Narayanaswamy
Neural network models have recently gained traction for sentence-level intent classification and token-based slot-label identification.
2 code implementations • 24 Nov 2019 • Bharathan Balaji, Jordan Bell-Masterson, Enes Bilgin, Andreas Damianou, Pablo Moreno Garcia, Arpit Jain, Runfei Luo, Alvaro Maggiar, Balakrishnan Narayanaswamy, Chun Ye
Reinforcement Learning (RL) has achieved state-of-the-art results in domains such as robotics and games.
no code implementations • ACL 2020 • Rashmi Gangadharaiah, Balakrishnan Narayanaswamy
The Natural Language Understanding (NLU) component in task-oriented dialog systems processes a user's request and converts it into structured information that can be consumed by downstream components such as the Dialog State Tracker (DST).
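A toy illustration of what that "structured information" might look like: an intent label plus slot values extracted from the request. The rule-based mapping below is hypothetical and exists only to make the output shape concrete; real NLU components use trained models.

```python
def toy_nlu(utterance):
    """Hypothetical rule-based NLU: map a request to an intent plus slots."""
    tokens = utterance.lower().split()
    result = {"intent": None, "slots": {}}
    if "book" in tokens and "flight" in tokens:
        result["intent"] = "book_flight"
        if "to" in tokens:
            # Treat the token after "to" as the destination slot value.
            result["slots"]["destination"] = tokens[tokens.index("to") + 1]
    return result

print(toy_nlu("Book a flight to Boston"))
```

A downstream Dialog State Tracker would consume this intent/slot structure to update its belief about the user's goal.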
no code implementations • 29 Nov 2022 • Rashmi Gangadharaiah, Balakrishnan Narayanaswamy
Training neural network (NN)-based Natural Language Understanding (NLU) components of task-oriented dialog systems requires large numbers of sentence-level intent and token-level slot label annotations, which are expensive and difficult to obtain, especially for the many real-world tasks with a large and growing number of intents and slot types.
no code implementations • 2 Dec 2022 • Kaustubh Sridhar, Vikramank Singh, Balakrishnan Narayanaswamy, Abishek Sankararaman
PnC jointly trains a prediction model and a terminal Q function that approximates cost-to-go over a long horizon, by back-propagating the cost of decisions through the optimization problem and from the future.
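The cost decomposition described above can be sketched simply: plan over a short horizon with explicit stage costs, and approximate all cost beyond the horizon with a terminal Q (cost-to-go) function. The quadratic stage cost, linear dynamics, and terminal surrogate below are hypothetical stand-ins, not the paper's learned models.

```python
def stage_cost(state, action):
    """Hypothetical per-step cost: penalize state deviation and effort."""
    return state ** 2 + 0.1 * action ** 2

def terminal_q(state):
    """Hypothetical learned surrogate for long-horizon cost-to-go."""
    return 2.0 * state ** 2

def plan_objective(state, actions):
    """Short-horizon stage costs plus a terminal estimate of future cost."""
    total = 0.0
    for a in actions:
        total += stage_cost(state, a)
        state = 0.9 * state + a   # simple illustrative linear dynamics
    return total + terminal_q(state)

print(plan_objective(1.0, [-0.5, -0.3]))
```

In the actual method, gradients of this combined objective flow through both the short-horizon decisions and the terminal value, so the prediction model and Q function are trained to make the full objective match long-run cost.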