Search Results for author: Asish Ghoshal

Found 13 papers, 1 paper with code

FiD-Ex: Improving Sequence-to-Sequence Models for Extractive Rationale Generation

no code implementations · EMNLP 2021 · Kushal Lakhotia, Bhargavi Paranjape, Asish Ghoshal, Wen-tau Yih, Yashar Mehdad, Srinivasan Iyer

Natural language (NL) explanations of model predictions are gaining popularity as a means to understand and verify decisions made by large black-box pre-trained models, for NLP tasks such as Question Answering (QA) and Fact Verification.

Fact Verification · Question Answering

Towards Understanding the Behaviors of Optimal Deep Active Learning Algorithms

1 code implementation · 29 Dec 2020 · Yilun Zhou, Adithya Renduchintala, Xian Li, Sida Wang, Yashar Mehdad, Asish Ghoshal

Active learning (AL) algorithms may achieve better performance with fewer labeled examples because the model guides the data selection process.

Active Learning
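
The data-selection loop that active learning describes can be sketched with a minimal uncertainty-sampling example (the toy 1-D threshold task, data, and function names here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pool of unlabeled 1-D points; the true label is 1 iff x > 0.
pool_x = rng.uniform(-1, 1, size=200)
pool_y = (pool_x > 0).astype(int)

labeled = list(range(5))                      # start with a few labels
unlabeled = [i for i in range(200) if i not in labeled]

def fit_threshold(xs, ys):
    """Fit a simple threshold classifier: predict 1 iff x > t."""
    best_t, best_acc = 0.0, -1.0
    for t in np.sort(xs):
        acc = np.mean((xs > t).astype(int) == ys)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

for _ in range(20):
    t = fit_threshold(pool_x[labeled], pool_y[labeled])
    # Uncertainty sampling: query the point closest to the decision boundary.
    pick = unlabeled[int(np.argmin(np.abs(pool_x[unlabeled] - t)))]
    labeled.append(pick)
    unlabeled.remove(pick)

t = fit_threshold(pool_x[labeled], pool_y[labeled])
accuracy = np.mean((pool_x > t).astype(int) == pool_y)
print(f"learned threshold: {t:.3f}, pool accuracy: {accuracy:.3f}")
```

Because the model picks which points to label, the queries concentrate near the decision boundary, which is where labels are most informative.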

Low-Resource Domain Adaptation for Compositional Task-Oriented Semantic Parsing

no code implementations · EMNLP 2020 · Xilun Chen, Asish Ghoshal, Yashar Mehdad, Luke Zettlemoyer, Sonal Gupta

Task-oriented semantic parsing is a critical component of virtual assistants, responsible for understanding the user's intents (setting a reminder, playing music, etc.).

Domain Adaptation · Meta-Learning +2

Direct Learning with Guarantees of the Difference DAG Between Structural Equation Models

no code implementations · 28 Jun 2019 · Asish Ghoshal, Kevin Bello, Jean Honorio

Discovering cause-effect relationships between variables from observational data is a fundamental challenge in many scientific disciplines.

Minimax bounds for structured prediction

no code implementations · 2 Jun 2019 · Kevin Bello, Asish Ghoshal, Jean Honorio

Structured prediction can be considered a generalization of many standard supervised learning tasks, and is usually thought of as the simultaneous prediction of multiple labels.

Structured Prediction

Learning Maximum-A-Posteriori Perturbation Models for Structured Prediction in Polynomial Time

no code implementations · ICML 2018 · Asish Ghoshal, Jean Honorio

In this paper, we propose a provably polynomial time randomized algorithm for learning the parameters of perturbed MAP predictors.

Structured Prediction
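
The perturbed-MAP idea behind this paper can be illustrated with the classic perturb-and-MAP sampling trick: adding i.i.d. Gumbel noise to each configuration's score and taking the argmax samples exactly from the Gibbs distribution. The scores below are illustrative, and this sketch shows only prediction, not the paper's parameter-learning algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scores for 3 candidate structured outputs (illustrative values).
theta = np.array([1.0, 2.0, 0.5])

def perturb_map(theta, rng):
    """Perturb-and-MAP: add i.i.d. Gumbel noise to every configuration's
    score and return the maximizer.  With one Gumbel variable per full
    configuration, this draws exactly from softmax(theta)."""
    return int(np.argmax(theta + rng.gumbel(size=theta.shape)))

n = 100_000
counts = np.bincount([perturb_map(theta, rng) for _ in range(n)], minlength=3)
empirical = counts / n
gibbs = np.exp(theta) / np.exp(theta).sum()
print(empirical, gibbs)
```

In real structured prediction the configuration space is exponentially large, so the noise is added to local potentials and the argmax is computed by a MAP solver rather than by enumeration; that is the regime the paper's learning guarantees address.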

Learning linear structural equation models in polynomial time and sample complexity

no code implementations · 15 Jul 2017 · Asish Ghoshal, Jean Honorio

We develop a new algorithm, which is computationally and statistically efficient and works in the high-dimensional regime, for learning linear SEMs from purely observational data with an arbitrary noise distribution.

Causal Inference
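
A linear SEM generates each variable as a linear function of its parents plus independent noise; the abstract's "arbitrary noise distribution" means the noise need not be Gaussian. A minimal population sketch (the DAG, weights, and uniform noise below are illustrative, and this uses known parent sets rather than the paper's structure-learning algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear SEM over 3 variables with DAG X1 -> X2 -> X3.  Uniform noise is
# chosen deliberately: the setting allows arbitrary (non-Gaussian) noise.
n = 50_000
x1 = rng.uniform(-1, 1, n)
x2 = 0.8 * x1 + rng.uniform(-0.5, 0.5, n)
x3 = -0.6 * x2 + rng.uniform(-0.5, 0.5, n)

# With the parent sets known, ordinary least squares recovers each edge
# weight from purely observational data.
b21 = np.dot(x1, x2) / np.dot(x1, x1)   # estimate of the true weight 0.8
b32 = np.dot(x2, x3) / np.dot(x2, x2)   # estimate of the true weight -0.6
print(b21, b32)
```

The hard part, which the paper addresses, is recovering the DAG structure itself (i.e., the parent sets) efficiently in high dimensions.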

Learning Sparse Polymatrix Games in Polynomial Time and Sample Complexity

no code implementations · 18 Jun 2017 · Asish Ghoshal, Jean Honorio

We also show that $\Omega(d \log (pm))$ samples are necessary for any method to consistently recover a game, with the same Nash-equilibria as the true game, from observations of strategic interactions.

Learning Graphical Games from Behavioral Data: Sufficient and Necessary Conditions

no code implementations · 3 Mar 2017 · Asish Ghoshal, Jean Honorio

In this paper we obtain sufficient and necessary conditions on the number of samples required for exact recovery of the pure-strategy Nash equilibria (PSNE) set of a graphical game from noisy observations of joint actions.

Learning Identifiable Gaussian Bayesian Networks in Polynomial Time and Sample Complexity

no code implementations · NeurIPS 2017 · Asish Ghoshal, Jean Honorio

In this paper we propose a provably polynomial-time algorithm for learning sparse Gaussian Bayesian networks with equal noise variance (a class of Bayesian networks for which the DAG structure can be uniquely identified from observational data) under high-dimensional settings.
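
The identifiability that the equal-variance assumption buys can be shown at the population level: for a linear Gaussian SEM with equal noise variances, the smallest diagonal entry of the precision matrix corresponds to a terminal (sink) vertex, which lets an algorithm peel off the DAG ordering. The DAG and weights below are illustrative, and this sketch shows only the identifiability fact, not the paper's full high-dimensional procedure:

```python
import numpy as np

# Edge weights of a DAG on 4 nodes: 0 -> 1 -> 3 and 0 -> 2 -> 3.
# B[i, j] is the weight of edge i -> j (illustrative values).
B = np.zeros((4, 4))
B[0, 1] = 0.8
B[0, 2] = -0.7
B[1, 3] = 0.5
B[2, 3] = 0.6

# For the linear SEM X = B^T X + e with equal noise variance sigma^2 = 1,
# the population precision matrix is Omega = (I - B)(I - B)^T.
I = np.eye(4)
Omega = (I - B) @ (I - B).T

# Omega[i, i] = 1 + sum_j B[i, j]^2, so sinks (no outgoing edges) attain
# the minimum diagonal value 1; the argmin identifies a terminal vertex.
sink = int(np.argmin(np.diag(Omega)))
print(sink)  # prints 3, the sink of this DAG
```

Removing the identified sink and repeating on the remaining variables recovers a full topological ordering, after which the parent sets can be estimated by sparse regression.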

From Behavior to Sparse Graphical Games: Efficient Recovery of Equilibria

no code implementations · 11 Jul 2016 · Asish Ghoshal, Jean Honorio

In this paper we study the problem of exact recovery of the pure-strategy Nash equilibria (PSNE) set of a graphical game from noisy observations of joint actions of the players alone.

Information-theoretic limits of Bayesian network structure learning

no code implementations · 27 Jan 2016 · Asish Ghoshal, Jean Honorio

In this paper, we study the information-theoretic limits of learning the structure of Bayesian networks (BNs), on discrete as well as continuous random variables, from a finite number of samples.

Variable Selection
