Search Results for author: Amit Dhurandhar

Found 27 papers, 8 papers with code

Let the CAT out of the bag: Contrastive Attributed explanations for Text

no code implementations16 Sep 2021 Saneem Chemmengath, Amar Prakash Azad, Ronny Luss, Amit Dhurandhar

Contrastive explanations for understanding the behavior of black-box models have gained a lot of attention recently, as they provide potential for recourse.

Language Modelling

Building Accurate Simple Models with Multihop

no code implementations14 Sep 2021 Amit Dhurandhar, Tejaswini Pedapati

In this paper, we propose a meta-approach where we transfer information from the complex model to the simple model by dynamically selecting and/or constructing a sequence of intermediate models of decreasing complexity that are less intricate than the original complex model.

Explainable artificial intelligence Knowledge Distillation +2
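The transfer-through-intermediate-models idea can be illustrated with a toy sketch (our own illustration, not the paper's implementation): polynomial degree stands in for model complexity, and the simple model is trained on predictions passed down a chain of intermediate fits.

```python
import numpy as np

# Toy sketch of the multihop idea (our illustration, not the paper's code):
# polynomial degree stands in for model complexity, and the simple model is
# trained on predictions handed down a chain of intermediate models.

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3.0 * x) + 0.05 * rng.standard_normal(x.size)

def fit_predict(x, targets, degree):
    """Least-squares polynomial fit; returns in-sample predictions."""
    return np.polyval(np.polyfit(x, targets, degree), x)

teacher = fit_predict(x, y, 9)            # the "complex" model

# Multihop transfer: teacher -> degree 5 -> degree 3 -> degree 1 (simple)
preds = teacher
for degree in (5, 3, 1):
    preds = fit_predict(x, preds, degree)
simple = preds

mse = float(np.mean((simple - y) ** 2))
print(f"simple-model MSE after multihop transfer: {mse:.4f}")
```

Each hop re-fits a lower-complexity model on the previous model's predictions; the paper's contribution lies in how the sequence of intermediate models is selected and constructed, which this sketch does not attempt.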

Towards Better Model Understanding with Path-Sufficient Explanations

no code implementations13 Sep 2021 Ronny Luss, Amit Dhurandhar

To overcome these limitations, we propose a novel method called the Path-Sufficient Explanations Method (PSEM), which outputs a sequence of sufficient explanations for a given input of strictly decreasing size (or value) -- from the original input to a minimally sufficient explanation -- which can be thought of as tracing the local boundary of the model in a smooth manner, thus providing better intuition about the local model behavior for the specific input.

Explainable artificial intelligence
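A minimal sketch of the general idea (our own simplification, not the authors' PSEM algorithm): greedily drop the least important feature as long as the model's prediction is preserved, producing a path of sufficient explanations of strictly decreasing size.

```python
import numpy as np

# Simplified illustration (not the authors' PSEM): greedily drop the least
# important feature while the prediction is preserved, producing a path of
# sufficient explanations of strictly decreasing size.

w = np.array([2.0, -1.0, 0.5, 0.1])       # toy linear classifier weights
x = np.array([1.0, 0.2, 0.8, 0.5])        # input to explain
baseline = np.zeros_like(x)               # value representing an absent feature

def predict(v):
    return int(v @ w > 0)

target = predict(x)
mask = np.ones(x.size, dtype=bool)
path = [mask.copy()]                      # path[0] keeps the full input

while mask.sum() > 1:
    candidates = []
    for i in np.flatnonzero(mask):
        trial = mask.copy()
        trial[i] = False
        # keep only removals that leave the prediction unchanged
        if predict(np.where(trial, x, baseline)) == target:
            candidates.append((abs(w[i] * x[i]), trial))
    if not candidates:                    # no smaller sufficient set exists
        break
    _, mask = min(candidates, key=lambda t: t[0])
    path.append(mask.copy())

sizes = [int(m.sum()) for m in path]
print("explanation sizes along the path:", sizes)
```

Every mask on the path is still sufficient for the original prediction; the sequence of shrinking explanations is what gives the "path" intuition about local model behavior.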

Treatment Effect Estimation using Invariant Risk Minimization

2 code implementations13 Mar 2021 Abhin Shah, Kartik Ahuja, Karthikeyan Shanmugam, Dennis Wei, Kush Varshney, Amit Dhurandhar

Inferring causal individual treatment effect (ITE) from observational data is a challenging problem whose difficulty is exacerbated by the presence of treatment assignment bias.

Domain Generalization

Learning to Initialize Gradient Descent Using Gradient Descent

no code implementations22 Dec 2020 Kartik Ahuja, Amit Dhurandhar, Kush R. Varshney

Non-convex optimization problems are challenging to solve; the success and computational expense of a gradient descent algorithm or variant depend heavily on the initialization strategy.

Empirical or Invariant Risk Minimization? A Sample Complexity Perspective

3 code implementations ICLR 2021 Kartik Ahuja, Jun Wang, Amit Dhurandhar, Karthikeyan Shanmugam, Kush R. Varshney

Recently, invariant risk minimization (IRM) was proposed as a promising solution to address out-of-distribution (OOD) generalization.

Linear Regression Games: Convergence Guarantees to Approximate Out-of-Distribution Solutions

3 code implementations28 Oct 2020 Kartik Ahuja, Karthikeyan Shanmugam, Amit Dhurandhar

In Ahuja et al., it was shown that solving for the Nash equilibria of a new class of "ensemble-games" is equivalent to solving IRM.

Model Agnostic Multilevel Explanations

no code implementations NeurIPS 2020 Karthikeyan Natesan Ramamurthy, Bhanukiran Vinzamuri, Yunfeng Zhang, Amit Dhurandhar

The method can also leverage side information, where users can specify points for which they may want the explanations to be similar.

Learning Global Transparent Models Consistent with Local Contrastive Explanations

no code implementations NeurIPS 2020 Tejaswini Pedapati, Avinash Balakrishnan, Karthikeyan Shanmugam, Amit Dhurandhar

Based on a key insight, we propose a novel method: we create custom Boolean features from sparse local contrastive explanations of the black-box model and then train a globally transparent model on just these. We show empirically that such models have higher local consistency than other known strategies, while remaining close in performance to models trained with access to the original data.
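A rough sketch of that pipeline (the thresholds below are invented stand-ins for what sparse local contrastive explanations would supply): binarize the inputs using explanation-derived feature/threshold pairs, then fit a small logistic model on the Boolean features alone.

```python
import numpy as np

# Rough sketch of the pipeline (the (feature, threshold) rules are invented
# stand-ins for what sparse local contrastive explanations would supply):
# binarize inputs with those rules, then train a simple transparent model
# on just the Boolean features.

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))
y = (X[:, 0] > 0.5).astype(float)         # black-box behavior to mimic

# (feature index, threshold) pairs, as if harvested from local explanations
rules = [(0, 0.5), (2, 0.3)]
B = np.column_stack([(X[:, j] > t).astype(float) for j, t in rules])

# transparent model: logistic regression trained by plain gradient descent
w, b = np.zeros(B.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(B @ w + b)))
    grad = p - y
    w -= 0.5 * B.T @ grad / len(y)
    b -= 0.5 * grad.mean()

acc = float(np.mean(((B @ w + b) > 0.0) == y.astype(bool)))
print(f"transparent-model accuracy on Boolean features: {acc:.2f}")
```

The resulting model is globally transparent by construction: each coefficient attaches to a human-readable rule such as "feature 0 exceeds 0.5".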

Invariant Risk Minimization Games

1 code implementation ICML 2020 Kartik Ahuja, Karthikeyan Shanmugam, Kush R. Varshney, Amit Dhurandhar

The standard risk minimization paradigm of machine learning is brittle when operating in environments whose test distributions are different from the training distribution due to spurious correlations.

Teaching AI to Explain its Decisions Using Embeddings and Multi-Task Learning

no code implementations5 Jun 2019 Noel C. F. Codella, Michael Hind, Karthikeyan Natesan Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis Wei, Aleksandra Mojsilović

Using machine learning in high-stakes applications often requires predictions to be accompanied by explanations comprehensible to the domain user, who has ultimate responsibility for decisions and outcomes.

Multi-Task Learning

Model Agnostic Contrastive Explanations for Structured Data

no code implementations31 May 2019 Amit Dhurandhar, Tejaswini Pedapati, Avinash Balakrishnan, Pin-Yu Chen, Karthikeyan Shanmugam, Ruchir Puri

Recently, a method [7] was proposed to generate contrastive explanations for differentiable models such as deep neural networks, where one has complete access to the model.

Enhancing Simple Models by Exploiting What They Already Know

no code implementations ICML 2020 Amit Dhurandhar, Karthikeyan Shanmugam, Ronny Luss

Our method also leverages the per-sample hardness estimate of the simple model, unlike prior works, which primarily consider the complex model's confidences/predictions; it is thus conceptually novel.

Small Data Image Classification

Leveraging Latent Features for Local Explanations

3 code implementations29 May 2019 Ronny Luss, Pin-Yu Chen, Amit Dhurandhar, Prasanna Sattigeri, Yunfeng Zhang, Karthikeyan Shanmugam, Chun-Chen Tu

As the application of deep neural networks proliferates in numerous areas such as medical imaging, video surveillance, and self-driving cars, the need for explaining the decisions of these models has become a hot research topic, both at the global and local level.

General Classification Self-Driving Cars

TED: Teaching AI to Explain its Decisions

no code implementations12 Nov 2018 Michael Hind, Dennis Wei, Murray Campbell, Noel C. F. Codella, Amit Dhurandhar, Aleksandra Mojsilović, Karthikeyan Natesan Ramamurthy, Kush R. Varshney

Artificial intelligence systems are being increasingly deployed due to their potential to increase the efficiency, scale, consistency, fairness, and accuracy of decisions.


Streaming Methods for Restricted Strongly Convex Functions with Applications to Prototype Selection

no code implementations21 Jul 2018 Karthik S. Gurumoorthy, Amit Dhurandhar

In this paper, we show that if the optimization function is restricted-strongly-convex (RSC) and restricted-smooth (RSM) -- a rich subclass of weakly submodular functions -- then a streaming algorithm with constant factor approximation guarantee is possible.

Prototype Selection

Improving Simple Models with Confidence Profiles

no code implementations NeurIPS 2018 Amit Dhurandhar, Karthikeyan Shanmugam, Ronny Luss, Peder Olsen

Our transfer method involves a theoretically justified weighting of samples during the training of the simple model using confidence scores of these intermediate layers.
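A schematic of that weighting step (random numbers stand in for real intermediate-layer probe confidences): each sample's weight is an average of the confidences that probes on the complex model's intermediate layers assign to its true label, and the simple model is then fit with those per-sample weights.

```python
import numpy as np

# Schematic of the sample-weighting step (random numbers stand in for real
# probe confidences): average the confidence that probes attached to the
# complex model's intermediate layers assign to each sample's true label,
# then fit the simple model with those per-sample weights.

rng = np.random.default_rng(1)
n, n_layers = 200, 4
probe_conf = rng.uniform(0.2, 1.0, size=(n, n_layers))
weights = probe_conf.mean(axis=1)         # one weight per training sample

# "simple model": weighted least squares on a toy regression problem
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=n)
sw = np.sqrt(weights)                     # weight rows for weighted LS
coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
print("weighted fit coefficients:", coef.round(2))
```

Samples the intermediate layers are already confident about are emphasized; in the paper this weighting is the theoretically justified transfer mechanism, whereas here the confidences are simply simulated.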

Teaching Meaningful Explanations

no code implementations29 May 2018 Noel C. F. Codella, Michael Hind, Karthikeyan Natesan Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis Wei, Aleksandra Mojsilovic

The adoption of machine learning in high-stakes applications such as healthcare and law has lagged in part because predictions are not accompanied by explanations comprehensible to the domain user, who often holds the ultimate responsibility for decisions and outcomes.

A Formal Framework to Characterize Interpretability of Procedures

no code implementations12 Jul 2017 Amit Dhurandhar, Vijay Iyengar, Ronny Luss, Karthikeyan Shanmugam

We provide a novel notion of what it means to be interpretable, looking past the usual association with human understanding.

Efficient Data Representation by Selecting Prototypes with Importance Weights

3 code implementations5 Jul 2017 Karthik S. Gurumoorthy, Amit Dhurandhar, Guillermo Cecchi, Charu Aggarwal

Prototypical examples that best summarize and compactly represent an underlying complex data distribution communicate meaningful insights to humans in domains where simple explanations are hard to extract.
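A much-simplified stand-in for the idea (greedy kernel coverage, not the paper's algorithm): pick prototypes that best represent the data under an RBF kernel, then give each an importance weight proportional to the fraction of points it represents best.

```python
import numpy as np

# Much-simplified stand-in for prototype selection with importance weights
# (greedy kernel coverage, not the paper's algorithm): pick prototypes that
# best represent the data, then weight each by how many points it represents.

rng = np.random.default_rng(2)
X = np.concatenate([rng.normal(-2.0, 0.3, (50, 1)),
                    rng.normal(2.0, 0.3, (50, 1))])  # two well-separated clusters

def rbf(A, B, gamma=1.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

K = rbf(X, X)                             # K[i, j]: similarity of point i to j
chosen = []
for _ in range(2):
    # coverage of each point by the prototypes selected so far
    cover = K[:, chosen].max(axis=1) if chosen else np.zeros(len(X))
    # marginal gain in average coverage from adding each candidate
    gains = np.maximum(cover[:, None], K).mean(axis=0) - cover.mean()
    gains[chosen] = -np.inf
    chosen.append(int(np.argmax(gains)))

# importance weight = fraction of points each prototype represents best
assign = np.argmax(K[:, chosen], axis=1)
weights = np.bincount(assign, minlength=len(chosen)) / len(X)
print("prototypes:", X[chosen].ravel().round(2), "weights:", weights)
```

On this toy bimodal data the greedy step picks one prototype per cluster, and the weights recover each cluster's share of the data.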

TIP: Typifying the Interpretability of Procedures

no code implementations9 Jun 2017 Amit Dhurandhar, Vijay Iyengar, Ronny Luss, Karthikeyan Shanmugam

This leads to the insight that the improvement in the target model is not only a function of the oracle model's performance, but also its relative complexity with respect to the target model.

Knowledge Distillation

Learning with Changing Features

no code implementations29 Apr 2017 Amit Dhurandhar, Steve Hanneke, Liu Yang

In particular, we propose an approach to provably determine the time instant from which the new/changed features start becoming relevant with respect to an output variable in an agnostic (supervised) learning setting.

Change Point Detection

Uncovering Group Level Insights with Accordant Clustering

no code implementations7 Apr 2017 Amit Dhurandhar, Margareta Ackerman, Xiang Wang

Clustering is a widely-used data mining tool, which aims to discover partitions of similar items in data.

Building an Interpretable Recommender via Loss-Preserving Transformation

no code implementations19 Jun 2016 Amit Dhurandhar, Sechan Oh, Marek Petrik

We propose a method for building an interpretable recommender system for personalizing online content and promotions.

Classification General Classification +2
