Search Results for author: Saumitra Mishra

Found 11 papers, 3 papers with code

On the Connection between Game-Theoretic Feature Attributions and Counterfactual Explanations

no code implementations • 13 Jul 2023 • Emanuele Albini, Shubham Sharma, Saumitra Mishra, Danial Dervovic, Daniele Magazzeni

Explainable Artificial Intelligence (XAI) has received widespread interest in recent years, and two of the most popular types of explanations are feature attributions and counterfactual explanations.

Tasks: counterfactual, Counterfactual Explanation, +3

GLOBE-CE: A Translation-Based Approach for Global Counterfactual Explanations

1 code implementation • 26 May 2023 • Dan Ley, Saumitra Mishra, Daniele Magazzeni

Counterfactual explanations have been widely studied in explainability, with a range of application-dependent methods prominent in fairness, recourse and model understanding.

Tasks: counterfactual, Fairness, +1

Robust Counterfactual Explanations for Neural Networks With Probabilistic Guarantees

1 code implementation • 19 May 2023 • Faisal Hamman, Erfaun Noorani, Saumitra Mishra, Daniele Magazzeni, Sanghamitra Dutta

There is an emerging interest in generating robust counterfactual explanations that would remain valid if the model is updated or changed even slightly.

Tasks: counterfactual, valid

CLEAR: Generative Counterfactual Explanations on Graphs

no code implementations • 16 Oct 2022 • Jing Ma, Ruocheng Guo, Saumitra Mishra, Aidong Zhang, Jundong Li

Counterfactual explanations promote explainability in machine learning models by answering the question "how should an input instance be perturbed to obtain a desired predicted label?".

Tasks: counterfactual, Counterfactual Explanation, +1
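The question quoted in the abstract above can be made concrete with a minimal tabular sketch: greedily perturb an input until a model's prediction flips to the desired label. The classifier and all names here are made up for illustration; this is not the paper's graph-based method.

```python
import numpy as np

# Toy stand-in for a black-box classifier exposing only a predict() call.
WEIGHTS = np.array([2.0, -1.0, 0.5])
THRESHOLD = 1.0

def predict(x):
    return int(WEIGHTS @ x > THRESHOLD)

def counterfactual(x, target, step=0.05, max_iters=1000):
    """Greedily perturb x one small coordinate step at a time
    until the prediction flips to the target label."""
    x_cf = x.astype(float).copy()
    for _ in range(max_iters):
        if predict(x_cf) == target:
            return x_cf
        best, best_score = None, None
        for i in range(len(x_cf)):
            for direction in (+step, -step):
                cand = x_cf.copy()
                cand[i] += direction
                # Score candidates by how far they move toward the target class.
                score = WEIGHTS @ cand if target == 1 else -(WEIGHTS @ cand)
                if best_score is None or score > best_score:
                    best, best_score = cand, score
        x_cf = best
    return None  # no counterfactual found within the step budget

x = np.array([0.2, 0.5, 0.1])   # predicted class 0
x_cf = counterfactual(x, target=1)
```

The returned `x_cf` is a minimally perturbed copy of `x` that the model assigns the desired label; real counterfactual methods add constraints such as plausibility and sparsity on top of this basic search.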

Robust Counterfactual Explanations for Tree-Based Ensembles

no code implementations • 6 Jul 2022 • Sanghamitra Dutta, Jason Long, Saumitra Mishra, Cecilia Tilli, Daniele Magazzeni

In this work, we propose a novel strategy -- that we call RobX -- to generate robust counterfactuals for tree-based ensembles, e.g., XGBoost.

Tasks: counterfactual

Global Counterfactual Explanations: Investigations, Implementations and Improvements

no code implementations • 14 Apr 2022 • Dan Ley, Saumitra Mishra, Daniele Magazzeni

Counterfactual explanations have been widely studied in explainability, with a range of application-dependent methods emerging in fairness, recourse and model understanding.

Tasks: counterfactual, Counterfactual Explanation, +1

Interpreting Black-boxes Using Primitive Parameterized Functions

no code implementations • 29 Sep 2021 • Mahed Abroshan, Saumitra Mishra, Mohammad Mahdi Khalili

One approach for interpreting black-box machine learning models is to find a global approximation of the model using simple interpretable functions, which is called a metamodel (a model of the model).

Tasks: Feature Importance, Symbolic Regression
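The metamodel idea described above can be sketched in a few lines: sample the black box and fit a least-squares combination of simple primitive functions to its outputs. The black box and the choice of primitives here are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

# Hypothetical black box: any callable mapping feature vectors to scores.
def black_box(X):
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

# Sample the black box over its input domain.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = black_box(X)

# Design matrix of interpretable primitives: [1, x0, x1, x0^2, x1^2].
Phi = np.column_stack([
    np.ones(len(X)), X[:, 0], X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2,
])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
# coef is now a readable global approximation: each entry is the weight
# of one primitive, so the metamodel can be inspected directly.
```

On this domain the fit recovers roughly `0.9*x0 + 0.5*x1^2`, a human-readable surrogate for the black box; the coefficients double as a crude global feature-importance measure.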

Reliable Local Explanations for Machine Listening

1 code implementation • 15 May 2020 • Saumitra Mishra, Emmanouil Benetos, Bob L. Sturm, Simon Dixon

One way to analyse the behaviour of machine learning models is through local explanations that highlight input features that maximally influence model predictions.
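A common way to obtain such local explanations is occlusion: zero out each input feature in turn and measure how much the model's output changes. A minimal sketch with an invented linear model (not the paper's machine-listening setup):

```python
import numpy as np

# Illustrative stand-in for a trained model's scoring function.
def model(x):
    return 3.0 * x[0] - 1.0 * x[1] + 0.1 * x[2]

def local_explanation(x):
    """Occlusion-style importance: change in output when each
    feature is replaced by a neutral value (here, zero)."""
    base = model(x)
    importance = np.zeros_like(x)
    for i in range(len(x)):
        occluded = x.copy()
        occluded[i] = 0.0
        importance[i] = abs(base - model(occluded))
    return importance

imp = local_explanation(np.array([1.0, 1.0, 1.0]))
# Feature 0 dominates the score for this input.
```

Perturbation-based explanations like this are simple but can be unstable, which is precisely the reliability concern the paper investigates.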

GAN-based Generation and Automatic Selection of Explanations for Neural Networks

no code implementations • 21 Apr 2019 • Saumitra Mishra, Daniel Stoller, Emmanouil Benetos, Bob L. Sturm, Simon Dixon

However, this requires a careful selection of hyper-parameters to generate interpretable examples for each neuron of interest, and current methods rely on a manual, qualitative evaluation of each setting, which is prohibitively slow.
