1 code implementation • ICML 2020 • Amr Mohamed Alexandari, Anshul Kundaje, Avanti Shrikumar
A limiting assumption of this algorithm is that p(y|x) is calibrated, which typically does not hold for modern neural networks.
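To make the calibration step concrete, below is a minimal sketch of bias-corrected temperature scaling in the spirit of this paper: a single temperature plus per-class bias terms fit by minimizing negative log-likelihood on held-out data. The function names and the Nelder-Mead optimizer are illustrative choices, not the paper's reference implementation.

```python
import numpy as np
from scipy.optimize import minimize

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_bias_corrected_temperature(val_logits, val_labels):
    """Fit a temperature T and per-class biases b on held-out data by
    minimizing negative log-likelihood, so that the calibrated model is
    p(y|x) = softmax(logits / T + b)."""
    n_classes = val_logits.shape[1]

    def nll(params):
        T, b = params[0], params[1:]
        probs = softmax(val_logits / T + b)
        ll = np.log(probs[np.arange(len(val_labels)), val_labels] + 1e-12)
        return -ll.mean()

    x0 = np.concatenate([[1.0], np.zeros(n_classes)])
    res = minimize(nll, x0, method="Nelder-Mead")
    return res.x[0], res.x[1:]  # temperature, per-class biases
```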
no code implementations • NeurIPS 2020 • Alex Tseng, Avanti Shrikumar, Anshul Kundaje
To address these shortcomings, we propose a novel attribution prior, where the Fourier transform of input-level attribution scores is computed at training time and high-frequency components of the Fourier spectrum are penalized.
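A rough sketch of how such a prior term could be added to a training loss is shown below (PyTorch), assuming inputs of shape (batch, length, channels) such as one-hot-encoded sequence. The hard frequency cutoff and penalty weight are illustrative placeholders; the paper's exact frequency weighting may differ.

```python
import torch

def fourier_attribution_prior(x, output, freq_cutoff=50, weight=1.0):
    """Penalize high-frequency components of the Fourier spectrum of
    input-level attribution scores (here, plain input gradients).
    `x` must have requires_grad=True; `output` is the model output on x."""
    # Attribution scores: gradient of the output w.r.t. the input
    grads = torch.autograd.grad(output.sum(), x, create_graph=True)[0]
    # Collapse the channel axis (e.g., one-hot bases) to one score per position
    scores = grads.sum(dim=-1)
    # Magnitude of the real-input Fourier spectrum along the sequence axis
    spectrum = torch.abs(torch.fft.rfft(scores, dim=-1))
    # Extra loss term: mean squared magnitude above the frequency cutoff
    return weight * spectrum[..., freq_cutoff:].pow(2).mean()
```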
no code implementations • 25 Sep 2019 • Avanti Shrikumar, Amr M. Alexandari, Anshul Kundaje
Label shift refers to the phenomenon where the marginal probability p(y) of observing a particular class changes between the training and test distributions, while the conditional probability p(x|y) stays fixed.
3 code implementations • 21 Jan 2019 • Amr Alexandari, Anshul Kundaje, Avanti Shrikumar
Label shift refers to the phenomenon where the prior class probability p(y) changes between the training and test distributions, while the conditional probability p(x|y) stays fixed.
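A minimal sketch of the maximum likelihood (EM) adaptation procedure studied in these two papers is shown below, assuming calibrated posteriors on unlabeled test data; variable names are illustrative.

```python
import numpy as np

def em_label_shift(test_posteriors, source_priors, n_iter=100, tol=1e-8):
    """Estimate test-set class priors q(y) by EM, given calibrated source
    posteriors p(y|x) on unlabeled test examples and source priors p(y)."""
    q = source_priors.copy()
    for _ in range(n_iter):
        # E-step: reweight posteriors by the current prior ratio, renormalize
        adapted = test_posteriors * (q / source_priors)
        adapted /= adapted.sum(axis=1, keepdims=True)
        # M-step: the new prior estimate is the mean adapted posterior
        q_new = adapted.mean(axis=0)
        if np.abs(q_new - q).max() < tol:
            return q_new
        q = q_new
    return q
```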
1 code implementation • 31 Oct 2018 • Avanti Shrikumar, Katherine Tian, Žiga Avsec, Anna Shcherbina, Abhimanyu Banerjee, Mahfuza Sharmin, Surag Nair, Anshul Kundaje
TF-MoDISco (Transcription Factor Motif Discovery from Importance Scores) is an algorithm for identifying motifs from basepair-level importance scores computed on genomic sequence data.
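As a toy illustration of the first step of such a workflow, the sketch below extracts high-importance windows ("seqlets") that would subsequently be clustered into motifs. This is not the TF-MoDISco API; all names, window sizes, and thresholds are hypothetical.

```python
import numpy as np

def extract_seqlets(importance_scores, window=21, threshold=None, max_seqlets=1000):
    """Toy seqlet extraction: slide a window over basepair-level importance
    scores and greedily keep the strongest non-overlapping segments."""
    scores = np.abs(importance_scores)
    # Total importance in each length-`window` segment
    window_sums = np.convolve(scores, np.ones(window), mode="valid")
    if threshold is None:
        threshold = np.percentile(window_sums, 99)
    seqlets = []
    occupied = np.zeros(len(window_sums), dtype=bool)
    # Take windows in decreasing order of total importance
    for idx in np.argsort(window_sums)[::-1]:
        if window_sums[idx] < threshold or len(seqlets) >= max_seqlets:
            break
        if not occupied[max(0, idx - window):idx + window].any():
            seqlets.append((idx, idx + window))  # (start, end) coordinates
            occupied[idx] = True
    return seqlets
```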
1 code implementation • 26 Jul 2018 • Avanti Shrikumar, Jocelin Su, Anshul Kundaje
We compare Neuron Integrated Gradients to DeepLIFT, a pre-existing, computationally efficient approach that can also be used to compute internal neuron importance.
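As a rough sketch, integrated gradients can be evaluated for internal neurons by interpolating their activations between a baseline and the observed values (PyTorch; `model_head` and the other names are assumptions, standing in for the part of the network downstream of the neurons of interest).

```python
import torch

def neuron_integrated_gradients(model_head, neuron_acts, baseline_acts, steps=50):
    """Importance of internal neurons via integrated gradients: accumulate
    gradients of the output w.r.t. activations interpolated between a
    baseline and the observed activations, then scale by the difference."""
    total_grads = torch.zeros_like(neuron_acts)
    for alpha in torch.linspace(0.0, 1.0, steps):
        interp = (baseline_acts + alpha * (neuron_acts - baseline_acts)
                  ).detach().requires_grad_(True)
        out = model_head(interp).sum()
        total_grads += torch.autograd.grad(out, interp)[0]
    # Riemann approximation of the path integral, times the activation delta
    return (neuron_acts - baseline_acts) * total_grads / steps
```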
1 code implementation • 20 Feb 2018 • Amr M. Alexandari, Anshul Kundaje, Avanti Shrikumar
In this work, we present a general framework for abstention that can be applied to optimize any metric of interest, adapts to label shift at test time, and works out of the box with any classifier that can be calibrated.
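As a toy instance of metric-driven abstention: with calibrated probabilities, the expected accuracy of predicting the argmax class equals the maximum posterior, so abstaining on the least-confident examples maximizes expected accuracy on what remains. The sketch below covers only this accuracy case; the paper's framework handles other metrics as well, and the names here are illustrative.

```python
import numpy as np

def abstain_for_accuracy(calibrated_probs, abstain_frac=0.1):
    """Keep the most confident (1 - abstain_frac) of examples: with
    calibrated p(y|x), the max probability is the expected accuracy of
    predicting argmax, so dropping low-confidence examples maximizes
    expected accuracy on the retained set."""
    confidence = calibrated_probs.max(axis=1)
    n_abstain = int(abstain_frac * len(confidence))
    keep = np.ones(len(confidence), dtype=bool)
    if n_abstain > 0:
        keep[np.argsort(confidence)[:n_abstain]] = False
    return keep  # boolean mask: True = predict, False = abstain
```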
14 code implementations • ICML 2017 • Avanti Shrikumar, Peyton Greenside, Anshul Kundaje
Here we present DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input.
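A minimal sketch of the idea for a single dense + ReLU layer is shown below (NumPy), using DeepLIFT's Rescale rule: contributions are defined relative to a reference input, and each nonlinearity's multiplier is the ratio of its output change to its input change rather than the instantaneous gradient. This illustrates the rule only and is not the released implementation.

```python
import numpy as np

def deeplift_dense_relu(x, x_ref, W, b):
    """DeepLIFT Rescale rule for one dense + ReLU layer: contributions are
    measured against a reference input x_ref, and the ReLU multiplier is
    (change in output) / (change in input) instead of the gradient."""
    z, z_ref = W @ x + b, W @ x_ref + b
    a, a_ref = np.maximum(z, 0.0), np.maximum(z_ref, 0.0)
    dz, da = z - z_ref, a - a_ref
    # Rescale multiplier; fall back to the gradient where dz is ~0
    safe_dz = np.where(np.abs(dz) > 1e-7, dz, 1.0)
    m = np.where(np.abs(dz) > 1e-7, da / safe_dz, (z > 0).astype(float))
    # Distribute each neuron's delta over inputs; rows sum to a - a_ref
    return (m[:, None] * W) * (x - x_ref)[None, :]
```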
1 code implementation • 5 May 2016 • Avanti Shrikumar, Peyton Greenside, Anna Shcherbina, Anshul Kundaje
Note: This paper describes an older version of DeepLIFT.