Search Results for author: Anshul Kundaje

Found 13 papers, 9 papers with code

Learning Important Features Through Propagating Activation Differences

14 code implementations • ICML 2017 • Avanti Shrikumar, Peyton Greenside, Anshul Kundaje

Here we present DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input.
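For intuition, here is a minimal sketch (not the authors' implementation) of the difference-from-reference idea for the simplest case, a single linear layer: each input feature is attributed `w_i * (x_i - x_ref_i)`, and these contributions sum exactly to the change in output relative to the reference input.

```python
import numpy as np

def deeplift_linear(w, x, x_ref):
    """Per-feature contributions for a linear layer y = w @ x + b,
    attributing the output difference y(x) - y(x_ref) to feature-wise
    differences from the reference. Illustrative sketch only."""
    return w * (x - x_ref)

w = np.array([2.0, -1.0, 0.5])
x = np.array([1.0, 3.0, 2.0])
x_ref = np.zeros(3)  # all-zeros reference input, an assumed choice

c = deeplift_linear(w, x, x_ref)
# Completeness: contributions sum to the output difference
assert np.isclose(c.sum(), w @ x - w @ x_ref)
```

For deeper networks, DeepLIFT propagates such contributions backwards layer by layer; this linear case is only the base step.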

Interpretable Machine Learning

Computationally Efficient Measures of Internal Neuron Importance

1 code implementation • 26 Jul 2018 • Avanti Shrikumar, Jocelin Su, Anshul Kundaje

We compare Neuron Integrated Gradients to DeepLIFT, a pre-existing computationally efficient approach that is applicable to calculating internal neuron importance.

Technical Note on Transcription Factor Motif Discovery from Importance Scores (TF-MoDISco) version 0.5.6.5

1 code implementation • 31 Oct 2018 • Avanti Shrikumar, Katherine Tian, Žiga Avsec, Anna Shcherbina, Abhimanyu Banerjee, Mahfuza Sharmin, Surag Nair, Anshul Kundaje

TF-MoDISco (Transcription Factor Motif Discovery from Importance Scores) is an algorithm for identifying motifs from basepair-level importance scores computed on genomic sequence data.

A General Framework for Abstention Under Label Shift

1 code implementation • 20 Feb 2018 • Amr M. Alexandari, Anshul Kundaje, Avanti Shrikumar

In this work, we present a general framework for abstention that can be applied to optimize any metric of interest, that is adaptable to label shift at test time, and that works out-of-the-box with any classifier that can be calibrated.

Domain Adaptation General Classification +2

Maximum Likelihood with Bias-Corrected Calibration is Hard-To-Beat at Label Shift Adaptation

3 code implementations • 21 Jan 2019 • Amr Alexandari, Anshul Kundaje, Avanti Shrikumar

Label shift refers to the phenomenon where the prior class probability p(y) changes between the training and test distributions, while the conditional probability p(x|y) stays fixed.
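The definition above can be made concrete with a toy simulation (a sketch with assumed Gaussian class-conditionals, not the paper's setup): the class prior p(y) differs between train and test, while p(x|y) is held identical in both.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(prior, n, rng):
    """Draw labels with class prior p(y=1)=prior, then features from a
    class-conditional p(x|y) that is the SAME in every domain."""
    y = rng.random(n) < prior
    x = rng.normal(loc=np.where(y, 2.0, -2.0), scale=1.0)  # fixed p(x|y)
    return x, y

x_tr, y_tr = sample(prior=0.5, n=10_000, rng=rng)  # train: p(y=1) = 0.5
x_te, y_te = sample(prior=0.9, n=10_000, rng=rng)  # test:  p(y=1) = 0.9

print(y_tr.mean(), y_te.mean())            # priors differ across domains
print(x_tr[y_tr].mean(), x_te[y_te].mean())  # conditionals agree (~2.0 both)
```

A classifier calibrated on the training prior will be miscalibrated at test time, which is the problem the adaptation methods in these papers address.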

Diabetic Retinopathy Detection Domain Adaptation +2

Network modelling of topological domains using Hi-C data

2 code implementations • 30 Jul 2017 • Y. X. Rachel Wang, Purnamrita Sarkar, Oana Ursu, Anshul Kundaje, Peter J. Bickel

However, one drawback of community detection is that most methods take exchangeability of the nodes in the network for granted, whereas the nodes in this case, i.e. the positions on the chromosomes, are not exchangeable.

Applications Genomics

Unsupervised Learning from Noisy Networks with Applications to Hi-C Data

no code implementations • NeurIPS 2016 • Bo Wang, Junjie Zhu, Armin Pourshafeie, Oana Ursu, Serafim Batzoglou, Anshul Kundaje

In this paper, we propose an optimization framework to mine useful structures from noisy networks in an unsupervised manner.

Community Detection Denoising

Fourier-transform-based attribution priors improve the interpretability and stability of deep learning models for genomics

no code implementations • NeurIPS 2020 • Alex Tseng, Avanti Shrikumar, Anshul Kundaje

To address these shortcomings, we propose a novel attribution prior, where the Fourier transform of input-level attribution scores is computed at training time, and high-frequency components of the Fourier spectrum are penalized.
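A hedged sketch of the idea (the cutoff choice and loss form here are assumptions, not the paper's exact prior): take the Fourier transform of a 1-D track of attribution scores and penalize the power in the high-frequency part of the spectrum, which encourages smooth, stable attributions.

```python
import numpy as np

def fourier_attribution_penalty(attributions, cutoff_frac=0.5):
    """Penalize high-frequency power in a 1-D attribution track.
    cutoff_frac is an assumed hyperparameter splitting the spectrum."""
    spectrum = np.abs(np.fft.rfft(attributions))  # magnitude spectrum
    cutoff = int(len(spectrum) * cutoff_frac)
    return np.sum(spectrum[cutoff:] ** 2)         # high-frequency power

t = np.linspace(0, 1, 512, endpoint=False)
smooth = np.sin(2 * np.pi * 3 * t)                  # low-frequency track
noisy = smooth + 0.5 * np.sin(2 * np.pi * 200 * t)  # high-frequency noise

# A noisy attribution track incurs a much larger penalty than a smooth one
assert fourier_attribution_penalty(noisy) > fourier_attribution_penalty(smooth)
```

In training, such a penalty would be added to the task loss so that gradients discourage noisy, high-frequency attribution patterns.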

Adapting to Label Shift with Bias-Corrected Calibration

no code implementations • 25 Sep 2019 • Avanti Shrikumar, Amr M. Alexandari, Anshul Kundaje

Label shift refers to the phenomenon where the marginal probability p(y) of observing a particular class changes between the training and test distributions, while the conditional probability p(x|y) stays fixed.

Diabetic Retinopathy Detection Domain Adaptation +2
