no code implementations • 26 Sep 2023 • Winston Chen, William Stafford Noble, Yang Young Lu
The complexity of deep neural networks (DNNs) makes them powerful but also makes them challenging to interpret, hindering their applicability in error-intolerant domains.
no code implementations • 1 Jan 2021 • Yang Young Lu, Wenbo Guo, Xinyu Xing, William Noble
Saliency methods can make deep neural network predictions more interpretable by identifying a set of critical features in an input sample, such as pixels that contribute most strongly to a prediction made by an image classifier.
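Since the entry describes saliency methods only in prose, here is a minimal sketch of one common variant, a plain gradient saliency map in PyTorch. The model and input are placeholders, and this is a generic illustration of gradient saliency rather than the method proposed in the paper.

```python
import torch
import torchvision.models as models

# Generic gradient-saliency sketch: score each pixel by the magnitude of the
# gradient of the predicted class logit with respect to that pixel.
model = models.resnet18()          # placeholder classifier, randomly initialized
model.eval()

x = torch.randn(1, 3, 224, 224, requires_grad=True)   # stand-in image
logits = model(x)
score = logits[0, logits.argmax(dim=1)].sum()          # logit of the predicted class

score.backward()                                       # backpropagate to the pixels
saliency = x.grad.abs().max(dim=1).values              # per-pixel importance map
```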
no code implementations • 25 Sep 2019 • Yang Young Lu, Wenbo Guo, Xinyu Xing, William Stafford Noble
In this work, we propose a data-driven technique that uses distribution-preserving decoys to infer robust saliency scores, in conjunction with a pre-trained convolutional neural network classifier and any off-the-shelf saliency method.
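Below is a hedged sketch of how such a decoy-based aggregation could look. Both `saliency_fn` and `make_decoy` are hypothetical stand-ins: the paper's decoys are learned to preserve the data distribution, not generated by a fixed rule as assumed here.

```python
import torch

def robust_saliency(model, x, saliency_fn, make_decoy, n_decoys=16):
    """Hypothetical aggregation sketch: run an off-the-shelf saliency method
    on several decoy versions of the input and summarize the resulting maps.
    `make_decoy` stands in for a distribution-preserving decoy generator,
    which the actual method learns from data (an assumption here)."""
    maps = torch.stack([saliency_fn(model, make_decoy(x)) for _ in range(n_decoys)])
    return maps.mean(dim=0), maps.std(dim=0)   # aggregate score and its variability
```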
1 code implementation • NeurIPS 2018 • Yang Young Lu, Yingying Fan, Jinchi Lv, William Stafford Noble
In this paper, we describe a method to increase the interpretability and reproducibility of DNNs by incorporating the idea of feature selection with controlled error rate.
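The entry points to feature selection with a controlled error rate. A minimal knockoff-style selection sketch is shown below, assuming the per-feature statistics `w` have already been computed by comparing each feature's importance with that of its knockoff copy; the exact statistics and network architecture used in the paper are not reproduced here.

```python
import numpy as np

def knockoff_select(w, q=0.1):
    """Knockoff-style selection sketch (an illustration, not the paper's exact
    pipeline): w[j] > 0 means feature j looks more important than its knockoff
    copy. Pick the smallest threshold whose estimated false discovery
    proportion stays below the target level q, then keep features above it."""
    thresholds = np.sort(np.abs(w[w != 0]))
    for t in thresholds:
        fdp = (1 + np.sum(w <= -t)) / max(1, np.sum(w >= t))
        if fdp <= q:
            return np.where(w >= t)[0]
    return np.array([], dtype=int)
```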