1 code implementation • 25 Feb 2021 • Walter Bennette, Sally Dufek, Karsten Maurer, Sean Sisti, Bunyod Tusmatov
In this paper we propose a generalization of the Adversarial Distance search that leverages concepts from adversarial machine learning to identify predictions for which a classifier may be overly confident.
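The core intuition can be sketched in a toy setting (this is an illustrative assumption, not the authors' implementation): for a linear classifier, the smallest L2 perturbation that flips a prediction has a closed form, the distance from the input to the decision boundary, |w·x + b| / ||w||. Predictions made with high confidence but lying a short adversarial distance from the boundary are candidates for overconfidence. The function names and thresholds below are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_distance(X, w, b):
    """Closed-form minimal L2 perturbation that crosses a linear
    decision boundary: distance from each row of X to the hyperplane."""
    return np.abs(X @ w + b) / np.linalg.norm(w)

def flag_overconfident(X, w, b, conf_thresh=0.9, dist_thresh=0.5):
    """Flag examples the model predicts with high confidence even though
    a small adversarial perturbation would flip the prediction.
    Thresholds are illustrative, not from the paper."""
    logits = X @ w + b
    p = sigmoid(logits)
    conf = np.maximum(p, 1.0 - p)          # confidence of the predicted class
    dist = adversarial_distance(X, w, b)   # minimal flipping perturbation
    return (conf >= conf_thresh) & (dist <= dist_thresh)

# Toy model with large weights: confident even very near the boundary.
w = np.array([10.0, 0.0])
b = 0.0
X = np.array([[0.3, 0.0],    # conf ~0.95, distance 0.3 -> flagged
              [2.0, 0.0],    # conf ~1.0,  distance 2.0 -> far from boundary
              [0.03, 0.0]])  # distance 0.003, but low confidence
flags = flag_overconfident(X, w, b)
```

Here only the first example is flagged: the model assigns it high confidence, yet a perturbation of L2 norm 0.3 would flip the label, the kind of prediction an Adversarial Distance style search would surface for review.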
1 code implementation • 29 Jun 2020 • Walter Bennette, Karsten Maurer, Sean Sisti
Through rigorous empirical experimentation, we demonstrate that our Adversarial Distance search discovers high-confidence errors at a rate greater than expected given model confidence.
no code implementations • 12 Oct 2018 • Karsten Maurer, Walter Bennette
Assessing the predictive accuracy of black box classifiers is challenging in the absence of labeled test datasets.