no code implementations • 12 Oct 2018 • Karsten Maurer, Walter Bennette
Assessing the predictive accuracy of black box classifiers is challenging in the absence of labeled test datasets.
1 code implementation • 29 Jun 2020 • Walter Bennette, Karsten Maurer, Sean Sisti
Through empirical experiments, we demonstrate that our Adversarial Distance search discovers high-confidence errors at a rate higher than the model's stated confidence would predict.
1 code implementation • 25 Feb 2021 • Walter Bennette, Sally Dufek, Karsten Maurer, Sean Sisti, Bunyod Tusmatov
In this paper, we propose a generalization of the Adversarial Distance search that leverages concepts from adversarial machine learning to identify predictions for which a classifier may be overly confident.
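The core idea can be sketched in miniature: a prediction that is highly confident yet sits unusually close to the decision boundary (i.e., a small adversarial perturbation flips it) is a candidate overconfident error. The toy code below is an illustrative sketch, not the authors' implementation — it uses a hand-fixed linear classifier (`w`, `b` are assumptions) where the minimal L2 flip distance has a closed form; the search described in the paper targets general black-box models, where confidence and boundary distance can genuinely disagree.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data and a hand-fixed linear decision boundary (illustrative only).
X = rng.normal(size=(200, 2))
w = np.array([1.0, -1.0])
b = 0.0

def confidence(X):
    """Sigmoid 'confidence' of the predicted class (max of p, 1-p)."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return np.maximum(p, 1.0 - p)

def adversarial_distance(X):
    """Minimal L2 perturbation that flips a linear classifier's prediction:
    |w.x + b| / ||w||. For black-box models this would instead be estimated
    with an adversarial attack."""
    return np.abs(X @ w + b) / np.linalg.norm(w)

conf = confidence(X)
dist = adversarial_distance(X)

# High confidence but small adversarial distance -> suspicious prediction.
suspicion = conf / (dist + 1e-8)
flagged = np.argsort(-suspicion)[:10]  # top-10 candidates for human review
```

For this linear toy the ranking degenerates to "smallest margin first"; the interesting case is a nonlinear black-box model, where a high-confidence prediction can nonetheless be flipped by a tiny perturbation — exactly the mismatch the search exploits.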
no code implementations • 30 Mar 2023 • Noah Fleischmann, Walter Bennette, Nathan Inkawhich
Machine learning models deployed in the open world may encounter observations that they were not trained to recognize, and they risk misclassifying such observations with high confidence.