How Does Knowledge of the AUC Constrain the Set of Possible Ground-truth Labelings?

7 Sep 2017 · Jacob Whitehill

Recent work on privacy-preserving machine learning has considered how data-mining competitions such as those hosted on Kaggle could potentially be "hacked", either intentionally or inadvertently, using information from an oracle that reports a classifier's accuracy on the test set. For binary classification tasks in particular, one of the most common accuracy metrics is the Area Under the ROC Curve (AUC), and in this paper we explore the mathematical structure of how the AUC is computed from an n-vector of real-valued "guesses" with respect to the ground-truth labels. We show how knowledge of a classifier's AUC on the test set constrains the set of possible ground-truth labelings, and we derive an algorithm both to compute the exact number of such labelings and to enumerate over them efficiently. Finally, we provide empirical evidence that, surprisingly, the number of compatible labelings can actually decrease as n grows, until a test-set-dependent threshold is reached.
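
As a concrete illustration of the underlying combinatorial question, the sketch below computes the AUC as the Wilcoxon-Mann-Whitney statistic (the fraction of positive-negative pairs ranked correctly, with ties counted as 1/2) and enumerates, by brute force, every ground-truth labeling of a small test set consistent with an oracle-reported AUC. This is a naive sketch of the problem setup only, not the paper's efficient counting and enumeration algorithm; the guesses and the target AUC are made-up illustrative values.

```python
# Brute-force sketch: which ground-truth labelings are compatible with an
# observed AUC? (Illustrative only; the paper derives an efficient algorithm.)
from itertools import combinations
from fractions import Fraction

def auc(guesses, labels):
    """Wilcoxon-Mann-Whitney AUC: fraction of (positive, negative) pairs
    ranked correctly, counting ties as 1/2. Returns an exact Fraction."""
    pos = [g for g, y in zip(guesses, labels) if y == 1]
    neg = [g for g, y in zip(guesses, labels) if y == 0]
    total = Fraction(0)
    for p in pos:
        for q in neg:
            if p > q:
                total += 1
            elif p == q:
                total += Fraction(1, 2)
    return total / (len(pos) * len(neg))

def compatible_labelings(guesses, observed_auc):
    """Yield every labeling (with at least one positive and one negative)
    whose AUC against `guesses` equals `observed_auc` exactly."""
    n = len(guesses)
    for k in range(1, n):                          # k = number of positives
        for pos_idx in combinations(range(n), k):
            chosen = set(pos_idx)
            labels = [1 if i in chosen else 0 for i in range(n)]
            if auc(guesses, labels) == observed_auc:
                yield tuple(labels)

# Hypothetical real-valued guesses and a hypothetical oracle-reported AUC.
guesses = [0.9, 0.4, 0.7, 0.1, 0.6]
target = Fraction(5, 6)
matches = list(compatible_labelings(guesses, target))
print(len(matches), "compatible labelings, e.g.", matches[:3])
```

Brute force scales as 2^n, which is exactly the enumeration the paper's algorithm is designed to avoid; the exact-arithmetic Fractions sidestep floating-point equality issues when matching the reported AUC.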
