How Does Knowledge of the AUC Constrain the Set of Possible Ground-truth Labelings?

7 Sep 2017 · Jacob Whitehill

Recent work on privacy-preserving machine learning has considered how data-mining competitions such as Kaggle could potentially be "hacked", either intentionally or inadvertently, by using information from an oracle that reports a classifier's accuracy on the test set. For binary classification tasks in particular, one of the most common accuracy metrics is the Area Under the ROC Curve (AUC), and in this paper we explore the mathematical structure of how the AUC is computed from an n-vector of real-valued "guesses" with respect to the ground-truth labels...
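As background for the quantity the paper analyzes: the AUC of a vector of real-valued guesses against binary ground-truth labels equals the Mann-Whitney statistic, i.e. the fraction of (positive, negative) example pairs that the guesses rank in the correct order, with ties counting one half. A minimal sketch of this pairwise computation (an illustration, not the paper's code) might look like:

```python
def auc(labels, guesses):
    """AUC as the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs ranked correctly by the guesses,
    with tied pairs contributing 1/2."""
    pos = [g for y, g in zip(labels, guesses) if y == 1]
    neg = [g for y, g in zip(labels, guesses) if y == 0]
    # Compare every positive example's guess to every negative's.
    score = sum(1.0 if p > n else 0.5 if p == n else 0.0
                for p in pos for n in neg)
    return score / (len(pos) * len(neg))
```

For example, `auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1])` returns 1.0, since every positive is scored above every negative. Note that this definition depends only on the ordering of the guesses, which is the structural fact that makes reasoning about the set of labelings consistent with a reported AUC tractable.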
