
Supervised classification via minimax probabilistic transformations

Conventional techniques for supervised classification constrain the classification rules considered and use surrogate losses in place of the classification 0-1 loss. Favored families of classification rules are those that enjoy parametric representations suitable for surrogate loss minimization and low-complexity properties suitable for overfitting control. This paper presents classification techniques based on robust risk minimization (RRM) that we call linear probabilistic classifiers (LPCs). The proposed techniques consider unconstrained classification rules, directly optimize the classification 0-1 loss, and provide performance bounds during learning. LPCs enable efficient learning by using linear optimization, and avoid overfitting by using RRM over polyhedral uncertainty sets of distributions. We also provide finite-sample generalization bounds for LPCs and show their competitive performance against state-of-the-art techniques on benchmark datasets.
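The learning step the abstract describes, robust risk minimization over a polyhedral uncertainty set solved via linear optimization, can be illustrated with a toy linear program. The sketch below is not the paper's LPC algorithm: the finite instance space, the indicator feature map phi, the tolerance lam, and the use of scipy.optimize.linprog are all illustrative assumptions. It minimizes the worst-case 0-1 risk of a probabilistic rule h(y|x) over distributions whose moments lie within lam of empirical estimates, using LP duality to fold the inner maximization into a single LP.

    # Minimal sketch (illustrative, not the paper's exact method): robust risk
    # minimization over a polyhedral uncertainty set, posed as one linear program.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    X_vals, Y_vals = [0, 1, 2], [0, 1]
    pairs = [(x, y) for x in X_vals for y in Y_vals]       # support of p(x, y)

    def phi(x, y):
        """Toy feature map (an assumption): indicators of x and of (x, y=1)."""
        f = np.zeros(6)
        f[x] = 1.0
        if y == 1:
            f[3 + x] = 1.0
        return f

    Phi = np.array([phi(x, y) for (x, y) in pairs])        # (6 pairs, 6 features)

    # Empirical moments tau from a synthetic sample; tolerance lam defines the
    # polyhedral set U = {p in simplex : |Phi^T p - tau| <= lam}.
    sample = [(rng.integers(3), rng.integers(2)) for _ in range(200)]
    tau = np.mean([phi(x, y) for (x, y) in sample], axis=0)
    lam = 0.05 * np.ones(6)

    # Encode |Phi^T p - tau| <= lam as A p <= b.
    A = np.vstack([Phi.T, -Phi.T])                         # (12, 6 pairs)
    b = np.concatenate([tau + lam, -(tau - lam)])

    # LP variables z = [h (6), nu (1), mu (12)]:
    #   minimize  nu + b^T mu
    #   s.t.      nu + (A^T mu)_{xy} + h(y|x) >= 1  for each pair (x, y)
    #             sum_y h(y|x) = 1                  for each x
    #             h >= 0, mu >= 0, nu free.
    # By LP duality, nu + b^T mu upper-bounds the worst-case 0-1 risk over U.
    n_pairs, n_rows = len(pairs), A.shape[0]
    c = np.concatenate([np.zeros(n_pairs), [1.0], b])

    # Dual feasibility rows: -h(y|x) - nu - (A^T mu)_{xy} <= -1.
    A_ub = np.hstack([-np.eye(n_pairs), -np.ones((n_pairs, 1)), -A.T])
    b_ub = -np.ones(n_pairs)

    # Each conditional rule h(.|x) must sum to one over y.
    A_eq = np.zeros((len(X_vals), n_pairs + 1 + n_rows))
    for i, x in enumerate(X_vals):
        for j, (xj, _) in enumerate(pairs):
            if xj == x:
                A_eq[i, j] = 1.0
    b_eq = np.ones(len(X_vals))

    bounds = [(0, None)] * n_pairs + [(None, None)] + [(0, None)] * n_rows
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

    h = res.x[:n_pairs].reshape(len(X_vals), len(Y_vals))  # h[x, y] = h(y|x)
    print("worst-case 0-1 risk bound:", res.fun)
    print("probabilistic rule h(y|x):\n", h.round(3))

The optimal objective value gives the performance bound available at learning time that the abstract mentions: no distribution in the uncertainty set can make the learned rule's expected 0-1 loss exceed it.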
