Making Convex Loss Functions Robust to Outliers using $e$-Exponentiated Transformation

16 Feb 2019  ·  Suvadeep Hajra

In this paper, we propose a novel {\em $e$-exponentiated} transformation, $0 \le e < 1$, for loss functions. When the transformation is applied to a convex loss function, the transformed loss function becomes more robust to outliers. Using a novel generalization error bound, we show theoretically that the transformed loss function admits a tighter bound on datasets corrupted by outliers. Empirically, we observe that the accuracy obtained with the transformed loss function can be significantly better than that obtained with the original loss function, and is comparable to that of other state-of-the-art methods, in the presence of label noise.
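The abstract does not spell out the exact form of the transformation, so the following is only a minimal illustrative sketch. It assumes the transformation raises a convex per-sample loss to the power $e$ with $0 \le e < 1$ (a reading suggested by the name "$e$-exponentiated", not confirmed by the abstract), which flattens the loss for large errors and thereby limits the influence of outliers. The loss choice, the value of $e$, and the toy data are all hypothetical.

```python
# Hypothetical sketch of an "e-exponentiated" loss transformation.
# ASSUMPTION: the transform raises each per-sample convex loss to the
# power e, 0 <= e < 1; the paper's exact definition may differ.

import numpy as np

def squared_loss(y_true, y_pred):
    """Convex base loss: per-sample squared error."""
    return (y_true - y_pred) ** 2

def e_exponentiated(loss_values, e=0.5):
    """Assumed transform: raise each per-sample loss to the power e (0 <= e < 1)."""
    assert 0.0 <= e < 1.0, "e must lie in [0, 1)"
    return loss_values ** e

# Toy regression targets with one large outlier in the labels.
y_true = np.array([1.0, 1.2, 0.9, 1.1, 10.0])   # last point is an outlier
y_pred = np.full_like(y_true, 1.05)              # a reasonable inlier fit

base = squared_loss(y_true, y_pred)
robust = e_exponentiated(base, e=0.5)

# The outlier dominates the original average loss but contributes far
# less after the transformation, so the objective is less sensitive to it.
print("original loss per sample:   ", np.round(base, 3))
print("transformed loss per sample:", np.round(robust, 3))
print("outlier share (original):   ", base[-1] / base.sum())
print("outlier share (transformed):", robust[-1] / robust.sum())
```

Running the sketch shows the single corrupted point accounting for nearly all of the untransformed loss, while its share of the transformed loss is much smaller, which is the qualitative robustness behavior the abstract describes.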
