Non-convex Optimization for Learning a Fair Predictor under Equalized Loss Fairness Constraint

29 Sep 2021 · Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan, Iman Vakilinia

Supervised learning models are increasingly used in domains such as lending, college admission, natural language processing, and face recognition. These models may inherit pre-existing biases from their training datasets and exhibit discrimination against protected social groups. Various fairness notions have been introduced to address these issues. In general, finding a fair predictor leads to a constrained optimization problem which, depending on the fairness notion, may be non-convex. In this work, we focus on Equalized Loss ($\textsf{EL}$), a fairness notion that requires the prediction error/loss to be equalized across different demographic groups. Imposing this constraint on the learning process leads to a non-convex optimization problem even if the loss function is convex. We introduce algorithms that leverage off-the-shelf convex programming tools and efficiently find the $\textit{global}$ optimum of this non-convex problem. In particular, we first propose the $\mathtt{ELminimizer}$ algorithm, which finds the optimal $\textsf{EL}$ fair predictor by reducing the non-convex optimization problem to a sequence of convex constrained optimizations. We then propose a simpler algorithm that is computationally more efficient than $\mathtt{ELminimizer}$ and finds a sub-optimal $\textsf{EL}$ fair predictor using $\textit{unconstrained}$ convex programming tools. Experiments on real-world data show the effectiveness of our algorithms.
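To make the idea concrete, here is a minimal sketch of the second approach the abstract describes: repeatedly solving an *unconstrained* convex weighted problem while searching over a scalar mixing weight until the two groups' losses are equalized. The synthetic data, linear model, squared loss, and the bisection over the weight `beta` are illustrative assumptions for this example, not the paper's exact algorithm.

```python
import numpy as np

# Illustrative sketch (not the paper's exact ELminimizer): search over a
# mixing weight beta in [0, 1], solving an unconstrained convex weighted
# problem at each step, until the two groups' losses are equalized.
# Synthetic data, linear model, and squared loss are assumptions here.

rng = np.random.default_rng(0)

def make_group(n, slope, noise):
    """Synthetic regression data for one demographic group."""
    X = rng.normal(size=(n, 2))
    y = X @ np.array([slope, -1.0]) + noise * rng.normal(size=n)
    return X, y

X0, y0 = make_group(200, 1.0, 0.3)   # group 0: easier to fit
X1, y1 = make_group(200, 2.0, 0.8)   # group 1: different slope, noisier

def group_loss(w, X, y):
    """Mean squared error of predictor w on one group."""
    return np.mean((X @ w - y) ** 2)

def weighted_minimizer(beta):
    """Closed-form argmin_w of (1 - beta) * L0(w) + beta * L1(w)."""
    A = (1 - beta) * X0.T @ X0 / len(y0) + beta * X1.T @ X1 / len(y1)
    b = (1 - beta) * X0.T @ y0 / len(y0) + beta * X1.T @ y1 / len(y1)
    return np.linalg.solve(A, b)

# As beta grows, group 1's loss falls and group 0's rises, so the gap
# L0 - L1 is monotone in beta and bisection finds its zero crossing.
lo, hi = 0.0, 1.0
for _ in range(60):
    beta = 0.5 * (lo + hi)
    w = weighted_minimizer(beta)
    gap = group_loss(w, X0, y0) - group_loss(w, X1, y1)
    if gap > 0:
        hi = beta   # group 0's loss is larger: weight group 0 more
    else:
        lo = beta

w_fair = weighted_minimizer(0.5 * (lo + hi))
```

Each inner step is an unconstrained convex (here even closed-form) problem, which is why off-the-shelf convex solvers suffice; only the outer one-dimensional search handles the non-convex fairness constraint.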
