Stochastic Learning of Additive Second-Order Penalties with Applications to Fairness

ICLR 2019  ·  Heinrich Jiang, Yifan Wu, Ofir Nachum

Many notions of fairness may be expressed as linear constraints, and the resulting constrained objective is often optimized by transforming the problem into its Lagrangian dual with additive linear penalties. In non-convex settings, the resulting problem may be difficult to solve because the Lagrangian is not guaranteed to have a deterministic saddle-point equilibrium. In this paper, we propose replacing the linear penalties with second-order (quadratic) ones, and we argue that this yields a more practical training procedure in non-convex, large-data settings. First, second-order penalties allow the penalized objective to be trained with a fixed value of the penalty coefficient, avoiding the instability and potential non-convergence associated with two-player min-max games. Second, we derive a method for efficiently computing the gradients of the second-order penalties in stochastic mini-batch settings. The resulting algorithm performs well empirically, learning an appropriately fair classifier on a number of standard benchmarks.
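
To make the mini-batch gradient issue concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of training a logistic model under a quadratic penalty λ·c(θ)², where c(θ) is a demographic-parity gap between two groups. All names here (`minibatch`, `gap_and_grad`, the toy data) are illustrative assumptions. The point of the sketch: since ∇c(θ)² = 2·c(θ)·∇c(θ) is a product of two expectations, estimating c and ∇c on the same mini-batch gives a biased gradient, while estimating them on two independent mini-batches keeps the stochastic gradient unbiased; meanwhile λ stays fixed throughout, so there is no min-max game over a Lagrange multiplier.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def minibatch(X, y, g, size):
    """Sample a random mini-batch of features, labels, and group indicators."""
    idx = rng.choice(len(y), size=size, replace=False)
    return X[idx], y[idx], g[idx]

def gap_and_grad(theta, X, g):
    """Mini-batch estimates of the parity gap c(theta) and its gradient."""
    p = sigmoid(X @ theta)
    dp = (p * (1.0 - p))[:, None] * X        # d p_i / d theta, shape (n, d)
    a, b = g == 1, g == 0
    c = p[a].mean() - p[b].mean()
    dc = dp[a].mean(axis=0) - dp[b].mean(axis=0)
    return c, dc

# Toy data (assumed for illustration): two groups with shifted features.
n, d = 4000, 5
g = rng.integers(0, 2, size=n)               # protected-group indicator
X = rng.normal(loc=g[:, None] * 0.8, scale=1.0, size=(n, d))
y = (rng.random(n) < sigmoid(X @ np.ones(d))).astype(float)

theta = np.zeros(d)
lam, lr = 10.0, 0.1                          # fixed penalty coefficient, step size
for step in range(2000):
    Xb, yb, _ = minibatch(X, y, g, 128)
    p = sigmoid(Xb @ theta)
    grad_loss = Xb.T @ (p - yb) / len(yb)    # mean logistic-loss gradient

    # grad c(theta)^2 = 2 c(theta) grad c(theta) is a product of two
    # expectations, so c and grad c are estimated on *independent*
    # mini-batches; reusing one batch would bias the product estimate.
    Xc, _, gc = minibatch(X, y, g, 128)
    Xd, _, gd = minibatch(X, y, g, 128)
    c_hat, _ = gap_and_grad(theta, Xc, gc)
    _, dc_hat = gap_and_grad(theta, Xd, gd)

    theta = theta - lr * (grad_loss + lam * 2.0 * c_hat * dc_hat)

# Report final training accuracy and parity gap.
p = sigmoid(X @ theta)
c_full, _ = gap_and_grad(theta, X, g)
print(f"accuracy={((p > 0.5) == y).mean():.3f}  parity gap={c_full:.4f}")
```

With λ fixed, this is ordinary SGD on a single objective rather than a two-player game; the standard penalty-method trade-off is that a quadratic penalty enforces the constraint only approximately, with the violation shrinking as λ grows.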

