Fair AutoML Through Multi-objective Optimization

29 Sep 2021 · Steven Gardner, Oleg Golovidov, Joshua Griffin, Patrick Koch, Rui Shi, Brett Wujek, Yan Xu

There has been a recent surge of interest in fairness measurement and bias mitigation in machine learning, given the identification of significant disparities in predictions from models in many domains. In part, this interest stems from early failures of simple attempts at achieving "fairness through unawareness" in practice. Non-sensitive data may be tightly coupled with the omitted sensitive factors, and systemic bias can poison the data in ways that are not recoverable, since the resulting model seeks to describe the effects found in the data on which it is trained. An effective way of preventing bias is to provide tools to measure it from multiple perspectives and to incorporate these measures within Automated Machine Learning (AutoML) algorithms in search of accurate and fair models. The emerging realization of the importance of such metrics highlights a long-missing capability: the ability to handle multiple objectives and constraints at all stages of the ML pipeline. In this paper, we introduce a novel AutoML framework that naturally supports multi-objective optimization. It generates higher-dimensional Pareto fronts and permits a single optimization process to efficiently achieve a proper approximation of the global front that depicts the trade-off among multiple model fairness and model accuracy measures. We show that both model training hyperparameters and fairness mitigation hyperparameters must be explored concurrently in order to characterize this trade-off most effectively. Results from experiments on multiple commonly investigated real-world case studies validate the effectiveness of our approach.
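The trade-off the abstract describes can be illustrated with a small sketch: jointly sample a model training hyperparameter and a bias-mitigation hyperparameter, score each configuration on accuracy and a fairness metric (here, demographic-parity difference), and keep the non-dominated configurations as an approximate Pareto front. The synthetic data, the group-reweighting mitigation, and the random search below are illustrative assumptions, not the paper's AutoML framework.

```python
# Minimal sketch (not the paper's implementation) of a joint search over a
# training hyperparameter and a fairness-mitigation hyperparameter, followed by
# extraction of the Pareto front over (accuracy, demographic-parity difference).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: a binary sensitive attribute correlated with the label.
n = 4000
sensitive = rng.integers(0, 2, size=n)
x = rng.normal(size=(n, 5)) + 0.8 * sensitive[:, None]
y = (x[:, 0] + 0.5 * sensitive + rng.normal(size=n) > 0.7).astype(int)

x_tr, x_va, y_tr, y_va, s_tr, s_va = train_test_split(
    x, y, sensitive, test_size=0.5, random_state=0
)

def demographic_parity_diff(pred, s):
    """Absolute difference in positive-prediction rates between the two groups."""
    return abs(pred[s == 1].mean() - pred[s == 0].mean())

# Joint search space: regularization strength C (training hyperparameter) and
# an up-weighting factor for the disadvantaged group (mitigation hyperparameter).
results = []
for _ in range(60):
    c = 10 ** rng.uniform(-3, 2)
    mitigation = rng.uniform(0.0, 3.0)
    weights = 1.0 + mitigation * (s_tr == 0)
    model = LogisticRegression(C=c, max_iter=1000)
    model.fit(x_tr, y_tr, sample_weight=weights)
    pred = model.predict(x_va)
    acc = (pred == y_va).mean()
    dpd = demographic_parity_diff(pred, s_va)
    results.append((acc, dpd, c, mitigation))

# Keep the non-dominated points: maximize accuracy, minimize disparity.
pareto = [
    r for r in results
    if not any(
        o[0] >= r[0] and o[1] <= r[1] and (o[0], o[1]) != (r[0], r[1])
        for o in results
    )
]
for acc, dpd, c, mitigation in sorted(pareto):
    print(f"accuracy={acc:.3f}  dp_diff={dpd:.3f}  C={c:.3g}  mitigation={mitigation:.2f}")
```

Fixing the mitigation strength and tuning only C (or vice versa) would trace out a single curve in this space; searching both jointly, as the paper argues, is what exposes the full accuracy-fairness frontier.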
