Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility

15 Jun 2020 · Sen Cui, Weishen Pan, Chang-Shui Zhang, Fei Wang

Bipartite ranking, which aims to learn from labeled data a scoring function that ranks positive individuals higher than negative ones, is widely adopted in applications where sample prioritization is needed. Recently, there have been rising concerns about whether the learned scoring function can cause systematic disparities across protected groups defined by sensitive attributes. While there can be a trade-off between fairness and performance, in this paper we propose a model-agnostic post-processing framework for balancing the two in the bipartite ranking scenario. Specifically, we maximize a weighted sum of utility and fairness by directly adjusting the relative ordering of samples across groups. By formulating this problem as the identification of an optimal warping path across the protected groups, we propose a non-parametric method that searches for this path via dynamic programming. Our method is compatible with various classification models and applicable to a variety of ranking fairness metrics. Comprehensive experiments on a suite of benchmark data sets and two real-world patient electronic health record repositories show that our method achieves a good balance between algorithm utility and ranking fairness. Furthermore, we experimentally verify the robustness of our method when faced with fewer training samples and with differences between the training and testing ranking score distributions.
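
The core of the framework is a dynamic-programming search for a warping path that interleaves the (fixed) within-group rankings of two protected groups. The sketch below illustrates that idea on two groups with binary labels; the additive utility term (cross-group pairs ordered consistently with their labels), the fairness credit (a simple cross-group AUC proxy for an assumed disadvantaged group A), and the weight `lam` are illustrative assumptions, not the paper's exact objective or metrics.

```python
# Minimal sketch of a warping-path dynamic program over two protected groups
# whose within-group rankings are kept fixed. Objective terms are illustrative.

def optimal_warping_path(labels_a, labels_b, lam=0.7):
    """Interleave group A and group B (each given best-first by the original
    scores, as binary label lists) to maximize
        lam * utility + (1 - lam) * fairness_credit.

    utility        : +1 / -1 for every cross-group pair ranked consistently /
                     inconsistently with its labels (an AUC-style count).
    fairness_credit: +1 for every positive of group A (assumed disadvantaged)
                     ranked above a negative of group B (a cross-group AUC proxy).
    Returns (objective, path), where path lists 'A'/'B' picks from the top of
    the merged ranking downwards.
    """
    n, m = len(labels_a), len(labels_b)

    # Suffix counts of positives / negatives still unplaced in each group.
    pos_a, neg_a = [0] * (n + 1), [0] * (n + 1)
    for i in range(n - 1, -1, -1):
        pos_a[i] = pos_a[i + 1] + (labels_a[i] == 1)
        neg_a[i] = neg_a[i + 1] + (labels_a[i] == 0)
    pos_b, neg_b = [0] * (m + 1), [0] * (m + 1)
    for j in range(m - 1, -1, -1):
        pos_b[j] = pos_b[j + 1] + (labels_b[j] == 1)
        neg_b[j] = neg_b[j + 1] + (labels_b[j] == 0)

    def gain_a(i, j):
        # Place A[i] above every remaining B sample (indices j..m-1).
        util = neg_b[j] if labels_a[i] == 1 else -pos_b[j]
        fair = neg_b[j] if labels_a[i] == 1 else 0
        return lam * util + (1 - lam) * fair

    def gain_b(i, j):
        # Place B[j] above every remaining A sample (indices i..n-1).
        util = neg_a[i] if labels_b[j] == 1 else -pos_a[i]
        return lam * util  # no fairness credit for group B in this proxy

    # dp[i][j] = best objective achievable when merging A[i:] with B[j:].
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    choice = [[''] * (m + 1) for _ in range(n + 1)]
    for i in range(n, -1, -1):
        for j in range(m, -1, -1):
            if i == n and j == m:
                continue
            best, pick = float('-inf'), ''
            if i < n and gain_a(i, j) + dp[i + 1][j] > best:
                best, pick = gain_a(i, j) + dp[i + 1][j], 'A'
            if j < m and gain_b(i, j) + dp[i][j + 1] > best:
                best, pick = gain_b(i, j) + dp[i][j + 1], 'B'
            dp[i][j], choice[i][j] = best, pick

    # Trace the optimal path from the top of the merged ranking.
    path, i, j = [], 0, 0
    while i < n or j < m:
        path.append(choice[i][j])
        if choice[i][j] == 'A':
            i += 1
        else:
            j += 1
    return dp[0][0], path


if __name__ == "__main__":
    # Toy example: labels listed in each group's original score order.
    score, path = optimal_warping_path([1, 1, 0, 0], [1, 0, 1, 0], lam=0.7)
    print(score, ''.join(path))
```

Larger weights `lam` recover an ordering closer to the original scores, while smaller weights push the assumed disadvantaged group's positives upward; the dynamic program runs in O(nm) time over the two group sizes.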
