Deep Generative Views to Mitigate Gender Classification Bias Across Gender-Race Groups

17 Aug 2022 · Sreeraj Ramachandran, Ajita Rattani

Published studies have reported bias in automated face-based gender classification algorithms across gender-race groups; specifically, unequal accuracy rates were obtained for women and dark-skinned people. The vision community has developed several strategies to mitigate the bias of gender classifiers. However, the efficacy of these mitigation strategies has been demonstrated for only a limited number of races, mostly Caucasian and African-American. Further, these strategies often trade classification accuracy for reduced bias. To advance the state-of-the-art, we leverage the power of generative views, structured learning, and evidential learning to mitigate gender classification bias. Through extensive experimental validation, we demonstrate that our bias mitigation strategy improves classification accuracy and reduces bias across gender-racial groups, achieving state-of-the-art performance in intra- and cross-dataset evaluations.


Results from the Paper

 Ranked #1 on Fairness on MORPH (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Fairness | DiveFace | Neighbour Learning | Degree of Bias (DoB) | 0.49 | #1 |
| Facial Attribute Classification | DiveFace | Neighbour Learning | Accuracy (%) | 98.60 | #1 |
| Fairness | MORPH | Neighbour Learning | Degree of Bias (DoB) | 6.26 | #1 |
| Facial Attribute Classification | MORPH | Neighbour Learning | Accuracy (%) | 96.41 | #1 |
| Fairness | UTKFace | Neighbour Learning | Degree of Bias (DoB) | 1.96 | #1 |
| Facial Attribute Classification | UTKFace | Neighbour Learning | Accuracy (%) | 94.76 | #1 |
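The Degree of Bias (DoB) reported above is commonly defined in the fairness literature as the standard deviation of a classifier's accuracy across demographic subgroups, so that lower values mean more uniform performance. A minimal sketch of that computation, assuming this standard definition (the paper may use a variant) and using hypothetical per-group accuracies:

```python
import numpy as np

def degree_of_bias(group_accuracies):
    """Degree of Bias (DoB): standard deviation of per-group accuracies.

    Lower is better (more uniform performance across demographic groups).
    Assumes the common definition from the fairness literature; the paper's
    exact variant may differ.
    """
    return float(np.std(list(group_accuracies.values())))

# Hypothetical per-group accuracies (%) for a gender classifier,
# chosen only to illustrate the computation.
accs = {
    "white_male": 98.2,
    "white_female": 97.5,
    "black_male": 96.8,
    "black_female": 95.9,
}
dob = degree_of_bias(accs)
```

A perfectly fair classifier under this metric would have identical accuracy on every subgroup, giving a DoB of 0, regardless of how high or low the overall accuracy is.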

