FLAC: Fairness-Aware Representation Learning by Suppressing Attribute-Class Associations

27 Apr 2023 · Ioannis Sarridis, Christos Koutlis, Symeon Papadopoulos, Christos Diou

Bias in computer vision systems can perpetuate or even amplify discrimination against certain populations. Since bias is often introduced by biased visual datasets, many recent research efforts focus on training fair models on such data. However, most of these methods rely heavily on the availability of protected attribute labels in the dataset, which limits their applicability, while label-unaware approaches, i.e., approaches that operate without such labels, exhibit considerably lower performance. To overcome these limitations, this work introduces FLAC, a methodology that minimizes the mutual information between the features extracted by the model and a protected attribute, without using the attribute labels. To this end, FLAC proposes a sampling strategy that highlights underrepresented samples in the dataset, and casts the problem of learning fair representations as a probability matching problem that leverages representations extracted by a bias-capturing classifier. It is theoretically shown that FLAC can indeed lead to fair representations that are independent of the protected attributes. FLAC surpasses the current state-of-the-art on Biased MNIST, CelebA, and UTKFace by 29.1%, 18.1%, and 21.9%, respectively. It also achieves a 2.2% accuracy increase on ImageNet-A, which consists of the most challenging samples of ImageNet. Finally, in most experiments, FLAC even outperforms bias label-aware state-of-the-art methods.
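The probability-matching idea above can be made concrete with a short sketch. Below is a minimal, hypothetical PyTorch illustration: pairwise cosine similarities in the main encoder's feature space and in a frozen bias-capturing classifier's feature space are each turned into probability distributions, and same-class pairs that the bias model separates (the underrepresented, bias-conflicting pairs) are upweighted so the main encoder pulls them together. The function names, the temperature, and the exact pair weighting are illustrative assumptions, not the paper's precise objective.

```python
# Hypothetical sketch of a FLAC-style probability-matching penalty.
# Not the paper's exact loss; an illustration of the idea under the
# assumptions stated in the text above.
import torch
import torch.nn.functional as F

def similarity_probs(feats: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Row-wise softmax over pairwise cosine similarities of a batch,
    with self-pairs masked out."""
    feats = F.normalize(feats, dim=1)
    logits = feats @ feats.t() / tau
    logits.fill_diagonal_(float("-inf"))
    return logits.softmax(dim=1)

def flac_style_penalty(main_feats: torch.Tensor,
                       bias_feats: torch.Tensor,
                       targets: torch.Tensor,
                       tau: float = 0.5) -> torch.Tensor:
    """Encourage the main encoder to keep same-class pairs close even when a
    frozen bias-capturing classifier separates them. No protected-attribute
    labels are used; `bias_feats` stand in for them."""
    p_main = similarity_probs(main_feats, tau)  # (B, B)
    p_bias = similarity_probs(bias_feats, tau)  # (B, B)

    same_class = targets.unsqueeze(0).eq(targets.unsqueeze(1)).float()
    same_class.fill_diagonal_(0.0)

    # Target distribution over same-class pairs, weighted inversely to the
    # bias similarity so that bias-conflicting (underrepresented) pairs
    # dominate the matching objective.
    weights = same_class * (1.0 - p_bias)
    weights = weights / weights.sum(dim=1, keepdim=True).clamp_min(1e-8)

    # Cross-entropy between the reweighted target and the main distribution.
    return -(weights * p_main.clamp_min(1e-8).log()).sum(dim=1).mean()

# Usage (hypothetical): total = F.cross_entropy(logits, targets) \
#                               + lam * flac_style_penalty(z, z_bias, targets)
```

In this sketch the penalty is added to the ordinary classification loss with a trade-off weight; the bias-capturing classifier is trained separately and kept frozen, so the main model never sees protected attribute labels.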


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Classification (ρ=0.999) | Biased MNIST | FLAC | Accuracy | 94.1% | #1 |
| Classification (ρ=0.990) | Biased MNIST | FLAC | Accuracy | 98.7% | #1 |
| Classification (ρ=0.995) | Biased MNIST | FLAC | Accuracy | 98.2% | #1 |
| Classification (ρ=0.997) | Biased MNIST | FLAC | Accuracy | 97.8% | #1 |
| HairColor / Bias-conflicting | CelebA | FLAC | Accuracy | 88.7% | #1 |
| HairColor / Unbiased | CelebA | FLAC | Accuracy | 91.2% | #2 |
| HeavyMakeup / Unbiased | CelebA | FLAC | Accuracy | 85.4% | #1 |
| HeavyMakeup / Bias-conflicting | CelebA | FLAC | Accuracy | 79.1% | #1 |
| Age / Bias-conflicting | UTKFace | FLAC | Accuracy | 81.1% | #1 |
| Race / Unbiased | UTKFace | FLAC | Accuracy | 92.0% | #1 |
| Race / Bias-conflicting | UTKFace | FLAC | Accuracy | 92.2% | #1 |
| Age / Unbiased | UTKFace | FLAC | Accuracy | 80.6% | #1 |
