no code implementations • CVPR 2023 • Taeuk Jang, Xiaoqian Wang
We theoretically show that the triplet loss amplifies the bias in self-supervised representation learning.
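For reference, the standard triplet loss that this claim concerns can be sketched as follows; this is the generic formulation, not the paper's analysis:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: pull the anchor toward the positive and push
    it away from the negative by at least `margin` (squared L2 distances)."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0)

# a far-away negative incurs zero loss; a nearby one is penalized
a = np.zeros(2)
p = np.zeros(2)
far_negative = np.array([2.0, 0.0])
near_negative = np.array([0.5, 0.0])
```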
no code implementations • NeurIPS 2021 • Taeuk Jang, Pengyi Shi, Xiaoqian Wang
Because our method only requires an estimated probability distribution over model outputs, rather than access to the classification model itself, our post-processing model can be applied to a wide range of classifiers, improving fairness in a model-agnostic manner while preserving privacy.
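One common form of model-agnostic post-processing is to adjust decision thresholds per group using only predicted probabilities. The sketch below equalizes positive prediction rates across groups (demographic parity) via group-specific thresholds; it is an illustrative stand-in, not the paper's actual method, which operates on the estimated output distribution.

```python
import numpy as np

def group_thresholds_for_parity(probs, groups, grid=np.linspace(0.05, 0.95, 91)):
    """Pick a per-group decision threshold so that each group's positive
    prediction rate matches the overall rate at threshold 0.5.

    Needs only predicted probabilities and group labels, never the model:
    this is what makes the post-processing model-agnostic.
    """
    target_rate = np.mean(probs >= 0.5)  # overall positive rate
    thresholds = {}
    for g in np.unique(groups):
        p = probs[groups == g]
        # choose the threshold whose positive rate is closest to the target
        rates = np.array([(p >= t).mean() for t in grid])
        thresholds[g] = grid[np.argmin(np.abs(rates - target_rate))]
    return thresholds

# toy example: a biased scorer that systematically under-scores group 1
rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=1000)
probs = np.clip(rng.normal(0.6 - 0.2 * groups, 0.15), 0.0, 1.0)
thr = group_thresholds_for_parity(probs, groups)
```

With the biased scorer, the disadvantaged group receives a lower threshold, bringing its positive rate back in line with the other group's.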
no code implementations • 29 Sep 2021 • Taeuk Jang, Xiaoqian Wang, Heng Huang
To achieve this goal, we reformulate the input data by removing the sensitive information, and strengthen model fairness by minimizing the marginal contribution of the sensitive feature.
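The two ingredients can be sketched in a few lines: drop the sensitive column from the input, and quantify the sensitive feature's marginal contribution as the change in fit quality when it is removed. This is a minimal least-squares illustration under assumed toy data, not the paper's formulation.

```python
import numpy as np

def marginal_contribution(X, y, sensitive_idx):
    """Marginal contribution of the sensitive feature, measured here as the
    drop in least-squares fit quality (R^2) when that column is removed.
    Illustrative sketch only; the paper's measure is more involved."""
    def r2(features):
        w, *_ = np.linalg.lstsq(features, y, rcond=None)
        resid = y - features @ w
        return 1.0 - resid.var() / y.var()
    X_without = np.delete(X, sensitive_idx, axis=1)
    return r2(X) - r2(X_without)

# toy data: the label leaks the sensitive feature (column 2)
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = X[:, 0] + 0.8 * X[:, 2] + 0.1 * rng.normal(size=500)
contrib = marginal_contribution(X, y, sensitive_idx=2)
# eliminating the sensitive column removes that information from the input
X_fair = np.delete(X, 2, axis=1)
```

A large `contrib` signals that predictions depend on the sensitive feature; driving it toward zero is the fairness objective the abstract describes.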