no code implementations • 10 Oct 2024 • Hoin Jung, Taeuk Jang, Xiaoqian Wang
Recent advancements in Vision-Language Models (VLMs) have enabled complex multimodal tasks by processing text and image data jointly, significantly advancing the field of artificial intelligence.
no code implementations • CVPR 2024 • Taeuk Jang, Xiaoqian Wang
Learning fair representation in deep learning is essential to mitigate discriminatory outcomes and enhance trustworthiness.
no code implementations • CVPR 2023 • Taeuk Jang, Xiaoqian Wang
We theoretically show that the triplet loss amplifies the bias in self-supervised representation learning.
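The claim concerns the standard triplet loss used in contrastive self-supervised learning. A minimal sketch of that loss on toy embeddings (the arrays and the bias commentary here are illustrative assumptions, not the paper's construction):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Standard triplet loss: pull the anchor toward the positive,
    # push it away from the negative, up to a margin.
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)

# Toy embeddings (hypothetical). If negatives are predominantly drawn
# from one demographic group, the push-away term repeatedly separates
# anchors from that group, encoding group membership into the
# representation -- the kind of amplification effect analyzed here.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])
n = np.array([0.5, 0.5])
loss = triplet_loss(a, p, n)  # -> 0.51
```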
no code implementations • NeurIPS 2021 • Taeuk Jang, Pengyi Shi, Xiaoqian Wang
As we only need an estimated probability distribution over model outputs rather than access to the classification model itself, our post-processing model can be applied to a wide range of classifiers, improving fairness in a model-agnostic manner while preserving privacy.
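The general idea of model-agnostic post-processing can be illustrated with per-group thresholding of predicted probabilities. This is a generic sketch of the setting (equalizing positive-prediction rates across groups from scores alone), not the paper's specific algorithm; all names and data below are hypothetical:

```python
import numpy as np

def group_thresholds(probs, groups, target_rate):
    # Post-processing sketch: given only predicted probabilities (no
    # access to the underlying model), choose a per-group threshold so
    # that every group receives the same positive-prediction rate on
    # the observed sample (a demographic-parity-style criterion).
    thresholds = {}
    for g in np.unique(groups):
        p = np.sort(probs[groups == g])[::-1]   # scores, descending
        k = max(int(round(target_rate * len(p))), 1)
        thresholds[g] = p[k - 1]                # k-th highest score
    return thresholds

# Hypothetical scores from some black-box classifier, plus group labels.
probs = np.array([0.9, 0.8, 0.4, 0.3, 0.7, 0.6, 0.2, 0.1])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
th = group_thresholds(probs, groups, target_rate=0.5)
preds = (probs >= np.vectorize(th.get)(groups)).astype(int)
# Both groups now have a 50% positive rate.
```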
no code implementations • 29 Sep 2021 • Taeuk Jang, Xiaoqian Wang, Heng Huang
To achieve this goal, we reformulate the data input to eliminate sensitive information, and strengthen model fairness by minimizing the marginal contribution of the sensitive feature.
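One simple way to read "marginal contribution" is the change in a score when the sensitive feature is effectively removed (here, replaced by its mean). This is a hypothetical illustration with a toy scoring rule and toy data, not the estimator used in the paper:

```python
import numpy as np

def score_fn(X, y):
    # Toy score: accuracy of a threshold rule on the feature sum.
    sums = X.sum(axis=1)
    preds = (sums > sums.mean()).astype(int)
    return (preds == y).mean()

def marginal_contribution(score_fn, X, y, sensitive_idx):
    # Marginal contribution of one feature, sketched as the score drop
    # when that column is replaced by its mean (i.e., neutralized).
    base = score_fn(X, y)
    X_masked = X.copy()
    X_masked[:, sensitive_idx] = X[:, sensitive_idx].mean()
    return base - score_fn(X_masked, y)

# Toy data where column 0 (the "sensitive" feature) is predictive of y.
X = np.array([[1.0, 2.0], [1.0, 1.0], [0.0, 2.0], [0.0, 1.0]])
y = np.array([1, 1, 0, 0])
contribution = marginal_contribution(score_fn, X, y, sensitive_idx=0)
```

A fairness-oriented objective along the lines described would drive this quantity toward zero, so that predictions no longer depend on the sensitive feature.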