Search Results for author: Kaixiang Dong

Found 2 papers, 1 paper with code

The Double-Edged Sword of Input Perturbations to Robust Accurate Fairness

no code implementations • 1 Apr 2024 • Xuran Li, Peng Wu, Yanting Chen, Xingjun Ma, Zhen Zhang, Kaixiang Dong

Deep neural networks (DNNs) are known to be sensitive to adversarial input perturbations, leading to a reduction in either prediction accuracy or individual fairness.

Adversarial Attack • Fairness

RobustFair: Adversarial Evaluation through Fairness Confusion Directed Gradient Search

1 code implementation • 18 May 2023 • Xuran Li, Peng Wu, Kaixiang Dong, Zhen Zhang, Yanting Chen

The fairness confusion matrix categorizes predictions as true fair, true biased, false fair, and false biased; perturbations guided by it produce a dual impact on instances and their similar counterparts, either undermining prediction accuracy (robustness) or causing biased predictions (individual fairness).
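Illustrative only: a minimal sketch of how the four fairness confusion categories could be assigned, assuming the standard individual-fairness setup in which a similar counterpart differs from the instance only in its protected attributes. The function name and signature are hypothetical, not taken from the paper's released code.

def fairness_confusion_category(pred_x, pred_x_similar, label):
    """Assign a prediction pair to one cell of the fairness confusion matrix.

    pred_x         -- model prediction on the instance
    pred_x_similar -- prediction on a similar counterpart (identical features
                      except for protected attributes; assumed setup)
    label          -- ground-truth label for the instance
    """
    accurate = pred_x == label              # true/false axis (robustness)
    consistent = pred_x == pred_x_similar   # fair/biased axis (individual fairness)
    if accurate and consistent:
        return "true fair"
    if accurate:
        return "true biased"
    if consistent:
        return "false fair"
    return "false biased"

# Example: the prediction is accurate on the instance but flips on the
# similar counterpart, so the pair lands in the "true biased" cell.
print(fairness_confusion_category(pred_x=1, pred_x_similar=0, label=1))  # true biased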

Data Augmentation • Fairness
