1 code implementation • 30 Mar 2023 • Jerone T. A. Andrews, Przemyslaw Joniak, Alice Xiang
Few datasets contain self-identified sensitive attributes; inferring attributes risks introducing additional biases; and collecting attributes can carry legal risks.
1 code implementation • NeurIPS 2023 • Jerone T. A. Andrews, Dora Zhao, William Thong, Apostolos Modas, Orestis Papakyriakopoulos, Alice Xiang
Human-centric computer vision (HCCV) data curation practices often neglect privacy and bias concerns, leading to dataset retractions and unfair models.
1 code implementation • 21 Oct 2022 • Dora Zhao, Jerone T. A. Andrews, Alice Xiang
We show that models can learn to exploit correlations with respect to multiple attributes (e.g., {$\texttt{computer}$, $\texttt{keyboard}$}), which current metrics do not account for.
no code implementations • 30 Jun 2020 • Beatrice Perez, Sara R. Machado, Jerone T. A. Andrews, Nicolas Kourtellis
Donations to charity-based crowdfunding environments have been on the rise in the last few years.
1 code implementation • 18 Feb 2020 • Jerone T. A. Andrews, Yidan Zhang, Lewis D. Griffin
Model anonymization is the process of transforming these artifacts such that the apparent capture model is changed.
no code implementations • 20 Jun 2019 • Jerone T. A. Andrews, Thomas Tanay, Lewis D. Griffin
New quantitative results are presented that support an explanation in terms of the geometry of the representation spaces used by the verification systems.
no code implementations • 19 Jun 2018 • Thomas Tanay, Jerone T. A. Andrews, Lewis D. Griffin
Designing models that are robust to small adversarial perturbations of their inputs has proven remarkably difficult.