no code implementations • 7 Mar 2025 • Laura Weidinger, Inioluwa Deborah Raji, Hanna Wallach, Margaret Mitchell, Angelina Wang, Olawale Salaudeen, Rishi Bommasani, Deep Ganguli, Sanmi Koyejo, William Isaac
There is an increasing imperative to anticipate and understand the performance and safety of generative AI systems in real-world deployment contexts.
1 code implementation • 4 Feb 2025 • Angelina Wang, Michelle Phan, Daniel E. Ho, Sanmi Koyejo
Algorithmic fairness has conventionally adopted a perspective of racial color-blindness (i.e., difference-unaware treatment).
2 code implementations • 6 Feb 2024 • Xuechunzi Bai, Angelina Wang, Ilia Sucholutsky, Thomas L. Griffiths
Large language models (LLMs) can pass explicit social bias tests but still harbor implicit biases, similar to humans who endorse egalitarian beliefs yet exhibit subtle biases.
no code implementations • 6 Feb 2024 • Angelina Wang, Xuechunzi Bai, Solon Barocas, Su Lin Blodgett
However, certain stereotype-violating errors are more experientially harmful for men, potentially due to perceived threats to masculinity.
1 code implementation • ICCV 2023 • Angelina Wang, Olga Russakovsky
Transfer learning is beneficial because it allows the expressive features of models pretrained on large-scale datasets to be fine-tuned for target tasks with smaller, more domain-specific datasets.
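As a point of reference only, here is a minimal, generic sketch of transfer learning via fine-tuning in PyTorch: an ImageNet-pretrained backbone is adapted to a hypothetical 10-class target task. The dataset, number of classes, and choice of which layers to freeze are illustrative assumptions, not the specific setup studied in this paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Generic fine-tuning sketch: adapt a pretrained backbone to a smaller,
# domain-specific target task (hypothetical 10-class dataset).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 10)  # new head for the target task

# Optionally freeze early layers and fine-tune only the last block and the head.
for name, param in model.named_parameters():
    if not name.startswith(("layer4", "fc")):
        param.requires_grad = False

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
criterion = nn.CrossEntropyLoss()

def finetune_step(images, labels):
    """One gradient step on a batch from the (assumed) target dataset."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing early layers is one common choice when the target dataset is small; full fine-tuning is the other standard option, and the trade-off depends on how far the target domain is from the pretraining data.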
no code implementations • ICCV 2023 • Nicole Meister, Dora Zhao, Angelina Wang, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky
Gender biases are known to exist within large-scale visual datasets and can be reflected or even amplified in downstream models.
no code implementations • 14 Jun 2022 • Angelina Wang, Solon Barocas, Kristen Laird, Hanna Wallach
We propose multiple measurement techniques for each type of harm.
1 code implementation • 10 May 2022 • Angelina Wang, Vikram V. Ramaswamy, Olga Russakovsky
In this work, we grapple with questions that arise along three stages of the machine learning pipeline when incorporating intersectionality as multiple demographic attributes: (1) which demographic attributes to include as dataset labels, (2) how to handle the progressively smaller size of subgroups during model training, and (3) how to move beyond existing evaluation metrics when benchmarking model fairness for more subgroups.
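To make stage (3) concrete, here is a minimal sketch of benchmarking a model across intersectional subgroups defined by two demographic attributes. The column names ("gender", "age_group", "correct") are hypothetical placeholders, not the dataset labels or evaluation metrics used in the paper.

```python
import pandas as pd

def intersectional_report(df: pd.DataFrame, attrs=("gender", "age_group")):
    """Per-subgroup accuracy for every combination of the given attributes."""
    grouped = df.groupby(list(attrs))
    report = grouped["correct"].agg(accuracy="mean", n="size").reset_index()
    # Small subgroup sizes (n) make per-group estimates noisy -- one of the
    # practical issues that arises once subgroups are intersectional.
    return report.sort_values("n")

# Usage with toy predictions:
# df = pd.DataFrame({"gender": [...], "age_group": [...], "correct": [...]})
# print(intersectional_report(df))
```

Sorting by subgroup size surfaces the combinations with the fewest examples, which are exactly the ones where standard aggregate metrics become unreliable.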
1 code implementation • ICCV 2021 • Dora Zhao, Angelina Wang, Olga Russakovsky
Image captioning is an important task for benchmarking visual reasoning and for enabling accessibility for people with vision impairments.
1 code implementation • 24 Feb 2021 • Angelina Wang, Olga Russakovsky
We introduce and analyze a new, decoupled metric for measuring bias amplification, $\text{BiasAmp}_{\rightarrow}$ (Directional Bias Amplification).
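The sketch below is a simplified illustration of the underlying idea only, not the paper's exact $\text{BiasAmp}_{\rightarrow}$ definition (the entry above notes a released code implementation for that). It compares how often a task label co-occurs with an attribute in the ground truth versus in the model's predictions; a positive gap suggests the attribute-to-task correlation is amplified beyond what the data already contains.

```python
import numpy as np

def attribute_to_task_gap(attr, task_true, task_pred):
    """attr, task_true, task_pred: equal-length binary arrays (illustrative)."""
    attr = np.asarray(attr, dtype=bool)
    task_true = np.asarray(task_true, dtype=float)
    task_pred = np.asarray(task_pred, dtype=float)
    data_rate = task_true[attr].mean()  # P(task = 1 | attribute = 1) in the data
    pred_rate = task_pred[attr].mean()  # P(predicted task = 1 | attribute = 1)
    return pred_rate - data_rate

# Toy usage with hypothetical binary labels:
# attribute_to_task_gap([1, 1, 0, 0], [1, 0, 0, 1], [1, 1, 0, 1])  # -> 0.5
```

The "directional" aspect of the metric distinguishes amplification from attribute to task from amplification from task to attribute; the sketch above only shows one direction.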
no code implementations • 1 Jan 2021 • Angelina Wang, Olga Russakovsky
The conversation around the fairness of machine learning models is growing and evolving.
2 code implementations • ECCV 2020 • Angelina Wang, Alexander Liu, Ryan Zhang, Anat Kleiman, Leslie Kim, Dora Zhao, Iroha Shirai, Arvind Narayanan, Olga Russakovsky
Machine learning models are known to perpetuate and even amplify the biases present in the data.
no code implementations • 11 May 2019 • Angelina Wang, Thanard Kurutach, Kara Liu, Pieter Abbeel, Aviv Tamar
We further demonstrate our approach on learning to imagine and execute in three environments, the last of which is deformable rope manipulation on a PR2 robot.
no code implementations • 22 Nov 2017 • William Wang, Angelina Wang, Aviv Tamar, Xi Chen, Pieter Abbeel
We posit that a generative approach is the natural remedy for this problem, and propose a method for classification using generative models.
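For orientation, here is a generic sketch of classification with class-conditional generative models: fit a simple density $p(x \mid y)$ per class and classify via Bayes' rule, $\hat{y} = \arg\max_y p(x \mid y)\,p(y)$. The Gaussian densities and class structure here are assumptions made for brevity; the paper's method uses richer generative models than this.

```python
import numpy as np
from scipy.stats import multivariate_normal

class GenerativeClassifier:
    """Toy generative classifier: one Gaussian density per class, Bayes' rule."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.models_, self.priors_ = {}, {}
        for c in self.classes_:
            Xc = X[y == c]
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularize
            self.models_[c] = multivariate_normal(mean=Xc.mean(axis=0), cov=cov)
            self.priors_[c] = len(Xc) / len(X)
        return self

    def predict(self, X):
        # argmax over classes of log p(x | y) + log p(y)
        scores = np.column_stack(
            [self.models_[c].logpdf(X) + np.log(self.priors_[c]) for c in self.classes_]
        )
        return self.classes_[scores.argmax(axis=1)]
```

Because the decision is made by comparing how well each class's model explains the input, inputs that no class explains well receive uniformly low scores, which is one intuition for why generative approaches can behave more cautiously than discriminative ones.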