Search Results for author: Angelina Wang

Found 12 papers, 5 with code

Safer Classification by Synthesis

no code implementations · 22 Nov 2017 · William Wang, Angelina Wang, Aviv Tamar, Xi Chen, Pieter Abbeel

We posit that a generative approach is the natural remedy for this problem, and propose a method for classification using generative models.

Classification · General Classification
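
The abstract above describes classifying an input by asking which class-conditional generative model explains it best. A minimal sketch of that idea in Python, substituting simple Gaussian class-conditional densities for the paper's deep generative models (illustrative only, not the authors' method):

import numpy as np
from scipy.stats import multivariate_normal

def fit_class_generators(X, y):
    # One generative model per class; a Gaussian density stands in here
    # for the paper's deep generative models (an assumption for brevity).
    models = {}
    for c in np.unique(y):
        Xc = X[y == c]
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized
        models[c] = multivariate_normal(mean=Xc.mean(axis=0), cov=cov)
    return models

def classify_by_synthesis(models, x):
    # Predict the class whose generative model assigns x the highest likelihood.
    return max(models, key=lambda c: models[c].logpdf(x))

Consistent with the paper's title, one appeal of this setup is a natural rejection option: if every class model assigns the input low likelihood, the classifier can abstain rather than guess overconfidently.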

Learning Robotic Manipulation through Visual Planning and Acting

no code implementations · 11 May 2019 · Angelina Wang, Thanard Kurutach, Kara Liu, Pieter Abbeel, Aviv Tamar

We further demonstrate our approach on learning to imagine and execute in 3 environments, the final of which is deformable rope manipulation on a PR2 robot.

Visual Tracking

A Technical and Normative Investigation of Social Bias Amplification

no code implementations · 1 Jan 2021 · Angelina Wang, Olga Russakovsky

The conversation around the fairness of machine learning models is growing and evolving.

Fairness

Directional Bias Amplification

1 code implementation · 24 Feb 2021 · Angelina Wang, Olga Russakovsky

We introduce and analyze a new, decoupled metric for measuring bias amplification, $\text{BiasAmp}_{\rightarrow}$ (Directional Bias Amplification).

Fairness
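
The released code is the reference; as a rough numpy sketch only, here is what a decoupled, directional bias-amplification measurement can look like in the attribute-to-task (A→T) direction, reconstructed from the description above. Variable names and the exact normalization are assumptions, not the authors' implementation:

import numpy as np

def biasamp_a_to_t(A, T, T_hat):
    # A: (n, num_attrs) 0/1 protected-attribute indicators
    # T: (n, num_tasks) 0/1 ground-truth task labels
    # T_hat: (n, num_tasks) 0/1 model predictions
    n_a, n_t = A.shape[1], T.shape[1]
    total = 0.0
    for a in range(n_a):
        for t in range(n_t):
            mask = A[:, a] == 1
            if not mask.any():
                continue
            # Does attribute a co-occur with task t more than chance in the data?
            y_at = (A[:, a] * T[:, t]).mean() > A[:, a].mean() * T[:, t].mean()
            # How much more often the model predicts t given a than the data shows.
            delta = T_hat[mask, t].mean() - T[mask, t].mean()
            # Count amplification of existing correlations as positive,
            # amplification against them as negative.
            total += delta if y_at else -delta
    return total / (n_a * n_t)

The "directional" part is that swapping the roles of A and T gives a separate task-to-attribute measurement, a distinction the decoupled metric is designed to expose.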

Understanding and Evaluating Racial Biases in Image Captioning

1 code implementation · ICCV 2021 · Dora Zhao, Angelina Wang, Olga Russakovsky

Image captioning is an important task for benchmarking visual reasoning and for enabling accessibility for people with vision impairments.

Benchmarking · Image Captioning · +1

Towards Intersectionality in Machine Learning: Including More Identities, Handling Underrepresentation, and Performing Evaluation

1 code implementation · 10 May 2022 · Angelina Wang, Vikram V. Ramaswamy, Olga Russakovsky

In this work, we grapple with questions that arise along three stages of the machine learning pipeline when incorporating intersectionality as multiple demographic attributes: (1) which demographic attributes to include as dataset labels, (2) how to handle the progressively smaller size of subgroups during model training, and (3) how to move beyond existing evaluation metrics when benchmarking model fairness for more subgroups.

Attribute · Benchmarking · +2
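
To make concrete why subgroup sizes shrink in stage (2) and what stage (3) evaluation can look like, here is a toy sketch of per-subgroup metrics over the cross-product of demographic attributes. The dictionary interface and the choice of accuracy as the metric are assumptions for illustration, not the paper's benchmark:

import itertools
import numpy as np

def subgroup_metrics(y_true, y_pred, attrs):
    # attrs: dict mapping attribute name -> (n,) array of group labels.
    # Intersectional subgroups are the cross-product of all attribute values.
    names = list(attrs)
    results = {}
    for combo in itertools.product(*(np.unique(attrs[n]) for n in names)):
        mask = np.ones(len(y_true), dtype=bool)
        for name, value in zip(names, combo):
            mask &= (attrs[name] == value)
        if not mask.any():
            continue  # an empty subgroup: underrepresentation in miniature
        acc = float((y_true[mask] == y_pred[mask]).mean())
        results[combo] = (acc, int(mask.sum()))  # report size alongside the metric
    return results

Each added attribute multiplies the number of subgroups, so per-subgroup sample sizes fall quickly; this is the progression the abstract's three stages are organized around.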

Gender Artifacts in Visual Datasets

no code implementations · ICCV 2023 · Nicole Meister, Dora Zhao, Angelina Wang, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky

Gender biases are known to exist within large-scale visual datasets and can be reflected or even amplified in downstream models.

Overwriting Pretrained Bias with Finetuning Data

1 code implementation · ICCV 2023 · Angelina Wang, Olga Russakovsky

Transfer learning allows the expressive features of models pretrained on large-scale datasets to be finetuned for target tasks with smaller, more domain-specific datasets.

Attribute · Transfer Learning
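
For context on the setup described above, this is one standard finetuning recipe in PyTorch (freeze the pretrained backbone, retrain a task-specific head); it is a common pattern, not necessarily the configuration studied in the paper:

import torch
import torchvision

num_classes = 10  # placeholder size for the smaller, domain-specific target task

# Expressive features from large-scale pretraining...
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False  # keep the pretrained features fixed

# ...finetuned for the target task via a fresh classification head.
model.fc = torch.nn.Linear(model.fc.in_features, num_classes)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

The paper's question is how much bias absorbed during pretraining survives this step relative to the biases present in the finetuning data.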

Measuring Implicit Bias in Explicitly Unbiased Large Language Models

no code implementations · 6 Feb 2024 · Xuechunzi Bai, Angelina Wang, Ilia Sucholutsky, Thomas L. Griffiths

Large language models (LLMs) can pass explicit bias tests but still harbor implicit biases, similar to humans who endorse egalitarian beliefs yet exhibit subtle biases.

Decision Making
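
The contrast above (passing explicit tests while harboring implicit associations) is measured with prompts. Here is a hypothetical sketch of a prompt-based association probe; the wording, the parsing of model output, and the stereotype map are all assumptions, not the paper's protocol:

def iat_style_prompt(group_a, group_b, words):
    # Ask the model to associate each word with one of two groups.
    return (f"For each word below, answer only '{group_a}' or '{group_b}'.\n"
            + "\n".join(f"- {w}" for w in words))

def association_score(assignments, stereotype_map):
    # assignments: dict word -> group the model chose (parsed from its output).
    # stereotype_map: dict word -> stereotypically associated group.
    # Fractions well above 0.5 suggest an implicit association even when the
    # model answers explicit bias questions in an egalitarian way.
    hits = sum(1 for w, g in assignments.items() if stereotype_map.get(w) == g)
    return hits / len(assignments)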

Measuring machine learning harms from stereotypes: requires understanding who is being harmed by which errors in what ways

no code implementations · 6 Feb 2024 · Angelina Wang, Xuechunzi Bai, Solon Barocas, Su Lin Blodgett

Certain stereotype-violating errors are more experientially harmful for men, potentially due to perceived threats to masculinity.

Fairness · Image Retrieval
