Search Results for author: Angelina Wang

Found 9 papers, 4 papers with code

Gender Artifacts in Visual Datasets

no code implementations · 18 Jun 2022 · Nicole Meister, Dora Zhao, Angelina Wang, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky

Gender biases are known to exist within large-scale visual datasets and can be reflected or even amplified in downstream models.

Towards Intersectionality in Machine Learning: Including More Identities, Handling Underrepresentation, and Performing Evaluation

1 code implementation · 10 May 2022 · Angelina Wang, Vikram V. Ramaswamy, Olga Russakovsky

In this work, we grapple with questions that arise along three stages of the machine learning pipeline when incorporating intersectionality as multiple demographic attributes: (1) which demographic attributes to include as dataset labels, (2) how to handle the progressively smaller size of subgroups during model training, and (3) how to move beyond existing evaluation metrics when benchmarking model fairness for more subgroups.

BIG-bench Machine Learning · Fairness

Understanding and Evaluating Racial Biases in Image Captioning

1 code implementation · ICCV 2021 · Dora Zhao, Angelina Wang, Olga Russakovsky

Image captioning is an important task for benchmarking visual reasoning and for enabling accessibility for people with vision impairments.

Image Captioning · Visual Reasoning

Directional Bias Amplification

1 code implementation · 24 Feb 2021 · Angelina Wang, Olga Russakovsky

We introduce and analyze a new, decoupled metric for measuring bias amplification, $\text{BiasAmp}_{\rightarrow}$ (Directional Bias Amplification).

Fairness
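The decoupled metric above separates the direction of amplification (attribute-to-task vs. task-to-attribute). As an illustration of the underlying idea only — this is a simplified sketch, not the paper's exact $\text{BiasAmp}_{\rightarrow}$ definition, and the function name and binary-label setup are assumptions — one direction (attribute → task) can be approximated by checking whether a model's predictions push the conditional task rate further in the direction of the correlation already present in the ground truth:

```python
import numpy as np

def bias_amp_attr_to_task(attr, task_gt, task_pred):
    """Simplified sketch of attribute->task bias amplification.

    attr:      (n,) binary attribute labels (e.g. membership in a group)
    task_gt:   (n,) binary ground-truth task labels
    task_pred: (n,) binary model predictions for the task

    Returns a positive value when predictions exaggerate the
    ground-truth correlation between attribute and task, and a
    negative value when they weaken it.
    """
    attr = np.asarray(attr, dtype=bool)
    task_gt = np.asarray(task_gt, dtype=bool)
    task_pred = np.asarray(task_pred, dtype=bool)

    # Direction of the existing correlation in the ground truth:
    # +1 if the task co-occurs with the attribute more than its base rate.
    y = 1.0 if task_gt[attr].mean() > task_gt.mean() else -1.0

    # Change in the conditional task rate from ground truth to predictions.
    delta = task_pred[attr].mean() - task_gt[attr].mean()

    # Positive when the shift moves further in the biased direction.
    return y * delta
```

For example, if the task co-occurs with the attribute in 3 of 4 ground-truth positives but the model predicts the task for all 4 attribute-positive examples, the sketch reports a positive amplification; predicting it for only 2 reports a negative (de-amplifying) value.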

A Technical and Normative Investigation of Social Bias Amplification

no code implementations · 1 Jan 2021 · Angelina Wang, Olga Russakovsky

The conversation around the fairness of machine learning models is growing and evolving.

Fairness

Learning Robotic Manipulation through Visual Planning and Acting

no code implementations · 11 May 2019 · Angelina Wang, Thanard Kurutach, Kara Liu, Pieter Abbeel, Aviv Tamar

We further demonstrate our approach on learning to imagine and execute in 3 environments, the final of which is deformable rope manipulation on a PR2 robot.

Visual Tracking

Safer Classification by Synthesis

no code implementations · 22 Nov 2017 · William Wang, Angelina Wang, Aviv Tamar, Xi Chen, Pieter Abbeel

We posit that a generative approach is the natural remedy for this problem, and propose a method for classification using generative models.

Classification · General Classification
