Search Results for author: Solon Barocas

Found 14 papers, 5 papers with code

Measuring machine learning harms from stereotypes: requires understanding who is being harmed by which errors in what ways

no code implementations · 6 Feb 2024 · Angelina Wang, Xuechunzi Bai, Solon Barocas, Su Lin Blodgett

However, certain stereotype-violating errors are more experientially harmful for men, potentially due to perceived threats to masculinity.

Fairness · Image Retrieval

On the Actionability of Outcome Prediction

no code implementations · 8 Sep 2023 · Lydia T. Liu, Solon Barocas, Jon Kleinberg, Karen Levy

Through a simple model encompassing actions, latent states, and measurements, we demonstrate that pure outcome prediction rarely results in the most effective policy for taking actions, even when combined with other measurements.

Multi-Target Multiplicity: Flexibility and Fairness in Target Specification under Resource Constraints

1 code implementation · 23 Jun 2023 · Jamelle Watson-Daniels, Solon Barocas, Jake M. Hofman, Alexandra Chouldechova

Along the way, we refine the study of single-target multiplicity by introducing notions of multiplicity that respect resource constraints -- a feature of many real-world tasks that is not captured by existing notions of predictive multiplicity.

Decision Making · Fairness

Informational Diversity and Affinity Bias in Team Growth Dynamics

no code implementations · 28 Jan 2023 · Hoda Heidari, Solon Barocas, Jon Kleinberg, Karen Levy

Prior work has provided strong evidence that, within organizational settings, teams that bring a diversity of information and perspectives to a task are more effective than teams that do not.

Diversity

Mimetic Models: Ethical Implications of AI that Acts Like You

no code implementations · 19 Jul 2022 · Reid McIlroy-Young, Jon Kleinberg, Siddhartha Sen, Solon Barocas, Ashton Anderson

An emerging theme in artificial intelligence research is the creation of models to simulate the decisions and behavior of specific people, in domains including game-playing, text generation, and artistic expression.

Text Generation

REAL ML: Recognizing, Exploring, and Articulating Limitations of Machine Learning Research

1 code implementation · 5 May 2022 · Jessie J. Smith, Saleema Amershi, Solon Barocas, Hanna Wallach, Jennifer Wortman Vaughan

Transparency around limitations can improve the scientific rigor of research, help ensure appropriate interpretation of research findings, and make research claims more credible.

BIG-bench Machine Learning

An Uncommon Task: Participatory Design in Legal AI

no code implementations · 8 Mar 2022 · Fernando Delgado, Solon Barocas, Karen Levy

Despite growing calls for participation in AI design, there are to date few empirical studies of what these processes look like and how they can be structured for meaningful engagement with domain experts.

Text Retrieval

Computer Vision and Conflicting Values: Describing People with Automated Alt Text

no code implementations · 26 May 2021 · Margot Hanley, Solon Barocas, Karen Levy, Shiri Azenkot, Helen Nissenbaum

In this paper, we investigate the ethical dilemmas faced by companies that have adopted the use of computer vision for producing alt text: textual descriptions of images for blind and low vision people. We use Facebook's automatic alt text tool as our primary case study.

Designing Disaggregated Evaluations of AI Systems: Choices, Considerations, and Tradeoffs

no code implementations · 10 Mar 2021 · Solon Barocas, Anhong Guo, Ece Kamar, Jacquelyn Krones, Meredith Ringel Morris, Jennifer Wortman Vaughan, Duncan Wadsworth, Hanna Wallach

Disaggregated evaluations of AI systems, in which system performance is assessed and reported separately for different groups of people, are conceptually simple.

Language (Technology) is Power: A Critical Survey of "Bias" in NLP

no code implementations · ACL 2020 · Su Lin Blodgett, Solon Barocas, Hal Daumé III, Hanna Wallach

We survey 146 papers analyzing "bias" in NLP systems, finding that their motivations are often vague, inconsistent, and lacking in normative reasoning, despite the fact that analyzing "bias" is an inherently normative process.

Language (Technology) is Power: A Critical Survey of "Bias" in NLP

1 code implementation · 28 May 2020 · Su Lin Blodgett, Solon Barocas, Hal Daumé III, Hanna Wallach

We survey 146 papers analyzing "bias" in NLP systems, finding that their motivations are often vague, inconsistent, and lacking in normative reasoning, despite the fact that analyzing "bias" is an inherently normative process.
