Search Results for author: Maria De-Arteaga

Found 27 papers, 5 papers with code

Diverse, but Divisive: LLMs Can Exaggerate Gender Differences in Opinion Related to Harms of Misinformation

no code implementations29 Jan 2024 Terrence Neumann, Sooyong Lee, Maria De-Arteaga, Sina Fazelpour, Matthew Lease

We pose two central questions: (1) To what extent do prompts with explicit gender references reflect gender differences in opinion in the United States on topics of social relevance?

Fact Checking · Language Modelling · +2

A Critical Survey on Fairness Benefits of XAI

no code implementations15 Oct 2023 Luca Deck, Jakob Schoeffer, Maria De-Arteaga, Niklas Kühl

In this critical survey, we analyze typical claims on the relationship between explainable AI (XAI) and fairness to disentangle the multidimensional relationship between these two concepts.

Fairness

Mitigating Label Bias via Decoupled Confident Learning

no code implementations18 Jul 2023 Yunyi Li, Maria De-Arteaga, Maytal Saar-Tsechansky

While the presence of labeling bias has been discussed conceptually, there is a lack of methodologies to address this problem.

Fairness · Hate Speech Detection

Same Same, But Different: Conditional Multi-Task Learning for Demographic-Specific Toxicity Detection

1 code implementation14 Feb 2023 Soumyajit Gupta, Sooyong Lee, Maria De-Arteaga, Matthew Lease

We propose framing toxicity detection as multi-task learning (MTL), allowing a model to specialize on the relationships that are relevant to each demographic group while also leveraging shared properties across groups.

Multi-Task Learning
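
As a rough illustration of the multi-task framing described in this entry, the sketch below pairs a shared text encoder with a separate toxicity head per demographic group, so each head can specialize while the encoder captures properties shared across groups. The encoder architecture, group names, and dimensions are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class GroupConditionedToxicityModel(nn.Module):
    """Shared encoder with one toxicity head per demographic group (illustrative)."""

    def __init__(self, vocab_size=30000, embed_dim=128, hidden_dim=64,
                 groups=("group_a", "group_b", "group_c")):
        super().__init__()
        # Shared layers learn properties common to all groups.
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim)
        self.shared = nn.Sequential(nn.Linear(embed_dim, hidden_dim), nn.ReLU())
        # One head per group specializes on group-specific relationships.
        self.heads = nn.ModuleDict({g: nn.Linear(hidden_dim, 1) for g in groups})

    def forward(self, token_ids, group):
        h = self.shared(self.embedding(token_ids))
        return self.heads[group](h).squeeze(-1)  # toxicity logit

model = GroupConditionedToxicityModel()
tokens = torch.randint(0, 30000, (8, 20))        # batch of 8 toy "texts"
logits = model(tokens, group="group_a")          # route through one group's head
loss = nn.BCEWithLogitsLoss()(logits, torch.rand(8).round())
loss.backward()                                  # updates both the head and the shared encoder
```

Routing each example through the head for its relevant group is one simple way to realize the conditional specialization the abstract describes.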

Learning Complementary Policies for Human-AI Teams

no code implementations6 Feb 2023 Ruijiang Gao, Maytal Saar-Tsechansky, Maria De-Arteaga, Ligong Han, Wei Sun, Min Kyung Lee, Matthew Lease

We then extend our approach to leverage opportunities and mitigate risks that arise in important contexts in practice: 1) when a team is composed of multiple humans with differential and potentially complementary abilities, 2) when the observational data includes consistent deterministic actions, and 3) when the covariate distribution of future decisions differs from that in the historical data.

Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making

no code implementations23 Sep 2022 Jakob Schoeffer, Maria De-Arteaga, Niklas Kuehl

In this work, we study the effects of feature-based explanations on distributive fairness of AI-assisted decisions, specifically focusing on the task of predicting occupations from short textual bios.

Attribute · Decision Making · +1

Imputation Strategies Under Clinical Presence: Impact on Algorithmic Fairness

1 code implementation13 Aug 2022 Vincent Jeanselme, Maria De-Arteaga, Zhe Zhang, Jessica Barrett, Brian Tom

Machine learning risks reinforcing biases present in data, and, as we argue in this work, in what is absent from data.

Fairness · Imputation

Toward Supporting Perceptual Complementarity in Human-AI Collaboration via Reflection on Unobservables

no code implementations28 Jul 2022 Kenneth Holstein, Maria De-Arteaga, Lakshmi Tumati, Yanghuidi Cheng

Our findings indicate that presenting prompts about unobservables can change how humans integrate model outputs and unobservables, but does not necessarily lead to improved performance.

Algorithmic Fairness in Business Analytics: Directions for Research and Practice

no code implementations22 Jul 2022 Maria De-Arteaga, Stefan Feuerriegel, Maytal Saar-Tsechansky

The extensive adoption of business analytics (BA) has brought financial gains and increased efficiencies.

Fairness

More Data Can Lead Us Astray: Active Data Acquisition in the Presence of Label Bias

no code implementations15 Jul 2022 Yunyi Li, Maria De-Arteaga, Maytal Saar-Tsechansky

We then empirically show that, when overlooking label bias, collecting more data can aggravate bias, and imposing fairness constraints that rely on the observed labels in the data collection process may not address the problem.

Active Learning · Fairness

Doubting AI Predictions: Influence-Driven Second Opinion Recommendation

no code implementations29 Apr 2022 Maria De-Arteaga, Alexandra Chouldechova, Artur Dubrawski

Effective human-AI collaboration requires a system design that provides humans with meaningful ways to make sense of and critically evaluate algorithmic recommendations.

Justice in Misinformation Detection Systems: An Analysis of Algorithms, Stakeholders, and Potential Harms

no code implementations28 Apr 2022 Terrence Neumann, Maria De-Arteaga, Sina Fazelpour

Faced with the scale and surge of misinformation on social media, many platforms and fact-checking organizations have turned to algorithms for automating key parts of misinformation detection pipelines.

Fact Checking · Fairness · +1

Social Norm Bias: Residual Harms of Fairness-Aware Algorithms

no code implementations25 Aug 2021 Myra Cheng, Maria De-Arteaga, Lester Mackey, Adam Tauman Kalai

Many modern machine learning algorithms mitigate bias by enforcing fairness constraints across coarsely-defined groups related to a sensitive attribute like gender or race.

Attribute · Decision Making · +1

The effect of differential victim crime reporting on predictive policing systems

1 code implementation30 Jan 2021 Nil-Jana Akpinar, Maria De-Arteaga, Alexandra Chouldechova

Our analysis is based on a simulation patterned after district-level victimization and crime reporting survey data for Bogotá, Colombia.

Fairness

Leveraging Expert Consistency to Improve Algorithmic Decision Support

no code implementations24 Jan 2021 Maria De-Arteaga, Vincent Jeanselme, Artur Dubrawski, Alexandra Chouldechova

However, there is frequently a gap between decision objectives and what is captured in the observed outcomes used as labels to train ML models.

BIG-bench Machine Learning

Killings of social leaders in the Colombian post-conflict: Data analysis for investigative journalism

1 code implementation19 Jun 2019 Maria De-Arteaga, Benedikt Boecking

After the peace agreement of 2016 with FARC, the killings of social leaders have emerged as an important post-conflict challenge for Colombia.

Applications · Computers and Society

What's in a Name? Reducing Bias in Bios without Access to Protected Attributes

no code implementations NAACL 2019 Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky, Adam Tauman Kalai

In the context of mitigating bias in occupation classification, we propose a method for discouraging correlation between the predicted probability of an individual's true occupation and a word embedding of their name.

Word Embeddings
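
As a hedged sketch of the idea summarized in this entry, the snippet below adds a penalty that discourages covariance between the predicted probability of an individual's true occupation and the dimensions of a name embedding. The exact loss form, penalty weight, and embedding source are assumptions for illustration, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def decorrelation_penalty(true_class_probs, name_embeddings):
    """Penalize covariance between the predicted probability of the true
    occupation (one scalar per example) and each name-embedding dimension."""
    p = true_class_probs - true_class_probs.mean()
    e = name_embeddings - name_embeddings.mean(dim=0, keepdim=True)
    cov = (p.unsqueeze(1) * e).mean(dim=0)   # covariance with each embedding dimension
    return cov.pow(2).sum()

# Toy usage with random stand-ins for classifier outputs and name embeddings.
logits = torch.randn(32, 10, requires_grad=True)   # 32 bios, 10 occupations
labels = torch.randint(0, 10, (32,))
name_emb = torch.randn(32, 50)                     # e.g. 50-d name embeddings

probs = F.softmax(logits, dim=1)
true_probs = probs[torch.arange(32), labels]
loss = F.cross_entropy(logits, labels) + 1.0 * decorrelation_penalty(true_probs, name_emb)
loss.backward()
```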

Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting

4 code implementations27 Jan 2019 Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Adam Tauman Kalai

We present a large-scale study of gender bias in occupation classification, a task where the use of machine learning may lead to negative outcomes on people's lives.

Classification · General Classification

Proceedings of NeurIPS 2018 Workshop on Machine Learning for the Developing World: Achieving Sustainable Impact

no code implementations21 Dec 2018 Maria De-Arteaga, Amanda Coston, William Herlands

This is the Proceedings of the NeurIPS 2018 Workshop on Machine Learning for the Developing World: Achieving Sustainable Impact, held in Montreal, Canada, on December 8, 2018.

BIG-bench Machine Learning

Learning under selective labels in the presence of expert consistency

no code implementations2 Jul 2018 Maria De-Arteaga, Artur Dubrawski, Alexandra Chouldechova

We explore the problem of learning under selective labels in the context of algorithm-assisted decision making.

Data Augmentation · Decision Making · +1

Proceedings of NIPS 2017 Workshop on Machine Learning for the Developing World

no code implementations27 Nov 2017 Maria De-Arteaga, William Herlands

This is the Proceedings of the NIPS 2017 Workshop on Machine Learning for the Developing World, held in Long Beach, California, USA, on December 8, 2017.

BIG-bench Machine Learning

Canonical Autocorrelation Analysis

no code implementations19 Nov 2015 Maria De-Arteaga, Artur Dubrawski, Peter Huggins

We present an extension of sparse Canonical Correlation Analysis (CCA) designed for finding multiple-to-multiple linear correlations within a single set of variables.

Anomaly Detection

Lass-0: sparse non-convex regression by local search

no code implementations13 Nov 2015 William Herlands, Maria De-Arteaga, Daniel Neill, Artur Dubrawski

We compute approximate solutions to L0 regularized linear regression using L1 regularization, also known as the Lasso, as an initialization step.

regression
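
A minimal sketch of the initialization idea described in this entry: fit a Lasso to obtain a candidate support, then refit ordinary least squares on that support as an approximate L0-regularized solution. The subsequent local-search refinement from the paper is omitted, and the hyperparameters (`alpha`, `k`) are placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def l0_approx_via_lasso(X, y, alpha=0.1, k=5):
    """Approximate L0-regularized regression: Lasso initialization,
    keep the k largest coefficients, refit OLS on that support."""
    lasso = Lasso(alpha=alpha).fit(X, y)
    support = np.argsort(np.abs(lasso.coef_))[-k:]        # candidate support from the Lasso fit
    support = support[np.abs(lasso.coef_[support]) > 0]   # drop coefficients that are exactly zero
    beta = np.zeros(X.shape[1])
    if support.size:
        beta[support] = LinearRegression().fit(X[:, support], y).coef_
    return beta

# Toy data: only 3 of 20 features are truly active.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = X[:, 0] - 2 * X[:, 3] + 0.5 * X[:, 7] + 0.1 * rng.normal(size=100)
print(l0_approx_via_lasso(X, y, k=3).round(2))
```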
