Search Results for author: Maria De-Arteaga

Found 25 papers, 5 papers with code

Mitigating Label Bias via Decoupled Confident Learning

no code implementations18 Jul 2023 Yunyi Li, Maria De-Arteaga, Maytal Saar-Tsechansky

While the presence of labeling bias has been discussed conceptually, there is a lack of methodologies to address this problem.

Fairness Hate Speech Detection

Same Same, But Different: Conditional Multi-Task Learning for Demographic-Specific Toxicity Detection

1 code implementation14 Feb 2023 Soumyajit Gupta, Sooyong Lee, Maria De-Arteaga, Matthew Lease

We propose framing toxicity detection as multi-task learning (MTL), allowing a model to specialize on the relationships that are relevant to each demographic group while also leveraging shared properties across groups.

Multi-Task Learning
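The multi-task framing described above can be pictured as a shared encoder with one prediction head per demographic group. The sketch below is a minimal hypothetical illustration of that idea (the class name, dimensions, and inputs are invented here), not the paper's implementation:

```python
import torch
import torch.nn as nn

class GroupConditionalMTL(nn.Module):
    """Hypothetical sketch: a shared encoder captures properties common to
    all groups, while one toxicity head per demographic group specializes
    on group-specific relationships."""

    def __init__(self, input_dim: int, hidden_dim: int, num_groups: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
        )
        # One binary toxicity logit per demographic group.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, 1) for _ in range(num_groups)]
        )

    def forward(self, x: torch.Tensor, group: torch.Tensor) -> torch.Tensor:
        h = self.encoder(x)
        # Shape (batch, num_groups): every head scores every example.
        logits = torch.stack([head(h).squeeze(-1) for head in self.heads], dim=1)
        # Keep only the logit from the head matching each example's group id.
        return logits.gather(1, group.unsqueeze(1)).squeeze(1)

model = GroupConditionalMTL(input_dim=16, hidden_dim=8, num_groups=3)
x = torch.randn(4, 16)
group = torch.tensor([0, 2, 1, 0])
out = model(x, group)
print(out.shape)  # torch.Size([4])
```

During training, the shared encoder receives gradients from all groups while each head is updated only on its own group's examples, which is one standard way hard-parameter-sharing MTL balances specialization with shared structure.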

Learning Complementary Policies for Human-AI Teams

no code implementations6 Feb 2023 Ruijiang Gao, Maytal Saar-Tsechansky, Maria De-Arteaga, Ligong Han, Wei Sun, Min Kyung Lee, Matthew Lease

We then extend our approach to leverage opportunities and mitigate risks that arise in important contexts in practice: 1) when a team is composed of multiple humans with differential and potentially complementary abilities, 2) when the observational data includes consistent deterministic actions, and 3) when the covariate distribution of future decisions differs from that in the historical data.

On Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making

no code implementations23 Sep 2022 Jakob Schoeffer, Maria De-Arteaga, Niklas Kuehl

In this paper, we conduct an empirical study to examine the relationship between feature-based explanations and distributive fairness, mediated by human perceptions and reliance on AI recommendations.

Decision Making Fairness

Imputation Strategies Under Clinical Presence: Impact on Algorithmic Fairness

1 code implementation13 Aug 2022 Vincent Jeanselme, Maria De-Arteaga, Zhe Zhang, Jessica Barrett, Brian Tom

Machine learning risks reinforcing biases present in data, and, as we argue in this work, in what is absent from data.

Fairness Imputation

Toward Supporting Perceptual Complementarity in Human-AI Collaboration via Reflection on Unobservables

no code implementations28 Jul 2022 Kenneth Holstein, Maria De-Arteaga, Lakshmi Tumati, Yanghuidi Cheng

Our findings indicate that presenting prompts about unobservables can change how humans integrate model outputs and unobservables, but do not necessarily lead to improved performance.

Algorithmic Fairness in Business Analytics: Directions for Research and Practice

no code implementations22 Jul 2022 Maria De-Arteaga, Stefan Feuerriegel, Maytal Saar-Tsechansky

The extensive adoption of business analytics (BA) has brought financial gains and increased efficiencies.


More Data Can Lead Us Astray: Active Data Acquisition in the Presence of Label Bias

no code implementations15 Jul 2022 Yunyi Li, Maria De-Arteaga, Maytal Saar-Tsechansky

We then empirically show that, when overlooking label bias, collecting more data can aggravate bias, and imposing fairness constraints that rely on the observed labels in the data collection process may not address the problem.

Active Learning Fairness

Doubting AI Predictions: Influence-Driven Second Opinion Recommendation

no code implementations29 Apr 2022 Maria De-Arteaga, Alexandra Chouldechova, Artur Dubrawski

Effective human-AI collaboration requires a system design that provides humans with meaningful ways to make sense of and critically evaluate algorithmic recommendations.

Justice in Misinformation Detection Systems: An Analysis of Algorithms, Stakeholders, and Potential Harms

no code implementations28 Apr 2022 Terrence Neumann, Maria De-Arteaga, Sina Fazelpour

Faced with the scale and surge of misinformation on social media, many platforms and fact-checking organizations have turned to algorithms for automating key parts of misinformation detection pipelines.

Fact Checking Fairness +1

Social Norm Bias: Residual Harms of Fairness-Aware Algorithms

no code implementations25 Aug 2021 Myra Cheng, Maria De-Arteaga, Lester Mackey, Adam Tauman Kalai

Many modern machine learning algorithms mitigate bias by enforcing fairness constraints across coarsely-defined groups related to a sensitive attribute like gender or race.

Decision Making Fairness

The effect of differential victim crime reporting on predictive policing systems

1 code implementation30 Jan 2021 Nil-Jana Akpinar, Maria De-Arteaga, Alexandra Chouldechova

Our analysis is based on a simulation patterned after district-level victimization and crime reporting survey data for Bogotá, Colombia.


Leveraging Expert Consistency to Improve Algorithmic Decision Support

no code implementations24 Jan 2021 Maria De-Arteaga, Vincent Jeanselme, Artur Dubrawski, Alexandra Chouldechova

However, there is frequently a gap between decision objectives and what is captured in the observed outcomes used as labels to train ML models.

BIG-bench Machine Learning

Killings of social leaders in the Colombian post-conflict: Data analysis for investigative journalism

1 code implementation19 Jun 2019 Maria De-Arteaga, Benedikt Boecking

After the peace agreement of 2016 with FARC, the killings of social leaders have emerged as an important post-conflict challenge for Colombia.

Applications Computers and Society

What's in a Name? Reducing Bias in Bios without Access to Protected Attributes

no code implementations NAACL 2019 Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky, Adam Tauman Kalai

In the context of mitigating bias in occupation classification, we propose a method for discouraging correlation between the predicted probability of an individual's true occupation and a word embedding of their name.

Word Embeddings
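The idea of discouraging correlation between a predicted probability and a name embedding can be sketched as a covariance penalty added to the task loss. The function below is a hypothetical illustration of that general idea (the function name and shapes are invented), not the paper's exact objective:

```python
import torch

def decorrelation_penalty(probs: torch.Tensor, name_emb: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch: penalize the empirical covariance between the
    predicted probability of the true occupation (shape [n]) and each
    dimension of a word embedding of the individual's name (shape [n, d])."""
    p = probs - probs.mean()
    e = name_emb - name_emb.mean(dim=0)
    # Covariance between the probability and each embedding dimension.
    cov = (p.unsqueeze(1) * e).mean(dim=0)
    return cov.pow(2).sum()

probs = torch.rand(32)          # predicted probabilities for a batch
name_emb = torch.randn(32, 50)  # 50-dim name embeddings for the same batch
penalty = decorrelation_penalty(probs, name_emb)
```

In training, such a term would be weighted and added to the classification loss (e.g. `loss = task_loss + lam * penalty`), pushing the model's confidence to carry no linear signal about the name embedding.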

Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting

4 code implementations27 Jan 2019 Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Adam Tauman Kalai

We present a large-scale study of gender bias in occupation classification, a task where the use of machine learning may lead to negative outcomes on people's lives.

Classification General Classification

Proceedings of NeurIPS 2018 Workshop on Machine Learning for the Developing World: Achieving Sustainable Impact

no code implementations21 Dec 2018 Maria De-Arteaga, Amanda Coston, William Herlands

This is the Proceedings of the NeurIPS 2018 Workshop on Machine Learning for the Developing World: Achieving Sustainable Impact, held in Montreal, Canada on December 8, 2018.

BIG-bench Machine Learning

Learning under selective labels in the presence of expert consistency

no code implementations2 Jul 2018 Maria De-Arteaga, Artur Dubrawski, Alexandra Chouldechova

We explore the problem of learning under selective labels in the context of algorithm-assisted decision making.

Data Augmentation Decision Making +1

Proceedings of NIPS 2017 Workshop on Machine Learning for the Developing World

no code implementations27 Nov 2017 Maria De-Arteaga, William Herlands

This is the Proceedings of the NIPS 2017 Workshop on Machine Learning for the Developing World, held in Long Beach, California, USA on December 8, 2017.

BIG-bench Machine Learning

Canonical Autocorrelation Analysis

no code implementations19 Nov 2015 Maria De-Arteaga, Artur Dubrawski, Peter Huggins

We present an extension of sparse Canonical Correlation Analysis (CCA) designed for finding multiple-to-multiple linear correlations within a single set of variables.

Anomaly Detection

Lass-0: sparse non-convex regression by local search

no code implementations13 Nov 2015 William Herlands, Maria De-Arteaga, Daniel Neill, Artur Dubrawski

We compute approximate solutions to L0 regularized linear regression using L1 regularization, also known as the Lasso, as an initialization step.
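The abstract's two-stage recipe, a Lasso solution as initialization followed by local search toward an L0-constrained fit, can be sketched as follows. This is a hypothetical simplification (the function name, swap-based search, and synthetic data are invented here), not the paper's algorithm:

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_init_local_search(X, y, k, alpha=0.1, max_rounds=20):
    """Hypothetical sketch: take the k largest-magnitude Lasso coefficients
    as an initial support, then greedily swap single features in and out,
    refitting least squares on each candidate support of size k."""
    coef = Lasso(alpha=alpha).fit(X, y).coef_
    support = list(np.argsort(-np.abs(coef))[:k])

    def sse(s):
        beta, *_ = np.linalg.lstsq(X[:, s], y, rcond=None)
        return float(np.sum((y - X[:, s] @ beta) ** 2))

    best = sse(support)
    for _ in range(max_rounds):
        improved = False
        for i in range(k):
            for j in range(X.shape[1]):
                if j in support:
                    continue
                cand = support[:i] + [j] + support[i + 1:]
                err = sse(cand)
                if err < best:
                    best, support, improved = err, cand, True
        if not improved:  # local optimum for single-feature swaps
            break
    return sorted(support), best

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y = X[:, 1] + 2 * X[:, 4] + 0.01 * rng.normal(size=100)
support, err = lasso_init_local_search(X, y, k=2)
print(support)  # should recover the planted support [1, 4]
```

The Lasso step gives a cheap, usually near-correct starting support; the local search then corrects the shrinkage-induced selection errors that L1 regularization can introduce relative to the exact L0 objective.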

