no code implementations • 18 Jul 2023 • Yunyi Li, Maria De-Arteaga, Maytal Saar-Tsechansky
While the presence of labeling bias has been discussed conceptually, there is a lack of methodologies to address this problem.
no code implementations • 16 Feb 2023 • Mohammad Tahaei, Marios Constantinides, Daniele Quercia, Sean Kennedy, Michael Muller, Simone Stumpf, Q. Vera Liao, Ricardo Baeza-Yates, Lora Aroyo, Jess Holbrook, Ewa Luger, Michael Madaio, Ilana Golbin Blumenfeld, Maria De-Arteaga, Jessica Vitak, Alexandra Olteanu
In recent years, the CHI community has seen significant growth in research on Human-Centered Responsible Artificial Intelligence.
1 code implementation • 14 Feb 2023 • Soumyajit Gupta, Sooyong Lee, Maria De-Arteaga, Matthew Lease
We propose framing toxicity detection as multi-task learning (MTL), allowing a model to specialize on the relationships that are relevant to each demographic group while also leveraging shared properties across groups.
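A minimal sketch of this MTL framing, assuming a shared encoder with lightweight per-group heads; the layer sizes, group names, and bag-of-features input are illustrative assumptions, not the architecture used in the paper:

```python
# Hedged sketch: shared encoder learns properties common to all groups,
# while per-group heads specialize on group-relevant relationships.
import torch
import torch.nn as nn

class MultiTaskToxicityModel(nn.Module):
    def __init__(self, input_dim, hidden_dim, groups):
        super().__init__()
        # Representation shared across demographic groups
        self.shared = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        # One binary toxicity head per group
        self.heads = nn.ModuleDict({g: nn.Linear(hidden_dim, 1) for g in groups})

    def forward(self, x, group):
        return self.heads[group](self.shared(x)).squeeze(-1)

model = MultiTaskToxicityModel(input_dim=300, hidden_dim=64,
                               groups=["group_a", "group_b"])
x = torch.randn(8, 300)                      # toy feature vectors
logits = model(x, group="group_a")           # group-specific toxicity logits
loss = nn.functional.binary_cross_entropy_with_logits(
    logits, torch.randint(0, 2, (8,)).float())
```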
no code implementations • 6 Feb 2023 • Ruijiang Gao, Maytal Saar-Tsechansky, Maria De-Arteaga, Ligong Han, Wei Sun, Min Kyung Lee, Matthew Lease
We then extend our approach to leverage opportunities and mitigate risks that arise in important contexts in practice: 1) when a team is composed of multiple humans with differential and potentially complementary abilities, 2) when the observational data includes consistent deterministic actions, and 3) when the covariate distribution of future decisions differs from that in the historical data.
no code implementations • 23 Sep 2022 • Jakob Schoeffer, Maria De-Arteaga, Niklas Kuehl
In this paper, we conduct an empirical study to examine the relationship between feature-based explanations and distributive fairness, mediated by human perceptions and reliance on AI recommendations.
1 code implementation • 13 Aug 2022 • Vincent Jeanselme, Maria De-Arteaga, Zhe Zhang, Jessica Barrett, Brian Tom
Machine learning risks reinforcing biases present in data, and, as we argue in this work, in what is absent from data.
no code implementations • 28 Jul 2022 • Kenneth Holstein, Maria De-Arteaga, Lakshmi Tumati, Yanghuidi Cheng
Our findings indicate that presenting prompts about unobservables can change how humans integrate model outputs and unobservables, but does not necessarily lead to improved performance.
no code implementations • 22 Jul 2022 • Maria De-Arteaga, Stefan Feuerriegel, Maytal Saar-Tsechansky
The extensive adoption of business analytics (BA) has brought financial gains and increased efficiencies.
no code implementations • 15 Jul 2022 • Yunyi Li, Maria De-Arteaga, Maytal Saar-Tsechansky
We then empirically show that, when overlooking label bias, collecting more data can aggravate bias, and imposing fairness constraints that rely on the observed labels in the data collection process may not address the problem.
no code implementations • 29 Apr 2022 • Maria De-Arteaga, Alexandra Chouldechova, Artur Dubrawski
Effective human-AI collaboration requires a system design that provides humans with meaningful ways to make sense of and critically evaluate algorithmic recommendations.
no code implementations • 28 Apr 2022 • Terrence Neumann, Maria De-Arteaga, Sina Fazelpour
Faced with the scale and surge of misinformation on social media, many platforms and fact-checking organizations have turned to algorithms for automating key parts of misinformation detection pipelines.
no code implementations • 27 Apr 2022 • Jakob Schoeffer, Maria De-Arteaga, Niklas Kuehl
It is known that recommendations of AI-based systems can be incorrect or unfair.
no code implementations • 25 Aug 2021 • Myra Cheng, Maria De-Arteaga, Lester Mackey, Adam Tauman Kalai
Many modern machine learning algorithms mitigate bias by enforcing fairness constraints across coarsely defined groups related to a sensitive attribute like gender or race.
1 code implementation • 30 Jan 2021 • Nil-Jana Akpinar, Maria De-Arteaga, Alexandra Chouldechova
Our analysis is based on a simulation patterned after district-level victimization and crime reporting survey data for Bogotá, Colombia.
no code implementations • 24 Jan 2021 • Maria De-Arteaga, Vincent Jeanselme, Artur Dubrawski, Alexandra Chouldechova
However, there is frequently a gap between decision objectives and what is captured in the observed outcomes used as labels to train ML models.
1 code implementation • 19 Jun 2019 • Maria De-Arteaga, Benedikt Boecking
After the peace agreement of 2016 with FARC, the killings of social leaders have emerged as an important post-conflict challenge for Colombia.
Applications • Computers and Society
no code implementations • NAACL 2019 • Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky, Adam Tauman Kalai
In the context of mitigating bias in occupation classification, we propose a method for discouraging correlation between the predicted probability of an individual's true occupation and a word embedding of their name.
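One way such a decorrelation objective can be sketched is as a covariance penalty added to the classification loss; the function below is a hedged illustration of this general idea, and the exact constraint and its weighting in the paper may differ:

```python
# Hedged sketch: penalize covariance between the predicted probability of the
# true occupation and each dimension of a name embedding.
import torch

def covariance_penalty(true_class_probs, name_embeddings):
    """true_class_probs: (batch,) predicted probability of each person's true occupation.
    name_embeddings: (batch, d) word embedding of each person's name."""
    p = true_class_probs - true_class_probs.mean()
    e = name_embeddings - name_embeddings.mean(dim=0, keepdim=True)
    cov = (p.unsqueeze(1) * e).mean(dim=0)   # covariance with each embedding dimension
    return cov.pow(2).sum()

# total_loss = cross_entropy_loss + lam * covariance_penalty(probs, name_emb)
```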
4 code implementations • 27 Jan 2019 • Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Adam Tauman Kalai
We present a large-scale study of gender bias in occupation classification, a task where the use of machine learning may lead to negative outcomes on people's lives.
no code implementations • 21 Dec 2018 • Maria De-Arteaga, Amanda Coston, William Herlands
This is the Proceedings of the NeurIPS 2018 Workshop on Machine Learning for the Developing World: Achieving Sustainable Impact, held in Montreal, Canada on December 8, 2018.
no code implementations • 20 Dec 2018 • Nathaniel Swinger, Maria De-Arteaga, Neil Thomas Heffernan IV, Mark DM Leiserson, Adam Tauman Kalai
The inputs to our algorithm are a list of target tokens, e.g., names, and a word embedding.
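As a rough illustration of what can be done with just those two inputs, the hypothetical helper below clusters the target tokens in the embedding space and surfaces vocabulary words most associated with each cluster; the clustering step and association score are assumptions for illustration, not the paper's algorithm:

```python
# Hedged sketch: group target tokens (e.g., names) by their embeddings, then
# rank vocabulary words by alignment with the axis separating two clusters.
import numpy as np
from sklearn.cluster import KMeans

def biased_associations(embedding, vocab, target_tokens, n_clusters=2, top_k=10):
    """embedding: dict token -> vector; vocab: candidate words; target_tokens: names."""
    X = np.stack([embedding[t] for t in target_tokens])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    centroids = [X[labels == c].mean(axis=0) for c in range(n_clusters)]
    direction = centroids[0] - centroids[1]            # axis separating the two clusters
    scores = {w: float(np.dot(embedding[w], direction)) for w in vocab}
    ranked = sorted(scores, key=scores.get)
    return ranked[:top_k], ranked[-top_k:]             # words pulled toward each cluster
```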
no code implementations • 2 Jul 2018 • Maria De-Arteaga, Artur Dubrawski, Alexandra Chouldechova
We explore the problem of learning under selective labels in the context of algorithm-assisted decision making.
no code implementations • 27 Nov 2017 • Maria De-Arteaga, William Herlands
This is the Proceedings of the NIPS 2017 Workshop on Machine Learning for the Developing World, held in Long Beach, California, USA on December 8, 2017.
no code implementations • 19 Nov 2015 • Maria De-Arteaga, Artur Dubrawski, Peter Huggins
We present an extension of sparse Canonical Correlation Analysis (CCA) designed for finding multiple-to-multiple linear correlations within a single set of variables.
no code implementations • 13 Nov 2015 • William Herlands, Maria De-Arteaga, Daniel Neill, Artur Dubrawski
We compute approximate solutions to L0 regularized linear regression using L1 regularization, also known as the Lasso, as an initialization step.
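A minimal sketch of the general idea (not the paper's full algorithm): take the support selected by the Lasso as an initialization and refit an unpenalized model on it to approximate a sparse L0-style solution. The regularization strength below is an arbitrary assumption:

```python
# Hedged sketch: L1 step selects a sparse support, then ordinary least squares
# is refit on that support to remove the Lasso's shrinkage bias.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def l0_approx_via_lasso(X, y, alpha=0.1):
    support = np.flatnonzero(Lasso(alpha=alpha).fit(X, y).coef_)  # features kept by the Lasso
    beta = np.zeros(X.shape[1])
    if support.size:
        beta[support] = LinearRegression().fit(X[:, support], y).coef_  # refit on the support
    return beta, support
```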