no code implementations • 29 Apr 2022 • Maria De-Arteaga, Alexandra Chouldechova, Artur Dubrawski
Effective human-AI collaboration requires a system design that provides humans with meaningful ways to make sense of and critically evaluate algorithmic recommendations.
no code implementations • 28 Apr 2022 • Terrence Neumann, Maria De-Arteaga, Sina Fazelpour
Faced with the scale and surge of misinformation on social media, many platforms and fact-checking organizations have turned to algorithms for automating key parts of misinformation detection pipelines.
no code implementations • 27 Apr 2022 • Jakob Schoeffer, Maria De-Arteaga, Niklas Kuehl
It is known that recommendations of AI-based systems can be incorrect or unfair.
no code implementations • 25 Aug 2021 • Myra Cheng, Maria De-Arteaga, Lester Mackey, Adam Tauman Kalai
We study this issue through the lens of gender bias in occupation classification from biographies.
1 code implementation • 30 Jan 2021 • Nil-Jana Akpinar, Maria De-Arteaga, Alexandra Chouldechova
Our analysis is based on a simulation patterned after district-level victimization and crime reporting survey data for Bogotá, Colombia.
no code implementations • 24 Jan 2021 • Maria De-Arteaga, Artur Dubrawski, Alexandra Chouldechova
However, the nature of the labels available for training these models often hampers the usefulness of predictive models for decision support.
1 code implementation • 19 Jun 2019 • Maria De-Arteaga, Benedikt Boecking
After the peace agreement of 2016 with FARC, the killings of social leaders have emerged as an important post-conflict challenge for Colombia.
no code implementations • NAACL 2019 • Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky, Adam Tauman Kalai
In the context of mitigating bias in occupation classification, we propose a method for discouraging correlation between the predicted probability of an individual's true occupation and a word embedding of their name.
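The decorrelation idea can be sketched as a penalty term added to the training loss (a minimal illustration under assumed toy data, not the paper's exact formulation; all names and values below are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy placeholders: n biographies, d-dimensional name embeddings, and a
# classifier's predicted probability of each person's true occupation.
n, d = 200, 8
name_emb = rng.normal(size=(n, d))
p_true = rng.uniform(0.1, 0.9, size=n)

def correlation_penalty(p, emb):
    """Squared norm of the empirical covariance between the predicted
    probabilities and each embedding dimension. Driving this toward zero
    discourages the predictions from carrying information about names."""
    p_c = p - p.mean()
    emb_c = emb - emb.mean(axis=0)
    cov = p_c @ emb_c / len(p)  # shape (d,)
    return float(cov @ cov)

# The penalty is combined with the usual classification loss, e.g.
#   total_loss = cross_entropy + lam * correlation_penalty(p, emb)
lam = 1.0
penalty = lam * correlation_penalty(p_true, name_emb)
```

A constant prediction vector has zero covariance with every embedding dimension, so the penalty vanishes exactly when predictions carry no linear name signal.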
2 code implementations • 27 Jan 2019 • Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Adam Tauman Kalai
We present a large-scale study of gender bias in occupation classification, a task where the use of machine learning may lead to negative outcomes on people's lives.
no code implementations • 21 Dec 2018 • Maria De-Arteaga, Amanda Coston, William Herlands
This is the Proceedings of the NeurIPS 2018 Workshop on Machine Learning for the Developing World: Achieving Sustainable Impact, held in Montreal, Canada, on December 8, 2018.
no code implementations • 20 Dec 2018 • Nathaniel Swinger, Maria De-Arteaga, Neil Thomas Heffernan IV, Mark DM Leiserson, Adam Tauman Kalai
The inputs to our algorithm are a list of target tokens, e.g., names, and a word embedding.
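A rough sketch of such a pipeline, under the assumption that groups of target tokens are clustered in embedding space and then associated with nearby vocabulary words (toy random vectors stand in for a real embedding; this illustrates the input/output shape, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a word embedding: random vectors over a tiny vocabulary.
vocab = ["nurse", "engineer", "teacher", "pilot", "warm", "logical"]
names = ["alice", "bob", "carol", "dave"]
emb = {w: rng.normal(size=16) for w in vocab + names}

def cluster_and_associate(target_tokens, embedding, k=2, top=2):
    """Cluster the target tokens (e.g. names) in embedding space, then
    return, for each cluster, the vocabulary words closest to its
    centroid -- a crude way to surface group/word associations."""
    X = np.stack([embedding[t] for t in target_tokens])
    # tiny k-means: random centroids, a few assignment/update rounds
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(10):
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    out = []
    for j in range(k):
        sims = {w: float(embedding[w] @ centroids[j]) for w in vocab}
        out.append(sorted(sims, key=sims.get, reverse=True)[:top])
    return out

associations = cluster_and_associate(names, emb)
```

With a real embedding, systematic differences between the word lists returned for each name cluster are candidate biases.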
no code implementations • 2 Jul 2018 • Maria De-Arteaga, Artur Dubrawski, Alexandra Chouldechova
We explore the problem of learning under selective labels in the context of algorithm-assisted decision making.
no code implementations • 27 Nov 2017 • Maria De-Arteaga, William Herlands
This is the Proceedings of the NIPS 2017 Workshop on Machine Learning for the Developing World, held in Long Beach, California, USA, on December 8, 2017.
no code implementations • 19 Nov 2015 • Maria De-Arteaga, Artur Dubrawski, Peter Huggins
We present an extension of sparse Canonical Correlation Analysis (CCA) designed for finding multiple-to-multiple linear correlations within a single set of variables.
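Within-set sparse correlation mining can be sketched with alternating soft-thresholded power iterations on the correlation matrix, with the diagonal zeroed so trivial self-correlations are excluded. This is a generic penalized-matrix-decomposition-style sketch, not the authors' exact algorithm:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_within_set_cca(X, penalty=0.3, iters=50):
    """Find sparse weight vectors u, v maximizing u' C v, where C is the
    sample correlation matrix of X with its diagonal zeroed (ruling out
    the trivial u = v self-correlation solution)."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    C = Xs.T @ Xs / len(Xs)
    np.fill_diagonal(C, 0.0)
    v = np.linalg.eigh(C)[1][:, -1]  # warm start: leading eigenvector
    for _ in range(iters):
        u = soft_threshold(C @ v, penalty)
        if np.linalg.norm(u) > 0:
            u /= np.linalg.norm(u)
        v = soft_threshold(C @ u, penalty)
        if np.linalg.norm(v) > 0:
            v /= np.linalg.norm(v)
    return u, v

# Demo: variables 0 and 1 share a latent factor; the rest are noise.
rng = np.random.default_rng(3)
z = rng.normal(size=200)
X = rng.normal(size=(200, 6)) * 0.5
X[:, 0] += z
X[:, 1] += z
u, v = sparse_within_set_cca(X)
```

The soft-thresholding step zeroes out weakly correlated variables, so the supports of u and v pick out the multiple-to-multiple correlated subsets.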
no code implementations • 13 Nov 2015 • William Herlands, Maria De-Arteaga, Daniel Neill, Artur Dubrawski
We compute approximate solutions to L0 regularized linear regression using L1 regularization, also known as the Lasso, as an initialization step.
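One common instantiation of this idea (a sketch of the general Lasso-as-initialization recipe, not the paper's exact procedure): solve the L1 problem, keep its support, and refit unpenalized least squares on that support to debias the surviving coefficients.

```python
import numpy as np

def lasso_ista(X, y, lam, iters=1000):
    """L1-regularized least squares, 0.5*||Xw - y||^2 + lam*||w||_1,
    solved by ISTA (proximal gradient descent)."""
    w = np.zeros(X.shape[1])
    L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the gradient
    for _ in range(iters):
        z = w - X.T @ (X @ w - y) / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return w

def l0_via_lasso(X, y, lam):
    """Approximate L0-regularized regression: use the Lasso solution to
    select a support, then refit unpenalized least squares on it."""
    w = lasso_ista(X, y, lam)
    support = np.flatnonzero(np.abs(w) > 1e-8)
    w0 = np.zeros_like(w)
    if support.size:
        w0[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
    return w0

# Demo: a 2-sparse ground truth recovered from noisy observations.
rng = np.random.default_rng(7)
X = rng.normal(size=(100, 10))
w_true = np.zeros(10)
w_true[0], w_true[1] = 3.0, -2.0
y = X @ w_true + 0.1 * rng.normal(size=100)
w_hat = l0_via_lasso(X, y, lam=5.0)
```

The refit step removes the Lasso's shrinkage bias on the selected coefficients, so the result is closer to a true L0 solution than the L1 estimate alone.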