1 code implementation • 23 Jun 2023 • Jamelle Watson-Daniels, Solon Barocas, Jake M. Hofman, Alexandra Chouldechova
Along the way, we refine the study of single-target multiplicity by introducing notions of multiplicity that respect resource constraints -- a feature of many real-world tasks that is not captured by existing notions of predictive multiplicity.
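As a rough illustration of the resource-constrained idea (not the paper's formal definitions), the sketch below measures how much top-k selections can disagree across a set of comparably accurate models; the models, scores, and capacity k are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 100  # n individuals, but resources to select only k of them

# Hypothetical stand-in: risk scores from several near-equivalent models.
base = rng.uniform(size=n)
models = [np.clip(base + rng.normal(scale=0.05, size=n), 0, 1) for _ in range(5)]

# Under a resource constraint, what matters is who lands in each model's top k.
top_k_sets = [set(np.argsort(-scores)[:k]) for scores in models]

always = set.intersection(*top_k_sets)  # selected by every model
ever = set.union(*top_k_sets)           # selected by at least one model

# Individuals whose selection depends on which good model was chosen.
print(f"selected under every model: {len(always)}")
print(f"selection depends on model choice: {len(ever - always)}")
```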
no code implementations • 30 May 2023 • Anjalie Field, Amanda Coston, Nupoor Gandhi, Alexandra Chouldechova, Emily Putnam-Hornstein, David Steier, Yulia Tsvetkov
Given well-established racial bias in this setting, we investigate possible ways deployed NLP is liable to increase racial disparities.
no code implementations • 20 Jun 2022 • Emily Black, Hadi Elzayn, Alexandra Chouldechova, Jacob Goldin, Daniel E. Ho
First, we show how the use of more flexible machine learning (classification) methods -- as opposed to simpler models -- shifts audit burdens from high- to middle-income taxpayers.
no code implementations • 18 May 2022 • Logan Stapleton, Min Hun Lee, Diana Qing, Marya Wright, Alexandra Chouldechova, Kenneth Holstein, Zhiwei Steven Wu, Haiyi Zhu
In this work, we conducted seven design workshops with 35 stakeholders who have been impacted by the child welfare system or who work in it, to understand their beliefs and concerns around PRMs and to engage them in imagining new uses of data and technologies in the child welfare system.
no code implementations • 29 Apr 2022 • Maria De-Arteaga, Alexandra Chouldechova, Artur Dubrawski
Effective human-AI collaboration requires a system design that provides humans with meaningful ways to make sense of and critically evaluate algorithmic recommendations.
1 code implementation • 17 Feb 2022 • Kate Donahue, Alexandra Chouldechova, Krishnaram Kenthapadi
In many settings, however, the final prediction or decision of a system is under the control of a human, who uses an algorithm's output along with their own personal expertise in order to produce a combined prediction.
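A toy illustration of that setup (not the paper's model; the simulated signals and accuracies are invented): a human judgment and an algorithmic score that are each reliable on different cases, combined by a simple learned weighting that can outperform either alone.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
y = rng.integers(0, 2, size=n)

# Hypothetical signals: the algorithm and the human are each reliable on
# different random subsets of cases, so neither strictly dominates the other.
algo_score = np.where(rng.uniform(size=n) < 0.75, y, 1 - y) + rng.normal(scale=0.3, size=n)
human_score = np.where(rng.uniform(size=n) < 0.75, y, 1 - y) + rng.normal(scale=0.3, size=n)

X = np.column_stack([algo_score, human_score])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

combiner = LogisticRegression().fit(X_tr, y_tr)
accuracy = lambda scores: ((scores > 0.5) == y_te).mean()
print(f"algorithm alone: {accuracy(X_te[:, 0]):.3f}")
print(f"human alone:     {accuracy(X_te[:, 1]):.3f}")
print(f"combined:        {combiner.score(X_te, y_te):.3f}")
```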
1 code implementation • 3 Sep 2021 • Riccardo Fogliato, Alexandra Chouldechova, Zachary Lipton
As algorithmic risk assessment instruments (RAIs) are increasingly adopted to assist decision makers, their predictive performance and potential to promote inequity have come under scrutiny.
1 code implementation • 30 Jan 2021 • Nil-Jana Akpinar, Maria De-Arteaga, Alexandra Chouldechova
Our analysis is based on a simulation patterned after district-level victimization and crime reporting survey data for Bogotá, Colombia.
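A stripped-down sketch of this kind of simulation (illustrative only; the actual study is calibrated to the Bogotá survey data, and all numbers below are invented): when districts differ in how often crime is reported, hot spots ranked by reported incidents diverge from those ranked by true incidence.

```python
import numpy as np

rng = np.random.default_rng(42)
n_districts, k = 20, 5  # flag the top-k "hot spot" districts

true_rate = rng.gamma(shape=2.0, scale=50.0, size=n_districts)  # true incident rates
report_rate = rng.uniform(0.2, 0.9, size=n_districts)           # fraction reported

true_counts = rng.poisson(true_rate)
reported_counts = rng.binomial(true_counts, report_rate)

hot_true = set(np.argsort(-true_counts)[:k])
hot_reported = set(np.argsort(-reported_counts)[:k])

print(f"hot spots by true incidence:  {sorted(hot_true)}")
print(f"hot spots by reported counts: {sorted(hot_reported)}")
print(f"overlap: {len(hot_true & hot_reported)}/{k}")
```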
no code implementations • 24 Jan 2021 • Maria De-Arteaga, Vincent Jeanselme, Artur Dubrawski, Alexandra Chouldechova
However, there is frequently a gap between decision objectives and what is captured in the observed outcomes used as labels to train ML models.
1 code implementation • 2 Jan 2021 • Amanda Coston, Ashesh Rambachan, Alexandra Chouldechova
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance, or "the set of good models."
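A minimal sketch of that framing (hypothetical data, models, and disparity metric, not the paper's method): fit many comparably accurate models and report the spread of a disparity measure across the resulting "good set".

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
group = rng.integers(0, 2, size=n)  # hypothetical protected attribute
X = np.column_stack([rng.normal(size=(n, 4)), group])
y = (X[:, 0] + 0.5 * group + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=1)

# Fit many models differing only in random seed; keep those within epsilon
# of the best accuracy -- a crude stand-in for the "set of good models".
fits = [RandomForestClassifier(n_estimators=50, random_state=s).fit(X_tr, y_tr)
        for s in range(20)]
accs = np.array([m.score(X_te, y_te) for m in fits])
good = [m for m, a in zip(fits, accs) if a >= accs.max() - 0.01]

def selection_rate_gap(model):
    pred = model.predict(X_te)
    return abs(pred[g_te == 0].mean() - pred[g_te == 1].mean())

gaps = [selection_rate_gap(m) for m in good]
print(f"{len(good)} comparably accurate models")
print(f"selection-rate gap across the good set: {min(gaps):.3f} to {max(gaps):.3f}")
```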
no code implementations • NeurIPS 2020 • Amanda Coston, Edward H. Kennedy, Alexandra Chouldechova
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
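The sketch below is a generic doubly-robust pseudo-outcome construction in the spirit of such procedures, not the authors' exact algorithm; the simulation and variable names are hypothetical. It estimates a propensity model and an outcome model, forms the doubly-robust pseudo-outcome for the counterfactual outcome under the baseline decision A=0, and regresses it on the features assumed available at prediction time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(7)
n = 5000
X = rng.normal(size=(n, 4))                      # features available at training time
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # historical decision (confounded)
Y = X[:, 1] + 0.5 * A + rng.normal(size=n)       # observed outcome

# Nuisance models (in practice these would be fit with cross-fitting).
pi = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]     # P(A=1 | X)
mu0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)  # E[Y | X, A=0]

# Doubly-robust pseudo-outcome for the counterfactual Y under A=0:
# consistent if either the propensity or the outcome model is correct.
phi = mu0 + (A == 0) / np.clip(1 - pi, 1e-3, None) * (Y - mu0)

# Final model regresses the pseudo-outcome on (here, a subset of) the
# features available at prediction time.
V = X[:, :2]
counterfactual_model = LinearRegression().fit(V, phi)
print("coefficients:", counterfactual_model.coef_.round(3))
```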
1 code implementation • 30 Aug 2019 • Amanda Coston, Alan Mishler, Edward H. Kennedy, Alexandra Chouldechova
These tools thus reflect risk under the historical policy, rather than under the different decision options that the tools are intended to inform.
no code implementations • NAACL 2019 • Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky, Adam Tauman Kalai
In the context of mitigating bias in occupation classification, we propose a method for discouraging correlation between the predicted probability of an individual's true occupation and a word embedding of their name.
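One way to operationalize such a penalty (a sketch of the general idea, not the paper's exact loss; the function and data below are hypothetical) is to add a term proportional to the covariance between the predicted probability of the true occupation and each dimension of the name embedding.

```python
import numpy as np

def covariance_penalty(p_true, name_emb):
    """Penalty on the covariance between the predicted probability of the
    true occupation (shape [n]) and each name-embedding dimension ([n, d])."""
    p_centered = p_true - p_true.mean()
    emb_centered = name_emb - name_emb.mean(axis=0)
    cov = emb_centered.T @ p_centered / len(p_true)  # per-dimension covariance
    return np.linalg.norm(cov)  # shrink toward zero correlation with names

# Tiny hypothetical batch: 8 examples, 3-dimensional name embeddings.
rng = np.random.default_rng(3)
p = rng.uniform(size=8)
emb = rng.normal(size=(8, 3))
print(f"penalty: {covariance_penalty(p, emb):.4f}")
# During training this term would be added to the classification loss,
# e.g. total_loss = cross_entropy + lam * covariance_penalty(p, emb).
```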
4 code implementations • 27 Jan 2019 • Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Adam Tauman Kalai
We present a large-scale study of gender bias in occupation classification, a task where the use of machine learning may lead to negative outcomes on people's lives.
no code implementations • 20 Oct 2018 • Alexandra Chouldechova, Aaron Roth
The last few years have seen an explosion of academic and popular interest in algorithmic fairness.
no code implementations • 2 Jul 2018 • Maria De-Arteaga, Artur Dubrawski, Alexandra Chouldechova
We explore the problem of learning under selective labels in the context of algorithm-assisted decision making.
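A small simulation of the selective-labels setup (illustrative only, not the paper's method): outcomes are observed only for cases the historical decision maker accepted, so statistics computed on the labeled subset are biased relative to the full population.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000
risk = rng.uniform(size=n)       # true underlying risk
outcome = rng.binomial(1, risk)  # realized outcome for everyone

# The historical decision maker tends to screen out high-risk cases, so
# outcomes are observed mostly for the lower-risk, accepted population.
accepted = rng.uniform(size=n) > risk  # selective labeling

print(f"base rate, full population:        {outcome.mean():.3f}")
print(f"base rate, labeled (accepted) only: {outcome[accepted].mean():.3f}")
```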
1 code implementation • NeurIPS 2018 • Zachary C. Lipton, Alexandra Chouldechova, Julian McAuley
Two notions of disparity, drawn from related work in law and policy, have come to shape the study of fairness in algorithmic decision-making.
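Of the two, impact disparity is directly measurable from a system's decisions; the sketch below (hypothetical decisions and rates) computes the selection-rate ratio behind the "four-fifths rule". Treatment disparity, by contrast, concerns whether the protected attribute is used in the procedure itself and is not a property of the outputs alone.

```python
import numpy as np

rng = np.random.default_rng(9)
group = rng.integers(0, 2, size=2000)
pred = rng.binomial(1, np.where(group == 1, 0.30, 0.45))  # hypothetical decisions

rate0 = pred[group == 0].mean()
rate1 = pred[group == 1].mean()
print(f"selection rates: {rate0:.3f} vs {rate1:.3f}")
print(f"impact ratio (four-fifths rule compares this to 0.8): {rate1 / rate0:.3f}")
```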
no code implementations • 30 Jun 2017 • Alexandra Chouldechova, Max G'Sell
Complex statistical machine learning models are increasingly being used or considered for use in high-stakes decision-making pipelines in domains such as financial services, health care, criminal justice and human services.
no code implementations • 28 Feb 2017 • Alexandra Chouldechova
Recidivism prediction instruments (RPIs) provide decision makers with an assessment of the likelihood that a criminal defendant will reoffend at a future point in time.
no code implementations • 24 Oct 2016 • Alexandra Chouldechova
Recidivism prediction instruments provide decision makers with an assessment of the likelihood that a criminal defendant will reoffend at a future point in time.
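The paper's central tension can be seen numerically (a toy demonstration with invented score distributions, not the paper's data): when a score is calibrated within each of two groups that have different base rates, thresholding it generally yields different false positive rates across the groups.

```python
import numpy as np

rng = np.random.default_rng(11)

def fpr_at_threshold(scores, t=0.5):
    # Outcomes drawn with P(y=1 | score) = score, so the score is
    # calibrated within the group by construction.
    y = rng.binomial(1, scores)
    flagged = scores >= t
    return flagged[y == 0].mean()

# Two groups with calibrated scores but different prevalence.
low = rng.beta(2, 5, size=100000)   # base rate around 0.29
high = rng.beta(5, 2, size=100000)  # base rate around 0.71

print(f"FPR, lower-prevalence group:  {fpr_at_threshold(low):.3f}")
print(f"FPR, higher-prevalence group: {fpr_at_threshold(high):.3f}")
```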
no code implementations • 11 Jun 2015 • Alexandra Chouldechova, Trevor Hastie
We introduce GAMSEL (Generalized Additive Model Selection), a penalized likelihood approach for fitting sparse generalized additive models in high dimension.
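GAMSEL itself is distributed as an R package; the sketch below only gestures at the underlying idea in Python (a loose approximation using a plain L1 penalty, not the GAMSEL penalty, with an invented simulation): expand each feature into a linear term plus spline terms and let the penalty decide, per feature, between a zero, linear, or nonlinear fit.

```python
import numpy as np
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p = 1000, 6
X = rng.uniform(-2, 2, size=(n, p))
# Ground truth: feature 0 enters nonlinearly, feature 1 linearly, rest are noise.
y = np.sin(2 * X[:, 0]) + X[:, 1] + rng.normal(scale=0.3, size=n)

basis = SplineTransformer(n_knots=6, degree=3, include_bias=False).fit_transform(X)
n_spl = basis.shape[1] // p     # spline columns per feature
design = np.hstack([X, basis])  # linear terms first, then each feature's splines

fit = Lasso(alpha=0.05).fit(design, y)
for j in range(p):
    lin = abs(fit.coef_[j]) > 1e-6
    spl = np.any(np.abs(fit.coef_[p + j * n_spl: p + (j + 1) * n_spl]) > 1e-6)
    print(f"feature {j}: {'nonlinear' if spl else 'linear' if lin else 'zero'}")
```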