no code implementations • 29 Apr 2022 • Maria De-Arteaga, Alexandra Chouldechova, Artur Dubrawski
Effective human-AI collaboration requires a system design that provides humans with meaningful ways to make sense of and critically evaluate algorithmic recommendations.
1 code implementation • 17 Feb 2022 • Kate Donahue, Alexandra Chouldechova, Krishnaram Kenthapadi
In many settings, however, the final prediction or decision of a system is under the control of a human, who uses an algorithm's output along with their own expertise to produce a combined prediction.
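A stylized way to see why such combination can help: if the human and the algorithm err independently, a weighted average of their estimates can beat either alone. The simulation below is an illustrative sketch, not the paper's model; the noise scales and weights are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(size=100_000)
# Illustrative assumption: human and algorithm observe the truth
# through independent noise channels of different strengths.
human = truth + rng.normal(scale=1.0, size=truth.shape)
algo = truth + rng.normal(scale=0.8, size=truth.shape)

for w in (0.0, 0.4, 1.0):  # weight placed on the human's estimate
    combined = w * human + (1 - w) * algo
    print(f"w={w}: MSE={np.mean((combined - truth) ** 2):.3f}")
```

With these noise levels the intermediate weight outperforms both extremes (MSE near 0.39, versus 0.64 for the algorithm alone and 1.0 for the human alone), which is the kind of complementarity at issue.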
1 code implementation • 3 Sep 2021 • Riccardo Fogliato, Alexandra Chouldechova, Zachary Lipton
As algorithmic risk assessment instruments (RAIs) are increasingly adopted to assist decision makers, their predictive performance and potential to promote inequity have come under scrutiny.
1 code implementation • 30 Jan 2021 • Nil-Jana Akpinar, Maria De-Arteaga, Alexandra Chouldechova
Our analysis is based on a simulation patterned after district-level victimization and crime reporting survey data for Bogotá, Colombia.
no code implementations • 24 Jan 2021 • Maria De-Arteaga, Artur Dubrawski, Alexandra Chouldechova
However, the nature of the labels available for training often limits the usefulness of predictive models for decision support.
1 code implementation • 2 Jan 2021 • Amanda Coston, Ashesh Rambachan, Alexandra Chouldechova
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance, or "the set of good models."
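As a rough empirical illustration of the idea (the paper's framework characterizes these properties over the full set; this sketch only samples it): fit a family of near-optimal models and measure how much a fairness metric can vary among them. The hyperparameter grid, tolerance, and variable names below are all placeholders.

```python
import numpy as np
from itertools import product
from sklearn.linear_model import LogisticRegression

# Hypothetical inputs: X features, y labels, g a binary group indicator.
def fairness_range_over_good_models(X, y, g, eps=0.01):
    models, accs = [], []
    # A crude hyperparameter search stands in for the model class.
    for C, w in product([0.01, 0.1, 1.0, 10.0], [None, "balanced"]):
        m = LogisticRegression(C=C, class_weight=w, max_iter=1000).fit(X, y)
        models.append(m)
        accs.append(m.score(X, y))
    best = max(accs)
    gaps = []
    for m, acc in zip(models, accs):
        if acc >= best - eps:  # a "good" model: near-optimal accuracy
            pred = m.predict(X)
            # Demographic-parity gap between the two groups.
            gaps.append(abs(pred[g == 1].mean() - pred[g == 0].mean()))
    return min(gaps), max(gaps)
```

A wide gap between the returned minimum and maximum signals that models with indistinguishable overall performance can differ substantially in their fairness properties.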
no code implementations • NeurIPS 2020 • Amanda Coston, Edward H. Kennedy, Alexandra Chouldechova
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
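A minimal sketch of the generic doubly-robust ("DR-learner") recipe such a procedure builds on, assuming a binary historical decision A and a target of E[Y(0) | V], where V is the feature set available at prediction time. The sample splitting, learners, and variable names are illustrative, not the paper's exact estimator.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

# Hypothetical arrays: X (all training-time features), V (features available
# at prediction time), A (historical decision, 1 = treated), Y (outcome).
def dr_counterfactual_model(X, V, A, Y):
    # Stage 1: estimate nuisances on one half of the data (sample splitting).
    n = len(Y)
    idx = np.random.permutation(n)
    fold1, fold2 = idx[: n // 2], idx[n // 2:]

    # P(A = 0 | X): probability of the baseline decision.
    pi = GradientBoostingClassifier().fit(X[fold1], A[fold1])
    # E[Y | A = 0, X]: outcome regression among untreated cases.
    untreated = fold1[A[fold1] == 0]
    mu = GradientBoostingRegressor().fit(X[untreated], Y[untreated])

    # Stage 2: doubly-robust pseudo-outcomes on the held-out fold.
    p0 = np.clip(pi.predict_proba(X[fold2])[:, 0], 0.01, 1.0)  # avoid division blow-up
    mu_hat = mu.predict(X[fold2])
    phi = (A[fold2] == 0) / p0 * (Y[fold2] - mu_hat) + mu_hat

    # Final stage: regress pseudo-outcomes on the runtime-available features V.
    return GradientBoostingRegressor().fit(V[fold2], phi)
```

The "doubly robust" property is that the pseudo-outcome remains consistent for the counterfactual mean if either the propensity model or the outcome regression is well estimated.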
1 code implementation • 30 Aug 2019 • Amanda Coston, Alan Mishler, Edward H. Kennedy, Alexandra Chouldechova
These tools thus reflect risk under the historical policy, rather than under the different decision options that they are intended to inform.
no code implementations • NAACL 2019 • Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky, Adam Tauman Kalai
In the context of mitigating bias in occupation classification, we propose a method for discouraging correlation between the predicted probability of an individual's true occupation and a word embedding of their name.
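A sketch of one way such a penalty can be implemented as a training loss, assuming a PyTorch classifier and a batch of name embeddings; the covariance form and the weight lam are illustrative stand-ins, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def decorrelation_loss(logits, labels, name_emb, lam=1.0):
    """Cross-entropy plus a penalty discouraging correlation between the
    predicted probability of each person's true occupation and their name
    embedding. Sketch of the idea; details differ from the paper."""
    ce = F.cross_entropy(logits, labels)
    probs = F.softmax(logits, dim=1)
    p_true = probs[torch.arange(len(labels)), labels]  # P(true occupation)
    # Center both quantities over the batch.
    p_c = p_true - p_true.mean()
    e_c = name_emb - name_emb.mean(dim=0, keepdim=True)
    # Covariance between p_true and each embedding dimension.
    cov = (e_c * p_c.unsqueeze(1)).mean(dim=0)
    return ce + lam * cov.pow(2).sum()
```

Driving this covariance toward zero removes the model's incentive to let name information (a proxy for gender and ethnicity) shift its confidence in the true occupation.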
2 code implementations • 27 Jan 2019 • Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Adam Tauman Kalai
We present a large-scale study of gender bias in occupation classification, a task where the use of machine learning may lead to negative outcomes in people's lives.
no code implementations • 20 Oct 2018 • Alexandra Chouldechova, Aaron Roth
The last few years have seen an explosion of academic and popular interest in algorithmic fairness.
no code implementations • 2 Jul 2018 • Maria De-Arteaga, Artur Dubrawski, Alexandra Chouldechova
We explore the problem of learning under selective labels in the context of algorithm-assisted decision making.
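One way to instantiate the data-augmentation pattern in code, under strong illustrative assumptions (multiple expert decisions recorded per case, with unanimous screen-outs imputed as negative outcomes); this is a sketch of the idea, not the paper's method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical inputs: X features; y outcomes as floats, with np.nan where
# the human decision suppressed the label; expert_decisions, one column
# per expert, 1 = screened in, 0 = screened out.
def augment_with_expert_consistency(X, y, expert_decisions):
    labeled = ~np.isnan(y)
    # Cases every expert screened out: treat the unanimous decision as an
    # imputed negative label (the "expert consistency" idea; illustrative).
    unanimous_out = (expert_decisions == 0).all(axis=1) & ~labeled
    X_aug = np.vstack([X[labeled], X[unanimous_out]])
    y_aug = np.concatenate([y[labeled], np.zeros(unanimous_out.sum())])
    return RandomForestClassifier().fit(X_aug, y_aug)
```

The augmentation matters because under selective labels, training only on the labeled subset bakes the historical decision policy into the model.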
no code implementations • NeurIPS 2018 • Zachary C. Lipton, Alexandra Chouldechova, Julian McAuley
Following related work in law and policy, two notions of disparity have come to shape the study of fairness in algorithmic decision-making.
no code implementations • 30 Jun 2017 • Alexandra Chouldechova, Max G'Sell
Complex statistical machine learning models are increasingly being used or considered for use in high-stakes decision-making pipelines in domains such as financial services, health care, criminal justice and human services.
no code implementations • 28 Feb 2017 • Alexandra Chouldechova
Recidivism prediction instruments (RPIs) provide decision makers with an assessment of the likelihood that a criminal defendant will reoffend at a future point in time.
no code implementations • 24 Oct 2016 • Alexandra Chouldechova
Recidivism prediction instruments provide decision makers with an assessment of the likelihood that a criminal defendant will reoffend at a future point in time.
no code implementations • 11 Jun 2015 • Alexandra Chouldechova, Trevor Hastie
We introduce GAMSEL (Generalized Additive Model Selection), a penalized likelihood approach for fitting sparse generalized additive models in high dimension.
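GAMSEL itself is available as the gamsel package in R. As a rough Python stand-in (a sketch, not GAMSEL's penalty, which adaptively shrinks each feature's contribution to exactly zero, a linear term, or a smooth curve), one can combine a spline expansion with an L1-penalized fit:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import LogisticRegressionCV

# Expand each feature into a cubic-spline basis, then let the L1 penalty
# zero out basis coefficients, yielding a sparse additive model.
model = make_pipeline(
    SplineTransformer(n_knots=6, degree=3),
    LogisticRegressionCV(penalty="l1", solver="saga", max_iter=5000),
)
# model.fit(X, y)  # X: numeric feature matrix, y: binary response
```

Unlike GAMSEL, a plain lasso on spline coefficients does not distinguish "drop the feature" from "keep only its linear part", which is precisely the structured selection the penalized likelihood in the paper provides.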