no code implementations • 21 May 2024 • Anna Kawakami, Amanda Coston, Hoda Heidari, Kenneth Holstein, Haiyi Zhu
As public sector agencies rapidly introduce new AI tools in high-stakes domains like social services, it becomes critical to understand how decisions to adopt these tools are made in practice.
1 code implementation • 1 Apr 2024 • Luke Guerdan, Amanda Coston, Kenneth Holstein, Zhiwei Steven Wu
However, it is challenging to compare predictive performance against an existing decision-making policy that is generally under-specified and dependent on unobservable factors.
no code implementations • 30 May 2023 • Anjalie Field, Amanda Coston, Nupoor Gandhi, Alexandra Chouldechova, Emily Putnam-Hornstein, David Steier, Yulia Tsvetkov
Given well-established racial bias in this setting, we investigate the ways in which deployed NLP systems may increase racial disparities.
no code implementations • 26 Mar 2023 • Anna Kawakami, Amanda Coston, Haiyi Zhu, Hoda Heidari, Kenneth Holstein
AI-based decision-making tools are rapidly spreading across a range of real-world, complex domains like healthcare, criminal justice, and child welfare.
1 code implementation • 22 Feb 2023 • Luke Guerdan, Amanda Coston, Kenneth Holstein, Zhiwei Steven Wu
We also develop a method for estimating treatment-dependent measurement error parameters when these are unknown in advance.
no code implementations • 13 Feb 2023 • Luke Guerdan, Amanda Coston, Zhiwei Steven Wu, Kenneth Holstein
In this paper, we identify five sources of target variable bias that can impact the validity of proxy labels in human-AI decision-making tasks.
no code implementations • 19 Dec 2022 • Ashesh Rambachan, Amanda Coston, Edward Kennedy
Predictive algorithms inform consequential decisions in settings where the outcome is selectively observed given choices made by human decision makers.
no code implementations • 19 Jul 2022 • Amanda Coston, Edward H. Kennedy
We provide a new definition of collapsibility that makes this choice of aggregation method explicit, and we demonstrate that the odds ratio is collapsible under geometric aggregation.
no code implementations • 30 Jun 2022 • Amanda Coston, Anna Kawakami, Haiyi Zhu, Ken Holstein, Hoda Heidari
Recent research increasingly calls into question the appropriateness of using predictive tools in complex, real-world tasks.
1 code implementation • 2 Jan 2021 • Amanda Coston, Ashesh Rambachan, Alexandra Chouldechova
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance, or "the set of good models."
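The idea that near-equivalent models can disagree on fairness can be illustrated with a minimal sketch. This is not the paper's framework: it uses hypothetical synthetic data and a toy family of threshold classifiers, keeping those whose accuracy is within a small tolerance of the best ("the set of good models") and reporting how much a demographic-parity gap varies across that set.

```python
# Hedged sketch (not the authors' implementation): models with
# near-identical accuracy can differ substantially in group disparity.
import random

random.seed(0)

# Hypothetical synthetic data: score s in [0,1], group g in {0,1}, label y.
# Group 1 is skewed toward lower scores so disparities are nontrivial.
data = []
for _ in range(2000):
    g = random.randint(0, 1)
    s = random.random() ** (1 + g)
    y = 1 if random.random() < 0.3 + 0.5 * s else 0
    data.append((s, g, y))

def accuracy(t):
    """Accuracy of the threshold classifier pred = 1{s >= t}."""
    return sum((s >= t) == (y == 1) for s, g, y in data) / len(data)

def positive_rate_gap(t):
    """Demographic-parity gap: |P(pred=1 | g=0) - P(pred=1 | g=1)|."""
    rates = []
    for grp in (0, 1):
        rows = [(s, g, y) for s, g, y in data if g == grp]
        rates.append(sum(s >= t for s, g, y in rows) / len(rows))
    return abs(rates[0] - rates[1])

thresholds = [i / 100 for i in range(101)]
best = max(accuracy(t) for t in thresholds)
eps = 0.01  # tolerance defining "similar overall performance"
good_set = [t for t in thresholds if accuracy(t) >= best - eps]

gaps = [positive_rate_gap(t) for t in good_set]
print(f"{len(good_set)} near-optimal models; disparity ranges "
      f"from {min(gaps):.3f} to {max(gaps):.3f}")
```

The spread between the minimum and maximum gap over the good set is the kind of quantity such a framework characterizes: among models a practitioner could defensibly deploy, how much room is there to reduce disparity at negligible cost to performance?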
1 code implementation • 15 Jul 2020 • George H. Chen, Linhong Li, Ren Zuo, Amanda Coston, Jeremy C. Weiss
We present a neural network framework for learning a survival model to predict a time-to-event outcome while simultaneously learning a topic model that reveals feature relationships.
no code implementations • NeurIPS 2020 • Amanda Coston, Edward H. Kennedy, Alexandra Chouldechova
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
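The general doubly-robust recipe behind such procedures can be sketched as follows. This is the standard AIPW-style construction, not the authors' exact method: a pseudo-outcome combines an outcome-regression model and a propensity model, and remains consistent if either nuisance model is correct. The data, nuisance functions, and target decision (a = 0) below are illustrative assumptions.

```python
# Hedged sketch of a doubly-robust (AIPW-style) pseudo-outcome for
# counterfactual prediction -- not the authors' exact procedure.
# Goal: predict the outcome Y under a fixed decision a = 0 when the
# observed data were generated under a confounded historical policy.
import random

random.seed(1)

n = 5000
rows = []
for _ in range(n):
    x = random.random()                     # covariate
    pi = 0.2 + 0.6 * x                      # P(A=1 | X=x): confounded policy
    a = 1 if random.random() < pi else 0
    y = x + 0.5 * a + random.gauss(0, 0.1)  # outcome depends on X and A
    rows.append((x, a, y))

# Nuisance estimates: the true functions are plugged in for simplicity;
# in practice these are fit on held-out data (cross-fitting).
def mu0(x):     # outcome regression E[Y | A=0, X=x]
    return x

def pscore(x):  # propensity P(A=1 | X=x)
    return 0.2 + 0.6 * x

# Doubly-robust pseudo-outcome for Y under a = 0:
#   phi = mu0(X) + 1{A=0} / P(A=0 | X) * (Y - mu0(X))
pseudo = [mu0(x) + (1 - a) / (1 - pscore(x)) * (y - mu0(x))
          for x, a, y in rows]

# Regressing phi on X yields a counterfactual prediction model; here we
# just check the implied mean of Y under a = 0 against the truth.
est = sum(pseudo) / n
true_mean = 0.5  # E[X], since Y^{a=0} = X + mean-zero noise
print(f"DR estimate of E[Y^(a=0)] = {est:.3f} (truth {true_mean})")
```

The key design point is double robustness: the inverse-propensity correction term removes the bias of a misspecified outcome model, while the outcome model stabilizes the estimate when propensities are extreme.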
no code implementations • ICLR 2020 • George H. Chen, Linhong Li, Ren Zuo, Amanda Coston, Jeremy C. Weiss
The two approaches we propose differ in the generality of topic models they can learn.
1 code implementation • ICLR 2020 • Han Zhao, Amanda Coston, Tameem Adel, Geoffrey J. Gordon
We propose a novel algorithm for learning fair representations that can simultaneously mitigate two notions of disparity among different demographic subgroups in the classification setting.
1 code implementation • 30 Aug 2019 • Amanda Coston, Alan Mishler, Edward H. Kennedy, Alexandra Chouldechova
These tools thus reflect risk under the historical policy, rather than under the different decision options that the tool is intended to inform.
no code implementations • 21 Dec 2018 • Maria De-Arteaga, Amanda Coston, William Herlands
These are the proceedings of the NeurIPS 2018 Workshop on Machine Learning for the Developing World: Achieving Sustainable Impact, held in Montreal, Canada, on December 8, 2018.