no code implementations • 24 Aug 2023 • Avni Kothari, Bogdan Kulynych, Tsui-Wei Weng, Berk Ustun
In turn, models can assign predictions that are fixed, meaning that consumers who are denied loans, interviews, or benefits may be permanently locked out from access to credit, employment, or assistance.
no code implementations • 15 May 2023 • Jennifer Chien, Margaret Roberts, Berk Ustun
In applications like consumer finance, this results in groups of applicants that are persistently denied and thus never enter into the training data.
no code implementations • 8 Feb 2023 • Hailey James, Chirag Nagpal, Katherine Heller, Berk Ustun
These models use information about people, but neither facilitate nor inform their consent.
no code implementations • 4 Jun 2022 • Vinith M. Suriyakumar, Marzyeh Ghassemi, Berk Ustun
In this work, we show that models personalized with group attributes can reduce performance at a group level.
no code implementations • 2 Jun 2022 • Jamelle Watson-Daniels, David C. Parkes, Berk Ustun
We demonstrate the incidence and prevalence of predictive multiplicity in real-world tasks.
no code implementations • 29 Jan 2022 • Sejoon Oh, Berk Ustun, Julian McAuley, Srijan Kumar
We introduce a measure of stability for recommender systems, called Rank List Sensitivity (RLS), which measures how rank lists generated by a given recommender system at test time change as a result of a perturbation in the training data.
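The abstract does not give the exact definition of RLS, so the following is only an illustrative sketch: it quantifies instability as one minus the average top-k overlap (Jaccard similarity, a stand-in for whatever list-similarity the paper actually uses) between rank lists produced before and after a training-data perturbation.

```python
def jaccard_at_k(list_a, list_b, k):
    """Illustrative overlap between the top-k items of two rank lists."""
    top_a, top_b = set(list_a[:k]), set(list_b[:k])
    return len(top_a & top_b) / len(top_a | top_b)

def rank_list_sensitivity(original_lists, perturbed_lists, k=5):
    """Average top-k divergence between each user's rank list before and
    after a perturbation of the training data (hypothetical proxy for RLS)."""
    sims = [jaccard_at_k(a, b, k) for a, b in zip(original_lists, perturbed_lists)]
    return 1.0 - sum(sims) / len(sims)
```

A sensitivity of 0 means the perturbation left every top-k list unchanged; values near 1 mean the recommendations were rearranged almost entirely.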
1 code implementation • NeurIPS 2021 • Haoran Zhang, Quaid Morris, Berk Ustun, Marzyeh Ghassemi
Our results show that our method can fit simple predictive checklists that perform well and that can easily be customized to obey a rich class of custom constraints.
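A predictive checklist of this kind can be read as an M-of-N rule: predict positive when at least M of N binary items hold. The item names and threshold below are hypothetical, purely to show the decision rule's shape, not a checklist from the paper.

```python
# Hypothetical checklist: predict positive if at least M of these items hold.
ITEMS = ["symptom_a", "symptom_b", "symptom_c", "symptom_d"]
M = 2

def checklist_predict(x):
    """M-of-N decision rule over binary items (illustrative example)."""
    return sum(1 for item in ITEMS if x.get(item)) >= M
```

The custom constraints mentioned above would restrict which items may appear together or bound N and M; this sketch omits the fitting procedure entirely.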
3 code implementations • ICML 2020 • Charles T. Marx, Flavio du Pin Calmon, Berk Ustun
We apply our tools to measure predictive multiplicity in recidivism prediction problems.
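One common way to quantify predictive multiplicity, consistent with the idea above, is the fraction of examples that receive conflicting predictions from a set of competing near-optimal models. The measure below is an illustrative sketch of that idea, not necessarily the paper's exact definition.

```python
def ambiguity(models_preds):
    """Fraction of examples assigned conflicting predictions by a set of
    competing models (each inner list holds one model's predictions)."""
    n = len(models_preds[0])
    conflicting = sum(
        1 for i in range(n)
        if len({preds[i] for preds in models_preds}) > 1
    )
    return conflicting / n
```

An ambiguity of 0 means every competing model agrees on every example; any positive value means some individuals' outcomes depend on an arbitrary choice among equally good models.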
1 code implementation • 29 Jan 2019 • Hao Wang, Berk Ustun, Flavio P. Calmon
When the performance of a machine learning model varies over groups defined by sensitive attributes (e.g., gender or ethnicity), the performance disparity can be expressed in terms of the probability distributions of the input and output variables over each group.
3 code implementations • 18 Sep 2018 • Berk Ustun, Alexander Spangher, Yang Liu
We present integer programming tools to ensure recourse in linear classification problems without interfering in model development.
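To make the notion of recourse concrete: given a denied applicant, search for the cheapest feasible change to their features that flips a linear classifier's decision. The brute-force enumeration below is a stand-in for the paper's integer program, and the weights, features, and action sets are hypothetical.

```python
from itertools import product

# Hypothetical linear classifier: approve when w.x + b >= 0.
W = {"income": 2, "debt": -3}
B = -5

def approved(x):
    return sum(W[k] * x[k] for k in W) + B >= 0

def find_recourse(x, actions):
    """Enumerate feasible feature changes (brute force in place of the
    integer program) and return (cost, changes) for the cheapest flip."""
    best = None
    names = list(actions)
    for combo in product(*(actions[n] for n in names)):
        x_new = dict(x)
        cost = 0
        for name, delta in zip(names, combo):
            x_new[name] = x[name] + delta
            cost += abs(delta)
        if approved(x_new) and (best is None or cost < best[0]):
            best = (cost, dict(zip(names, combo)))
    return best
```

If `find_recourse` returns `None` for some applicant under every feasible action set, that applicant has no recourse, which is exactly the condition the integer programming tools are meant to detect and prevent.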
no code implementations • 16 Jan 2018 • Hao Wang, Berk Ustun, Flavio P. Calmon
In the context of machine learning, disparate impact refers to a form of systematic discrimination whereby the output distribution of a model depends on the value of a sensitive attribute (e.g., race or gender).
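One simple way to quantify such dependence of the output distribution on a sensitive attribute is the gap in positive-prediction rates between groups (the demographic parity difference); this is an illustrative measure, not necessarily the one the paper analyzes.

```python
def disparity(preds, groups):
    """Largest gap in positive-prediction rate across groups, given binary
    predictions and each example's group label (illustrative measure)."""
    rate = {}
    for g in set(groups):
        group_preds = [p for p, gg in zip(preds, groups) if gg == g]
        rate[g] = sum(group_preds) / len(group_preds)
    vals = list(rate.values())
    return max(vals) - min(vals)
```

A disparity of 0 means the model's positive rate is identical across groups; larger values indicate the output distribution shifts with the sensitive attribute.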
2 code implementations • 1 Oct 2016 • Berk Ustun, Cynthia Rudin
Risk scores are simple classification models that let users make quick risk predictions by adding and subtracting a few small numbers.
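A risk score of this form tallies small integer points for each condition and maps the total to a probability, typically through the logistic function. The conditions, point values, and intercept below are invented for illustration; they are not a score from the paper.

```python
import math

# Hypothetical point assignments (not from the paper).
POINTS = {"age_ge_60": 2, "prior_event": 3, "abnormal_lab": 1}
INTERCEPT = -4  # hypothetical offset

def risk_score(patient):
    """Sum small integer points, then convert to a probability via the
    logistic function (a common convention for risk scores)."""
    score = INTERCEPT + sum(p for name, p in POINTS.items() if patient.get(name))
    return 1.0 / (1.0 + math.exp(-score))
```

The appeal is that a user can compute the integer total by hand and look the probability up in a short table, which is what makes these models usable at the point of decision.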
no code implementations • 26 Mar 2015 • Jiaming Zeng, Berk Ustun, Cynthia Rudin
We investigate a long-debated question: how to create predictive models of recidivism that are sufficiently accurate, transparent, and interpretable to use for decision-making.
2 code implementations • 15 Feb 2015 • Berk Ustun, Cynthia Rudin
Scoring systems are linear classification models that only require users to add, subtract and multiply a few small numbers in order to make a prediction.
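A scoring system in this sense classifies by a signed total of small integer coefficients times feature values, compared to a threshold. The coefficients and threshold below are hypothetical, chosen only to show the arithmetic a user would do by hand.

```python
# Hypothetical scoring system with small integer coefficients (illustration only).
COEFS = {"feature_a": 2, "feature_b": -3, "feature_c": 1}
THRESHOLD = 1  # hypothetical decision threshold

def predict(x):
    """Multiply each feature by its small integer coefficient, add the
    results, and compare the total to the threshold."""
    score = sum(c * x.get(name, 0) for name, c in COEFS.items())
    return +1 if score >= THRESHOLD else -1
```

Because every operation fits on paper, a decision-maker can audit exactly why any individual was classified a given way.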
no code implementations • 16 May 2014 • Berk Ustun, Cynthia Rudin
We present an integer programming framework to build accurate and interpretable discrete linear classification models.
no code implementations • 27 Jun 2013 • Berk Ustun, Stefano Tracà, Cynthia Rudin
We illustrate the practical and interpretable nature of SLIM scoring systems through applications in medicine and criminology, and use numerical experiments to show that they are accurate and sparse in comparison to state-of-the-art classification models.
no code implementations • 25 Jun 2013 • Berk Ustun, Stefano Tracà, Cynthia Rudin
We introduce Supersparse Linear Integer Models (SLIM) as a tool to create scoring systems for binary classification.