In turn, models can assign predictions that are fixed, meaning that consumers who are denied loans, interviews, or benefits may be permanently locked out of credit, employment, or assistance.
In this work, we show that models personalized with group attributes can reduce performance at the group level.
We introduce a measure of stability for recommender systems, called Rank List Sensitivity (RLS), which measures how rank lists generated by a given recommender system at test time change as a result of a perturbation in the training data.
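One plausible instantiation of such a stability measure (a sketch only; the top-k overlap function and the perturbation shown here are illustrative choices, not the paper's exact definition of RLS) compares each user's rank list before and after a training-data perturbation:

```python
def jaccard_at_k(list_a, list_b, k):
    """Overlap between the top-k items of two rank lists (0 = disjoint, 1 = same set)."""
    top_a, top_b = set(list_a[:k]), set(list_b[:k])
    return len(top_a & top_b) / len(top_a | top_b)

def rank_list_sensitivity(lists_before, lists_after, k):
    """Average top-k overlap across users' rank lists; lower overlap = higher sensitivity."""
    overlaps = [jaccard_at_k(a, b, k) for a, b in zip(lists_before, lists_after)]
    return 1.0 - sum(overlaps) / len(overlaps)

# Rank lists for two users, before and after removing one training interaction.
before = [["i1", "i2", "i3", "i4"], ["i9", "i8", "i7", "i6"]]
after  = [["i1", "i3", "i2", "i5"], ["i9", "i8", "i6", "i7"]]
sensitivity = rank_list_sensitivity(before, after, k=3)  # 0.25: lists changed modestly
```

Set-based overlap ignores ordering within the top k; a rank-weighted similarity would penalize reorderings as well.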
Our results show that our method can fit simple predictive checklists that perform well and that can easily be customized to obey a rich class of custom constraints.
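A predictive checklist predicts the positive class when at least M of N yes/no items are checked. The sketch below uses hypothetical items and a hypothetical threshold purely for illustration; it is not a checklist fitted by the method:

```python
# A predictive checklist: predict positive if at least M of N yes/no items hold.
# Items and threshold are hypothetical, for illustration only.
CHECKLIST = [
    ("age >= 60",          lambda p: p["age"] >= 60),
    ("systolic_bp >= 140", lambda p: p["systolic_bp"] >= 140),
    ("smoker",             lambda p: p["smoker"]),
]
THRESHOLD = 2  # M: minimum number of checked items for a positive prediction

def predict_checklist(patient):
    checked = sum(1 for _, item in CHECKLIST if item(patient))
    return checked >= THRESHOLD

patient = {"age": 65, "systolic_bp": 150, "smoker": False}
# Two of the three items hold, so the checklist predicts positive.
```

Custom constraints (e.g., "use at most one item per feature group") restrict which item sets are admissible when the checklist is fit.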
When the performance of a machine learning model varies over groups defined by sensitive attributes (e.g., gender or ethnicity), the performance disparity can be expressed in terms of the probability distributions of the input and output variables over each group.
We present integer programming tools to ensure recourse in linear classification problems without interfering in model development.
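Recourse means that a person who is denied can change actionable features to flip the decision. The sketch below checks feasibility by brute-force enumeration over a small action set; the weights, features, and candidate actions are all hypothetical, and the paper's tools solve this with integer programming rather than enumeration:

```python
from itertools import product

# Linear classifier: approve if w . x + b >= 0. Weights are illustrative.
w = {"income": 2.0, "debt": -1.5, "age": 0.1}
b = -6.0

def score(x):
    return sum(w[f] * x[f] for f in w) + b

def has_recourse(x, actions):
    """Brute-force feasibility check: can any combination of allowed changes
    to actionable features flip a denial into an approval?
    `actions` maps each actionable feature to its candidate new values."""
    features = list(actions)
    for values in product(*(actions[f] for f in features)):
        candidate = dict(x)
        candidate.update(zip(features, values))
        if score(candidate) >= 0:
            return candidate  # a feasible action exists
    return None

person = {"income": 1.0, "debt": 1.0, "age": 40}   # currently denied
fix = has_recourse(person, {"income": [1.0, 2.0, 3.0], "debt": [0.0, 1.0]})
```

Note that immutable attributes such as age are excluded from the action set: recourse must be achievable through changes the person can actually make.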
In the context of machine learning, disparate impact refers to a form of systematic discrimination whereby the output distribution of a model depends on the value of a sensitive attribute (e.g., race or gender).
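One common way to quantify this dependence (a sketch, using made-up predictions; this is the demographic parity gap, one of several possible disparity metrics) is the difference in positive-prediction rates across groups:

```python
def positive_rate(preds):
    return sum(preds) / len(preds)

def demographic_parity_gap(preds_by_group):
    """Largest difference in positive-prediction rate between any two groups.
    A gap of 0 means the model's output distribution does not depend on the group."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical binary predictions (1 = favorable outcome) for two groups.
preds = {"group_a": [1, 1, 1, 0], "group_b": [1, 0, 0, 0]}
gap = demographic_parity_gap(preds)  # 0.75 - 0.25 = 0.5
```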
We investigate a long-debated question, which is how to create predictive models of recidivism that are sufficiently accurate, transparent, and interpretable to use for decision-making.
Scoring systems are linear classification models that only require users to add, subtract, and multiply a few small numbers in order to make a prediction.
We illustrate the practical and interpretable nature of SLIM scoring systems through applications in medicine and criminology, and use numerical experiments to show that they are accurate and sparse in comparison to state-of-the-art classification models.
We introduce Supersparse Linear Integer Models (SLIM) as a tool to create scoring systems for binary classification.
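A scoring system assigns small integer points per feature; the user sums the points by hand and compares the total to a threshold. The points, items, and threshold below are hypothetical for illustration, not a model fitted by SLIM:

```python
# A scoring system: integer points per item; sum the points, compare to a threshold.
# Points and threshold are hypothetical, not a fitted SLIM model.
SCORECARD = [
    ("age >= 75",    lambda x: x["age"] >= 75,    2),
    ("hypertension", lambda x: x["hypertension"], 1),
    ("prior_stroke", lambda x: x["prior_stroke"], 3),
]
THRESHOLD = 3

def total_score(x):
    return sum(points for _, rule, points in SCORECARD if rule(x))

def predict_score(x):
    # A user can reproduce this prediction by adding a few small numbers by hand.
    return total_score(x) >= THRESHOLD

patient = {"age": 80, "hypertension": True, "prior_stroke": False}
# 2 + 1 = 3 points, which meets the threshold of 3.
```

Sparsity (few items) and small integer points are what make such models auditable: the entire decision rule fits on an index card.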