Search Results for author: Berk Ustun

Found 20 papers, 6 papers with code

Predictive Churn with the Set of Good Models

no code implementations · 12 Feb 2024 · Jamelle Watson-Daniels, Flavio du Pin Calmon, Alexander D'Amour, Carol Long, David C. Parkes, Berk Ustun

We characterize expected churn over model updates via the Rashomon set, pairing our analysis with empirical results on real-world datasets that show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications.
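
The paper works with the Rashomon set of near-optimal models; for reference, churn itself is typically measured as the fraction of examples whose predictions change across a model update. A minimal sketch (the function name is ours):

```python
import numpy as np

def predictive_churn(preds_old, preds_new):
    """Fraction of examples whose predicted label changes across a model update."""
    preds_old, preds_new = np.asarray(preds_old), np.asarray(preds_new)
    return float(np.mean(preds_old != preds_new))

# Two model versions disagree on 2 of 5 examples, so churn = 0.4.
print(predictive_churn([1, 0, 1, 1, 0], [1, 1, 1, 0, 0]))
```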

Learning from Time Series under Temporal Label Noise

no code implementations · 6 Feb 2024 · Sujay Nagaraj, Walter Gerych, Sana Tonekaboni, Anna Goldenberg, Berk Ustun, Thomas Hartvigsen

We first demonstrate the importance of modelling the temporal nature of the label noise function and show that existing methods consistently underperform.
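
As a toy illustration of the setting, a sketch that corrupts a binary label sequence with a time-varying flip probability; the interface (and the assumption of binary labels) is ours, not the paper's notation:

```python
import numpy as np

def apply_temporal_label_noise(y, flip_prob, rng=None):
    """Corrupt a binary label sequence with time-varying noise:
    flip_prob(t) is the probability the label at step t is flipped."""
    if rng is None:
        rng = np.random.default_rng(0)
    y = np.asarray(y)
    p = np.array([flip_prob(t) for t in range(len(y))])
    return np.where(rng.random(len(y)) < p, 1 - y, y)

# Noise that worsens over time: flip probability ramps from 0.0 to 0.4.
noisy = apply_temporal_label_noise([0, 1, 1, 0, 1], lambda t: 0.1 * t)
```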

Time Series

FINEST: Stabilizing Recommendations by Rank-Preserving Fine-Tuning

no code implementations · 5 Feb 2024 · Sejoon Oh, Berk Ustun, Julian McAuley, Srijan Kumar

Modern recommender systems may output considerably different recommendations due to small perturbations in the training data.

Recommendation Systems

Prediction without Preclusion: Recourse Verification with Reachable Sets

no code implementations · 24 Aug 2023 · Avni Kothari, Bogdan Kulynych, Tsui-Wei Weng, Berk Ustun

In turn, models can assign predictions that are fixed, meaning that consumers who are denied loans, interviews, or benefits may be permanently locked out from access to credit, employment, or assistance.
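
When an individual's reachable set of feature vectors is small enough to enumerate, the "fixed prediction" condition can be checked directly. A minimal sketch (interface ours; the paper develops recourse verification formally rather than by brute force):

```python
def prediction_is_fixed(predict, reachable_points):
    """A denied individual's prediction is 'fixed' if no point in their
    reachable set receives a positive prediction (predict returns 1 for
    approval). Assumes the reachable set is finite and enumerable."""
    return not any(predict(x) == 1 for x in reachable_points)
```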

Adversarial Robustness

Algorithmic Censoring in Dynamic Learning Systems

no code implementations · 15 May 2023 · Jennifer Chien, Margaret Roberts, Berk Ustun

In applications like consumer finance, this results in groups of applicants that are persistently denied and thus never enter into the training data.
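
A toy sketch of this censoring dynamic, assuming a hypothetical approve/outcome interface of our own: labels are only observed for approved applicants, so denied applicants never reach the training set.

```python
def collect_training_data(applicants, approve, outcome, train_X, train_y):
    """One round of data collection under censoring: the outcome label is
    only observed when an applicant is approved, so persistently denied
    applicants never enter the data future models are trained on."""
    for x in applicants:
        if approve(x):
            train_X.append(x)
            train_y.append(outcome(x))  # label observed only on approval
    return train_X, train_y
```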

When Personalization Harms: Reconsidering the Use of Group Attributes in Prediction

no code implementations · 4 Jun 2022 · Vinith M. Suriyakumar, Marzyeh Ghassemi, Berk Ustun

In this work, we show that models personalized with group attributes can reduce performance at a group level.

Predictive Multiplicity in Probabilistic Classification

no code implementations · 2 Jun 2022 · Jamelle Watson-Daniels, David C. Parkes, Berk Ustun

We demonstrate the incidence and prevalence of predictive multiplicity in real-world tasks.
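
The paper defines formal measures of predictive multiplicity for probabilistic classification; as a rough illustration only, a sketch that flags examples whose risk estimates disagree across a set of competing good models (the threshold eps and the function name are ours):

```python
import numpy as np

def multiplicity_rate(prob_matrix, eps=0.1):
    """prob_matrix has shape (n_models, n_examples), holding risk estimates
    from a set of competing near-optimal models. Returns the fraction of
    examples whose estimates vary by more than eps across the set."""
    spread = prob_matrix.max(axis=0) - prob_matrix.min(axis=0)
    return float(np.mean(spread > eps))
```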

Classification

Rank List Sensitivity of Recommender Systems to Interaction Perturbations

no code implementations · 29 Jan 2022 · Sejoon Oh, Berk Ustun, Julian McAuley, Srijan Kumar

We introduce a measure of stability for recommender systems, called Rank List Sensitivity (RLS), which measures how rank lists generated by a given recommender system at test time change as a result of a perturbation in the training data.
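
The paper instantiates RLS with specific rank-list similarity measures; as one plausible instance, a sketch using top-k Jaccard similarity (function names and the choice of Jaccard are ours):

```python
def topk_jaccard(list_a, list_b, k=10):
    """Jaccard similarity between two top-k recommendation lists."""
    a, b = set(list_a[:k]), set(list_b[:k])
    return len(a & b) / len(a | b)

def rank_list_instability(lists_before, lists_after, k=10):
    """Average dissimilarity of per-user rank lists before vs. after a
    training-data perturbation; higher means a less stable system."""
    sims = [topk_jaccard(a, b, k) for a, b in zip(lists_before, lists_after)]
    return 1.0 - sum(sims) / len(sims)
```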

Recommendation Systems

Learning Optimal Predictive Checklists

1 code implementation · NeurIPS 2021 · Haoran Zhang, Quaid Morris, Berk Ustun, Marzyeh Ghassemi

Our results show that our method can fit simple predictive checklists that perform well and that can easily be customized to obey a rich class of constraints.
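
For concreteness, a predictive checklist of the M-of-N form the paper learns; the items and threshold below are invented for illustration:

```python
# Predict positive when at least M of the N binary checklist items are checked.
CHECKLIST = ["item_1", "item_2", "item_3", "item_4"]
M = 2  # decision threshold (illustrative)

def checklist_predict(x):
    """x maps item names to booleans."""
    return int(sum(bool(x[item]) for item in CHECKLIST) >= M)

print(checklist_predict({"item_1": True, "item_2": False,
                         "item_3": True, "item_4": False}))  # -> 1
```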

Fairness

Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions

1 code implementation · 29 Jan 2019 · Hao Wang, Berk Ustun, Flavio P. Calmon

When the performance of a machine learning model varies over groups defined by sensitive attributes (e.g., gender or ethnicity), the performance disparity can be expressed in terms of the probability distributions of the input and output variables over each group.
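
One simple disparity of this kind is the gap in accuracy between groups; a sketch (function name ours, and only one of many ways to express the disparity the abstract refers to):

```python
import numpy as np

def accuracy_disparity(y_true, y_pred, group):
    """Gap in accuracy between the two groups defined by a binary
    sensitive attribute."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    accs = [float(np.mean(y_pred[group == g] == y_true[group == g]))
            for g in (0, 1)]
    return abs(accs[0] - accs[1])
```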

Counterfactual, Fairness

Actionable Recourse in Linear Classification

3 code implementations · 18 Sep 2018 · Berk Ustun, Alexander Spangher, Yang Liu

We present integer programming tools to ensure recourse in linear classification problems without interfering in model development.
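
The paper's tools are integer programs; purely as an illustration of the underlying search problem, a brute-force sketch that finds a lowest-cost action flipping a linear classifier's prediction (all names and the candidate-grid interface are ours):

```python
import numpy as np
from itertools import product

def cheapest_recourse(x, w, b, actions):
    """Recourse for a linear classifier sign(w @ x + b). `actions` maps each
    mutable feature index to a list of candidate changes (include 0). Returns
    the lowest-L1-cost action that flips a negative prediction to positive,
    or None if nothing in the grid works. The paper solves this with integer
    programming rather than enumeration."""
    idx = list(actions)
    best, best_cost = None, float("inf")
    for deltas in product(*(actions[i] for i in idx)):
        a = np.zeros_like(x, dtype=float)
        for i, d in zip(idx, deltas):
            a[i] = d
        if w @ (x + a) + b > 0:        # prediction flips to positive
            cost = float(np.abs(a).sum())
            if cost < best_cost:
                best, best_cost = a, cost
    return best

# Hypothetical example: raise features 1 and/or 2 to cross the boundary.
x = np.array([1.0, 0.0, 0.0])
w, b = np.array([0.5, 1.0, 1.0]), -2.0
print(cheapest_recourse(x, w, b, {1: [0, 1, 2], 2: [0, 1, 2]}))  # [0., 0., 2.]
```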

Classification, Credit score +2

On the Direction of Discrimination: An Information-Theoretic Analysis of Disparate Impact in Machine Learning

no code implementations · 16 Jan 2018 · Hao Wang, Berk Ustun, Flavio P. Calmon

In the context of machine learning, disparate impact refers to a form of systematic discrimination whereby the output distribution of a model depends on the value of a sensitive attribute (e.g., race or gender).
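
The paper's analysis is information-theoretic; a much cruder proxy for the same dependence is the gap in positive-prediction rates across attribute values (a toy statistic, not the paper's measure):

```python
import numpy as np

def statistical_parity_gap(y_pred, attr):
    """Gap between positive-prediction rates across the two values of a
    binary sensitive attribute: nonzero when the output distribution
    depends on the attribute."""
    y_pred, attr = np.asarray(y_pred), np.asarray(attr)
    return abs(float(y_pred[attr == 0].mean()) - float(y_pred[attr == 1].mean()))
```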

Attribute, BIG-bench Machine Learning +1

Learning Optimized Risk Scores

2 code implementations · 1 Oct 2016 · Berk Ustun, Cynthia Rudin

Risk scores are simple classification models that let users make quick risk predictions by adding and subtracting a few small numbers.
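
A hypothetical risk score of the kind described (point values and intercept invented for illustration): users sum a few small integer points, and the total maps to a risk estimate through a logistic link.

```python
import math

POINTS = {"age_ge_60": 2, "prior_event": 3, "smoker": 1}  # illustrative points
INTERCEPT = -4  # illustrative offset

def predicted_risk(patient):
    """Sum the points for the features a patient has, then convert the
    total score to a probability with a logistic link."""
    score = INTERCEPT + sum(POINTS[f] for f in POINTS if patient.get(f))
    return 1.0 / (1.0 + math.exp(-score))

print(predicted_risk({"age_ge_60": True, "prior_event": True}))  # score 1 -> ~0.73
```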

Seizure prediction

Interpretable Classification Models for Recidivism Prediction

no code implementations · 26 Mar 2015 · Jiaming Zeng, Berk Ustun, Cynthia Rudin

We investigate a long-debated question: how to create predictive models of recidivism that are sufficiently accurate, transparent, and interpretable to use for decision-making.

BIG-bench Machine Learning, Classification +2

Supersparse Linear Integer Models for Optimized Medical Scoring Systems

2 code implementations · 15 Feb 2015 · Berk Ustun, Cynthia Rudin

Scoring systems are linear classification models that only require users to add, subtract and multiply a few small numbers in order to make a prediction.
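
For contrast with the risk score above, a SLIM-style scoring system used directly for classification (integer coefficients invented for illustration): multiply-and-add a few small numbers, then predict +1 when the score clears a threshold.

```python
WEIGHTS = {"count_a": 2, "count_b": -1, "flag_c": 3}  # illustrative integer weights
THRESHOLD = 1

def predict(x):
    """x maps feature names to values; returns +1 or -1."""
    score = sum(WEIGHTS[name] * x[name] for name in WEIGHTS)
    return +1 if score > THRESHOLD else -1

print(predict({"count_a": 1, "count_b": 2, "flag_c": 1}))  # score 3 -> +1
```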

Interpretable Machine Learning

Methods and Models for Interpretable Linear Classification

no code implementations · 16 May 2014 · Berk Ustun, Cynthia Rudin

We present an integer programming framework to build accurate and interpretable discrete linear classification models.

Classification, General Classification

Supersparse Linear Integer Models for Interpretable Classification

no code implementations · 27 Jun 2013 · Berk Ustun, Stefano Tracà, Cynthia Rudin

We illustrate the practical and interpretable nature of SLIM scoring systems through applications in medicine and criminology, and use numerical experiments to show that they are accurate and sparse in comparison to state-of-the-art classification models.

Classification, General Classification

Supersparse Linear Integer Models for Predictive Scoring Systems

no code implementations · 25 Jun 2013 · Berk Ustun, Stefano Tracà, Cynthia Rudin

We introduce Supersparse Linear Integer Models (SLIM) as a tool to create scoring systems for binary classification.

Binary Classification, Classification +1
