no code implementations • 3 Mar 2024 • Hyewon Jeong, Sarah Jabbour, Yuzhe Yang, Rahul Thapta, Hussein Mozannar, William Jongwon Han, Nikita Mehandru, Michael Wornow, Vladislav Lialin, Xin Liu, Alejandro Lozano, Jiacheng Zhu, Rafal Dariusz Kocielnik, Keith Harrigian, Haoran Zhang, Edward Lee, Milos Vukadinovic, Aparna Balagopalan, Vincent Jeanselme, Katherine Matton, Ilker Demirel, Jason Fries, Parisa Rashidi, Brett Beaulieu-Jones, Xuhai Orson Xu, Matthew McDermott, Tristan Naumann, Monica Agrawal, Marinka Zitnik, Berk Ustun, Edward Choi, Kristen Yeom, Gamze Gursoy, Marzyeh Ghassemi, Emma Pierson, George Chen, Sanjat Kanjilal, Michael Oberst, Linying Zhang, Harvineet Singh, Tom Hartvigsen, Helen Zhou, Chinasa T. Okolo
The organization of the research roundtables at the conference involved 17 Senior Chairs and 19 Junior Chairs across 11 tables.
no code implementations • 12 Feb 2024 • Jamelle Watson-Daniels, Flavio du Pin Calmon, Alexander D'Amour, Carol Long, David C. Parkes, Berk Ustun
We characterize expected churn over model updates via the Rashomon set, pairing our analysis with empirical results on real-world datasets to show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications.
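Churn here refers to points whose predicted label flips when a model is retrained or updated. A minimal sketch of the quantity being studied (the function name and example vectors are illustrative, not from the paper):

```python
import numpy as np

def churn(preds_old, preds_new):
    """Fraction of points whose predicted label changes across a model update."""
    preds_old = np.asarray(preds_old)
    preds_new = np.asarray(preds_new)
    return float(np.mean(preds_old != preds_new))

# Two hypothetical prediction vectors before/after an update.
before = [1, 0, 1, 1, 0, 0]
after  = [1, 1, 1, 0, 0, 0]
print(churn(before, after))  # 2 of 6 labels flip -> 0.333...
```

The paper's contribution is characterizing how large this quantity can be over the Rashomon set of near-optimal models, not merely measuring it after the fact.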
no code implementations • 6 Feb 2024 • Sujay Nagaraj, Walter Gerych, Sana Tonekaboni, Anna Goldenberg, Berk Ustun, Thomas Hartvigsen
We first demonstrate the importance of modelling the temporal nature of the label noise function and show that existing methods consistently underperform when this temporal structure is ignored.
no code implementations • 5 Feb 2024 • Sejoon Oh, Berk Ustun, Julian McAuley, Srijan Kumar
Modern recommender systems may output considerably different recommendations due to small perturbations in the training data.
1 code implementation • 24 Aug 2023 • Avni Kothari, Bogdan Kulynych, Tsui-Wei Weng, Berk Ustun
As a result, they can assign predictions that are fixed, meaning that individuals who are denied loans and interviews are, in fact, precluded from access to credit and employment.
no code implementations • 15 May 2023 • Jennifer Chien, Margaret Roberts, Berk Ustun
In applications like consumer finance, this results in groups of applicants that are persistently denied and thus never enter into the training data.
no code implementations • 4 Jun 2022 • Vinith M. Suriyakumar, Marzyeh Ghassemi, Berk Ustun
In this work, we show models that are personalized with group attributes can reduce performance at a group level.
no code implementations • 2 Jun 2022 • Jamelle Watson-Daniels, David C. Parkes, Berk Ustun
We demonstrate the incidence and prevalence of predictive multiplicity in real-world tasks.
no code implementations • 29 Jan 2022 • Sejoon Oh, Berk Ustun, Julian McAuley, Srijan Kumar
We introduce a measure of stability for recommender systems, called Rank List Sensitivity (RLS), which measures how rank lists generated by a given recommender system at test time change as a result of a perturbation in the training data.
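The paper defines RLS in terms of similarity between rank lists before and after a training-data perturbation. As an illustrative stand-in (the paper's actual similarity measures may differ), top-k Jaccard overlap captures the idea:

```python
def topk_jaccard(list_a, list_b, k):
    """Jaccard overlap of the top-k items of two rank lists (1 = identical sets)."""
    a, b = set(list_a[:k]), set(list_b[:k])
    return len(a & b) / len(a | b)

def rank_list_sensitivity(lists_orig, lists_perturbed, k=3):
    """Average top-k instability across users: higher = less stable recommender.
    An illustrative sketch, not the paper's exact RLS definition."""
    sims = [topk_jaccard(x, y, k) for x, y in zip(lists_orig, lists_perturbed)]
    return 1.0 - sum(sims) / len(sims)

# Hypothetical top-4 lists for one user before/after a one-point perturbation.
orig = [["a", "b", "c", "d"]]
pert = [["a", "c", "e", "b"]]
print(rank_list_sensitivity(orig, pert, k=3))  # top-3 Jaccard 0.5 -> sensitivity 0.5
```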
1 code implementation • NeurIPS 2021 • Haoran Zhang, Quaid Morris, Berk Ustun, Marzyeh Ghassemi
Our results show that our method can fit simple predictive checklists that perform well and that can easily be customized to obey a rich class of custom constraints.
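A predictive checklist is an M-of-N rule: predict positive when at least M of N selected Boolean items are checked. A minimal sketch of prediction with such a model (item indices and the threshold here are hypothetical; the paper's contribution is learning them under custom constraints):

```python
import numpy as np

def checklist_predict(X_bool, item_idx, threshold):
    """Predict 1 if at least `threshold` of the selected Boolean items are true."""
    X_bool = np.asarray(X_bool, dtype=bool)
    counts = X_bool[:, item_idx].sum(axis=1)
    return (counts >= threshold).astype(int)

# Hypothetical 3-item checklist over 5 binary features; flag if >= 2 items checked.
X = [[1, 0, 1, 0, 1],
     [0, 0, 1, 0, 0],
     [1, 1, 1, 1, 1]]
print(checklist_predict(X, item_idx=[0, 2, 4], threshold=2))  # [1 0 1]
```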
3 code implementations • ICML 2020 • Charles T. Marx, Flavio du Pin Calmon, Berk Ustun
We apply our tools to measure predictive multiplicity in recidivism prediction problems.
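Predictive multiplicity arises when models with near-identical accuracy assign conflicting predictions. One of the quantities in this line of work, ambiguity, is the fraction of points contested by at least one competing near-optimal model; a sketch (names and data here are illustrative):

```python
import numpy as np

def ambiguity(baseline_preds, competing_preds):
    """Fraction of points on which at least one competing
    (near-optimal) model disagrees with the baseline model."""
    base = np.asarray(baseline_preds)
    comp = np.asarray(competing_preds)  # shape (n_models, n_points)
    return float(np.mean(np.any(comp != base, axis=0)))

base = [1, 0, 1, 0]
rivals = [[1, 0, 0, 0],
          [1, 1, 1, 0]]
print(ambiguity(base, rivals))  # points 1 and 2 are contested -> 0.5
```

The hard part, which the paper addresses with dedicated tools, is searching the set of near-optimal models rather than being handed them.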
1 code implementation • 29 Jan 2019 • Hao Wang, Berk Ustun, Flavio P. Calmon
When the performance of a machine learning model varies over groups defined by sensitive attributes (e.g., gender or ethnicity), the performance disparity can be expressed in terms of the probability distributions of the input and output variables over each group.
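As a concrete instance of the kind of disparity being discussed, the gap in accuracy between groups can be computed directly (this helper and its example data are illustrative, not from the paper):

```python
import numpy as np

def accuracy_disparity(y_true, y_pred, group):
    """Largest gap in accuracy between any two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    accs = [np.mean(y_pred[group == g] == y_true[group == g])
            for g in np.unique(group)]
    return float(max(accs) - min(accs))

# Group 'a' is classified perfectly; group 'b' at 50% accuracy.
print(accuracy_disparity([1, 0, 1, 0], [1, 0, 0, 0], ["a", "a", "b", "b"]))  # 0.5
```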
3 code implementations • 18 Sep 2018 • Berk Ustun, Alexander Spangher, Yang Liu
We present integer programming tools to ensure recourse in linear classification problems without interfering in model development.
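Recourse means a denied individual can flip the model's decision by changing actionable features. The paper solves this with integer programming over structured action sets; the brute-force sketch below illustrates only the simplest case, a single-feature change for a linear classifier (all names and numbers are hypothetical):

```python
import numpy as np

def single_feature_recourse(w, b, x, actionable):
    """Smallest change to one actionable feature that moves a linear
    classifier's score w @ x + b from negative to the decision boundary.
    A brute-force stand-in for the paper's integer-programming search."""
    score = np.dot(w, x) + b
    if score >= 0:
        return None  # already approved, no recourse needed
    best = None
    for j in actionable:
        if w[j] == 0:
            continue  # changing this feature cannot move the score
        delta = -score / w[j]  # change in x[j] that lands exactly on the boundary
        if best is None or abs(delta) < abs(best[1]):
            best = (j, delta)
    return best

w = np.array([2.0, -1.0, 0.5])
b = -3.0
x = np.array([1.0, 2.0, 0.0])   # score = 2 - 2 + 0 - 3 = -3 (denied)
print(single_feature_recourse(w, b, x, actionable=[0, 2]))  # (0, 1.5)
```

The integer-programming formulation generalizes this search to multi-feature actions, discrete features, and actionability constraints, and can certify when no recourse exists.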
no code implementations • 16 Jan 2018 • Hao Wang, Berk Ustun, Flavio P. Calmon
In the context of machine learning, disparate impact refers to a form of systematic discrimination whereby the output distribution of a model depends on the value of a sensitive attribute (e.g., race or gender).
2 code implementations • 1 Oct 2016 • Berk Ustun, Cynthia Rudin
Risk scores are simple classification models that let users make quick risk predictions by adding and subtracting a few small numbers.
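A minimal sketch of how such a risk score is used at prediction time, assuming a logistic link from total score to risk (one common convention; the point values and intercept here are made up):

```python
import math

def risk_score(points, features, intercept=0):
    """Total score = intercept + sum of small integer points for present
    features; risk is read off with a logistic link (one common convention)."""
    score = intercept + sum(p for p, present in zip(points, features) if present)
    return score, 1.0 / (1.0 + math.exp(-score))

# Hypothetical 3-item risk score: +2, +1, -1 points for three binary features.
score, risk = risk_score([2, 1, -1], [1, 1, 0], intercept=-2)
print(score, round(risk, 3))  # score 1 -> risk ~ 0.731
```

The research problem is learning the point values so that the resulting tiny-integer model remains accurate and well-calibrated.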
no code implementations • 26 Mar 2015 • Jiaming Zeng, Berk Ustun, Cynthia Rudin
We investigate a long-debated question, which is how to create predictive models of recidivism that are sufficiently accurate, transparent, and interpretable to use for decision-making.
2 code implementations • 15 Feb 2015 • Berk Ustun, Cynthia Rudin
Scoring systems are linear classification models that only require users to add, subtract and multiply a few small numbers in order to make a prediction.
no code implementations • 16 May 2014 • Berk Ustun, Cynthia Rudin
We present an integer programming framework to build accurate and interpretable discrete linear classification models.
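For intuition only, the discrete model class can be explored by exhaustive search over small integer coefficients; this toy stand-in (not the paper's method, which uses an integer program with a proper objective and constraints) shows what "discrete linear classification" means:

```python
import itertools
import numpy as np

def fit_discrete_linear(X, y, coef_range=range(-2, 3)):
    """Exhaustive stand-in for an integer-programming fit: search small
    integer coefficient vectors and keep the most accurate (then sparsest)."""
    X, y = np.asarray(X), np.asarray(y)
    best = None
    for coefs in itertools.product(coef_range, repeat=X.shape[1]):
        w = np.array(coefs)
        acc = np.mean(np.sign(X @ w) == y)       # labels in {-1, +1}
        key = (acc, -np.count_nonzero(w))        # prefer accuracy, then sparsity
        if best is None or key > best[0]:
            best = (key, w)
    return best[1], float(best[0][0])

# Toy data: the label simply follows the first feature.
X = [[1, 1], [1, -1], [-1, 1], [-1, -1]]
y = [1, 1, -1, -1]
w, acc = fit_discrete_linear(X, y)
print(w, acc)  # a sparse integer model, e.g. [1 0], with accuracy 1.0
```

An integer program reaches the same kind of model class without enumerating it, which is what makes larger feature sets tractable.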
no code implementations • 27 Jun 2013 • Berk Ustun, Stefano Tracà, Cynthia Rudin
We illustrate the practical and interpretable nature of SLIM scoring systems through applications in medicine and criminology, and show through numerical experiments that they are accurate and sparse in comparison to state-of-the-art classification models.
no code implementations • 25 Jun 2013 • Berk Ustun, Stefano Tracà, Cynthia Rudin
We introduce Supersparse Linear Integer Models (SLIM) as a tool to create scoring systems for binary classification.