1 code implementation • NeurIPS 2023 • Jiachang Liu, Sam Rosen, Chudi Zhong, Cynthia Rudin
We consider an important problem in scientific discovery, namely identifying sparse governing equations for nonlinear dynamical systems.
1 code implementation • NeurIPS 2023 • Chudi Zhong, Zhi Chen, Jiachang Liu, Margo Seltzer, Cynthia Rudin
In real applications, interaction between machine learning models and domain experts is critical; however, the classical machine learning paradigm, which usually produces only a single model, does not facilitate such interaction.
1 code implementation • 12 Oct 2022 • Jiachang Liu, Chudi Zhong, Boxuan Li, Margo Seltzer, Cynthia Rudin
Specifically, our approach produces a pool of almost-optimal sparse continuous solutions, each with a different support set, using a beam-search algorithm.
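The support-set beam search described above can be illustrated with a minimal sketch (not the paper's implementation; all names, the least-squares objective, and the near-optimality tolerance are illustrative): each beam member is a feature support set, each level tries adding one feature to each member, and supports whose loss is within a tolerance of the best are collected into a solution pool.

```python
# Minimal sketch of a beam search over feature support sets for sparse
# regression. Assumptions (not from the paper): squared loss fit by
# least squares, multiplicative near-optimality tolerance `epsilon`.
import numpy as np

def beam_search_supports(X, y, max_support=3, beam_width=5, epsilon=0.05):
    n, d = X.shape

    def fit_loss(support):
        # Fit coefficients restricted to the given support set.
        cols = sorted(support)
        coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        resid = y - X[:, cols] @ coef
        return float(resid @ resid) / n, coef

    beam = [frozenset()]
    pool = {}  # support set -> (loss, coefficients)
    for _ in range(max_support):
        candidates = {}
        for s in beam:
            for j in range(d):
                if j in s:
                    continue
                s2 = frozenset(s | {j})
                if s2 not in candidates:
                    candidates[s2] = fit_loss(s2)
        ranked = sorted(candidates.items(), key=lambda kv: kv[1][0])
        beam = [s for s, _ in ranked[:beam_width]]
        best = ranked[0][1][0]
        # Keep every almost-optimal support at this level in the pool.
        for s, (loss, coef) in ranked:
            if loss <= best * (1 + epsilon):
                pool[s] = (loss, coef)
    return pool
```

The pool then contains almost-optimal solutions with distinct support sets, which is the object the paper's approach produces.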
3 code implementations • 19 Sep 2022 • Zijie J. Wang, Chudi Zhong, Rui Xin, Takuya Takagi, Zhi Chen, Duen Horng Chau, Cynthia Rudin, Margo Seltzer
Given thousands of equally accurate machine learning (ML) models, how can users choose among them?
2 code implementations • 16 Sep 2022 • Rui Xin, Chudi Zhong, Zhi Chen, Takuya Takagi, Margo Seltzer, Cynthia Rudin
We show three applications of the Rashomon set: 1) it can be used to study variable importance for the set of almost-optimal trees (as opposed to a single tree), 2) the Rashomon set for accuracy enables enumeration of the Rashomon sets for balanced accuracy and F1-score, and 3) the Rashomon set for a full dataset can be used to produce Rashomon sets constructed with only subsets of the dataset.
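Application 1) can be illustrated with a toy sketch (this is not the paper's tree-enumeration algorithm; depth-1 threshold "stumps" and the accuracy tolerance `eps` are illustrative stand-ins): enumerate all stumps, keep those within `eps` of the best accuracy, and read variable importance off the whole set rather than a single best model.

```python
# Toy illustration of a Rashomon set: all depth-1 threshold classifiers
# within `eps` of the best accuracy. Stumps stand in for the sparse
# decision trees of the paper; this is an assumption for illustration.
import numpy as np
from collections import Counter

def stump_rashomon_set(X, y, eps=0.05):
    n, d = X.shape
    models = []  # (feature, threshold, sign, accuracy)
    for j in range(d):
        for t in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] <= t, sign, -sign)
                acc = float(np.mean(pred == y))
                models.append((j, t, sign, acc))
    best = max(m[3] for m in models)
    rset = [m for m in models if m[3] >= best - eps]
    # Variable importance as frequency of appearance across the set.
    importance = Counter(m[0] for m in rset)
    return rset, importance
```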
2 code implementations • 23 Feb 2022 • Jiachang Liu, Chudi Zhong, Margo Seltzer, Cynthia Rudin
For fast sparse logistic regression, our computational speed-up over other best-subset search techniques is due to linear and quadratic surrogate cuts for the logistic loss, which allow us to efficiently screen features for elimination, as well as the use of a priority queue that favors a more uniform exploration of features.
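The quadratic-surrogate screening idea can be sketched as follows (a hedged illustration, not the paper's exact bounds or code): because the Hessian of the mean logistic loss is bounded by (1/4n) XᵀX, adding a single coefficient on feature j can decrease the loss by at most g_j²/(2 L_j), where g_j is the gradient entry and L_j = ‖x_j‖²/(4n); any feature whose best-case surrogate objective still exceeds the incumbent can be eliminated without a refit.

```python
# Hedged sketch of quadratic-surrogate screening for L0-regularized
# logistic regression. Function names, the penalty form `lam`, and the
# incumbent comparison are illustrative assumptions, not the paper's API.
import numpy as np

def logistic_loss(X, y, beta):
    # Mean logistic loss with labels y in {-1, +1}, computed stably.
    z = X @ beta
    return float(np.mean(np.logaddexp(0.0, -y * z)))

def screen_features(X, y, beta, support, lam, incumbent):
    n, d = X.shape
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    grad = X.T @ (p - (y + 1) / 2) / n   # gradient of the mean logistic loss
    base = logistic_loss(X, y, beta)
    keep = []
    for j in range(d):
        if j in support:
            continue
        Lj = np.dot(X[:, j], X[:, j]) / (4 * n)  # curvature bound for x_j
        # Best possible objective after adding feature j under the surrogate.
        best_case = base - grad[j] ** 2 / (2 * Lj) + lam
        if best_case < incumbent:
            keep.append(j)  # feature j might still improve the objective
    return keep
```

A best-subset search would call a screen like this at each node, and a priority queue over the surviving features can then favor a more uniform exploration order.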
3 code implementations • 1 Dec 2021 • Hayden McTavish, Chudi Zhong, Reto Achermann, Ilias Karimalis, Jacques Chen, Cynthia Rudin, Margo Seltzer
We show that by using these guesses, we can reduce the run time by multiple orders of magnitude, while providing bounds on how far the resulting trees can deviate from the black box's accuracy and expressive power.
no code implementations • 20 Mar 2021 • Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong
Interpretability in machine learning (ML) is crucial for high-stakes decisions and troubleshooting.
2 code implementations • ICML 2020 • Jimmy Lin, Chudi Zhong, Diane Hu, Cynthia Rudin, Margo Seltzer
Decision tree optimization is notoriously difficult from a computational perspective but essential for the field of interpretable machine learning.