Search Results for author: Cynthia Rudin

Found 107 papers, 54 papers with code

Data Poisoning Attacks on Off-Policy Policy Evaluation Methods

no code implementations 6 Apr 2024 Elita Lobo, Harvineet Singh, Marek Petrik, Cynthia Rudin, Himabindu Lakkaraju

Off-policy Evaluation (OPE) methods are a crucial tool for evaluating policies in high-stakes domains such as healthcare, where exploration is often infeasible, unethical, or expensive.

Data Poisoning Off-policy evaluation

What is different between these datasets?

no code implementations 8 Mar 2024 Varun Babbar, Zhicheng Guo, Cynthia Rudin

The performance of machine learning models heavily depends on the quality of input data, yet real-world applications often encounter various data-related challenges.

Sparse and Faithful Explanations Without Sparse Models

no code implementations 15 Feb 2024 Yiyang Sun, Zhi Chen, Vittorio Orlandi, Tong Wang, Cynthia Rudin

In the loan denial example above, the SEV is 1 because only one factor is needed to explain why the loan was denied.
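
The loan example can be made concrete with a small sketch. Assuming a hypothetical logistic-style scoring model and a population-mean reference applicant (both invented here; the paper defines SEV more generally), the sparse explanation value is the smallest number of the query's feature values that, copied onto the reference point, already reproduce the denial:

    from itertools import combinations
    import numpy as np

    def denied(x, w, b):
        # Hypothetical scoring model: deny the loan when the linear score is negative.
        return float(np.dot(w, x)) + b < 0.0

    def sev(query, reference, w, b):
        # Smallest number of features that, moved from the reference values to the
        # query's values, already make the model produce the query's decision.
        assert denied(query, w, b)            # the query itself was denied
        d = len(query)
        for k in range(1, d + 1):
            for subset in combinations(range(d), k):
                x = reference.copy()
                x[list(subset)] = query[list(subset)]
                if denied(x, w, b):
                    return k
        return d

    # Toy features: [income, debt_ratio, late_payments]; weights are made up.
    w, b = np.array([0.8, -1.5, -2.0]), 0.5
    reference = np.array([1.0, 0.3, 0.0])     # an "average" approved applicant
    query = np.array([0.9, 0.35, 2.0])        # denied mainly because of late payments
    print(sev(query, reference, w, b))        # -> 1: one factor explains the denial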

Optimal Sparse Survival Trees

1 code implementation 27 Jan 2024 Rui Zhang, Rui Xin, Margo Seltzer, Cynthia Rudin

Interpretability is crucial for doctors, hospitals, pharmaceutical companies and biotechnology corporations to analyze and make decisions for high stakes problems that involve human health.

Survival Analysis

Interpretable Causal Inference for Analyzing Wearable, Sensor, and Distributional Data

1 code implementation 17 Dec 2023 Srikar Katta, Harsh Parikh, Cynthia Rudin, Alexander Volfovsky

Many modern causal questions ask how treatments affect complex outcomes that are measured using wearable devices and sensors.

Causal Inference Decision Making

Reconsideration on evaluation of machine learning models in continuous monitoring using wearables

no code implementations 4 Dec 2023 Cheng Ding, Zhicheng Guo, Cynthia Rudin, Ran Xiao, Fadi B Nahab, Xiao Hu

This paper explores the challenges in evaluating machine learning (ML) models for continuous health monitoring using wearable devices beyond conventional metrics.

ProtoEEGNet: An Interpretable Approach for Detecting Interictal Epileptiform Discharges

no code implementations 3 Dec 2023 Dennis Tang, Frank Willard, Ronan Tegerdine, Luke Triplett, Jon Donnelly, Luke Moffett, Lesia Semenova, Alina Jade Barnett, Jin Jing, Cynthia Rudin, Brandon Westover

In high-stakes medical applications, it is critical to have interpretable models so that experts can validate the reasoning of the model before making important diagnoses.

Decision Making EEG

Fast and Interpretable Mortality Risk Scores for Critical Care Patients

1 code implementation 21 Nov 2023 Chloe Qinyu Zhu, Muhang Tian, Lesia Semenova, Jiachang Liu, Jack Xu, Joseph Scarpa, Cynthia Rudin

Both of these have disadvantages: black box models are unacceptable for use in hospitals, whereas manual creation of models (including hand-tuning of logistic regression parameters) relies on humans to perform high-dimensional constrained optimization, which leads to a loss in performance.

Safe and Interpretable Estimation of Optimal Treatment Regimes

1 code implementation 23 Oct 2023 Harsh Parikh, Quinn Lanners, Zade Akras, Sahar F. Zafar, M. Brandon Westover, Cynthia Rudin, Alexander Volfovsky

Our work operationalizes a safe and interpretable framework to identify optimal treatment regimes.

The Rashomon Importance Distribution: Getting RID of Unstable, Single Model-based Variable Importance

1 code implementation NeurIPS 2023 Jon Donnelly, Srikar Katta, Cynthia Rudin, Edward P. Browne

However, for a given dataset, there may be many models that explain the target outcome equally well; without accounting for all possible explanations, different researchers may arrive at many conflicting yet equally valid conclusions given the same data.

A Self-Supervised Algorithm for Denoising Photoplethysmography Signals for Heart Rate Estimation from Wearables

no code implementations 7 Jul 2023 Pranay Jain, Cheng Ding, Cynthia Rudin, Xiao Hu

Smart watches and other wearable devices are equipped with photoplethysmography (PPG) sensors for monitoring heart rate and other aspects of cardiovascular health.

Denoising Heart rate estimation +1

Learned Kernels for Sparse, Interpretable, and Efficient Medical Time Series Processing

1 code implementation 6 Jul 2023 Sully F. Chen, Zhicheng Guo, Cheng Ding, Xiao Hu, Cynthia Rudin

Results: Our interpretable method achieves greater than 99% of the performance of the state-of-the-art methods on the PPG artifact detection task, and even outperforms the state-of-the-art on a challenging out-of-distribution test set, while using dramatically fewer parameters (2% of the parameters of Segade, and about half of the parameters of Tiny-PPG).

Artifact Detection Atrial Fibrillation Detection +2

OKRidge: Scalable Optimal k-Sparse Ridge Regression

1 code implementation NeurIPS 2023 Jiachang Liu, Sam Rosen, Chudi Zhong, Cynthia Rudin

We consider an important problem in scientific discovery, namely identifying sparse governing equations for nonlinear dynamical systems.

regression
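
As a rough illustration of that problem setting (not of the OKRidge algorithm itself), a common workflow builds a library of candidate terms and searches for a k-sparse combination that explains the observed derivatives. The toy system and candidate library below are hypothetical; the exhaustive search shown is exactly the brute force that scalable optimal solvers are designed to avoid:

    from itertools import combinations
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 1-D dynamical system: dx/dt = 2*x - 0.5*x**3, observed with noise.
    x = rng.uniform(-2, 2, size=200)
    dxdt = 2.0 * x - 0.5 * x**3 + rng.normal(scale=0.05, size=x.shape)

    # Candidate library of governing terms.
    library = {"x": x, "x^2": x**2, "x^3": x**3, "sin(x)": np.sin(x)}
    names = list(library)
    A = np.column_stack([library[n] for n in names])

    def best_k_sparse_ridge(A, y, k, lam=1e-3):
        # Try every support of size k and keep the best ridge fit.
        best = (np.inf, None, None)
        for support in combinations(range(A.shape[1]), k):
            S = A[:, list(support)]
            coef = np.linalg.solve(S.T @ S + lam * np.eye(k), S.T @ y)
            err = np.sum((y - S @ coef) ** 2) + lam * np.sum(coef ** 2)
            if err < best[0]:
                best = (err, support, coef)
        return best

    err, support, coef = best_k_sparse_ridge(A, dxdt, k=2)
    print({names[i]: round(c, 3) for i, c in zip(support, coef)})  # ~{'x': 2.0, 'x^3': -0.5}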

Matched Machine Learning: A Generalized Framework for Treatment Effect Inference With Learned Metrics

no code implementations 3 Apr 2023 Marco Morucci, Cynthia Rudin, Alexander Volfovsky

We introduce Matched Machine Learning, a framework that combines the flexibility of machine learning black boxes with the interpretability of matching, a longstanding tool in observational causal inference.

Causal Inference

Exploring and Interacting with the Set of Good Sparse Generalized Additive Models

1 code implementation NeurIPS 2023 Chudi Zhong, Zhi Chen, Jiachang Liu, Margo Seltzer, Cynthia Rudin

In real applications, interaction between machine learning models and domain experts is critical; however, the classical machine learning paradigm that usually produces only a single model does not facilitate such interaction.

Additive models

Variable Importance Matching for Causal Inference

1 code implementation 23 Feb 2023 Quinn Lanners, Harsh Parikh, Alexander Volfovsky, Cynthia Rudin, David Page

Our goal is to produce methods for observational causal inference that are auditable, easy to troubleshoot, accurate for treatment effect estimation, and scalable to high-dimensional data.

Causal Inference Feature Importance

Optimal Sparse Regression Trees

1 code implementation 28 Nov 2022 Rui Zhang, Rui Xin, Margo Seltzer, Cynthia Rudin

Regression trees are one of the oldest forms of AI models, and their predictions can be made without a calculator, which makes them broadly useful, particularly for high-stakes applications.

Clustering regression

Interpretable Machine Learning System to EEG Patterns on the Ictal-Interictal-Injury Continuum

no code implementations 9 Nov 2022 Alina Jade Barnett, Zhicheng Guo, Jin Jing, Wendong Ge, Cynthia Rudin, M. Brandon Westover

To address these challenges, we propose a novel interpretable deep learning model that not only predicts the presence of harmful brainwave patterns but also provides high-quality case-based explanations of its decisions.

EEG Interpretable Machine Learning

Learning From Alarms: A Robust Learning Approach for Accurate Photoplethysmography-Based Atrial Fibrillation Detection using Eight Million Samples Labeled with Imprecise Arrhythmia Alarms

1 code implementation 7 Nov 2022 Cheng Ding, Zhicheng Guo, Cynthia Rudin, Ran Xiao, Amit Shah, Duc H. Do, Randall J Lee, Gari Clifford, Fadi B Nahab, Xiao Hu

To address this challenge, in this study, we propose to leverage AF alarms from bedside patient monitors to label concurrent PPG signals, resulting in the largest PPG-AF dataset so far (8.5M 30-second records from 24,100 patients) and demonstrating a practical approach to build large labeled PPG datasets.

Atrial Fibrillation Detection Computational Efficiency +2

Fast Optimization of Weighted Sparse Decision Trees for use in Optimal Treatment Regimes and Optimal Policy Design

no code implementations 13 Oct 2022 Ali Behrouz, Mathias Lecuyer, Cynthia Rudin, Margo Seltzer

Specifically, they rely on the discreteness of the loss function, which means that real-valued weights cannot be directly used.

FasterRisk: Fast and Accurate Interpretable Risk Scores

1 code implementation 12 Oct 2022 Jiachang Liu, Chudi Zhong, Boxuan Li, Margo Seltzer, Cynthia Rudin

Specifically, our approach produces a pool of almost-optimal sparse continuous solutions, each with a different support set, using a beam-search algorithm.
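
A minimal sketch of the beam-search idea in that sentence, using a generic least-squares objective and made-up data rather than FasterRisk's logistic loss and integer coefficients:

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 10))
    y = X[:, 0] - 2.0 * X[:, 3] + rng.normal(scale=0.1, size=300)

    def fit_and_score(support):
        # Least-squares fit restricted to one support set (stand-in for the real loss).
        S = X[:, list(support)]
        coef, *_ = np.linalg.lstsq(S, y, rcond=None)
        return float(np.sum((y - S @ coef) ** 2))

    def beam_search_supports(k, beam_width=5):
        # Grow supports one feature at a time, keeping the best `beam_width`
        # candidates of each size; the final beam is a pool of near-optimal supports.
        beam = [((), np.inf)]
        for _ in range(k):
            candidates = {}
            for support, _ in beam:
                for j in range(X.shape[1]):
                    if j in support:
                        continue
                    new = tuple(sorted(support + (j,)))
                    if new not in candidates:
                        candidates[new] = fit_and_score(new)
            beam = sorted(candidates.items(), key=lambda kv: kv[1])[:beam_width]
        return beam

    for support, loss in beam_search_supports(k=2):
        print(support, round(loss, 2))   # several supports of size 2, best first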

Exploring the Whole Rashomon Set of Sparse Decision Trees

2 code implementations 16 Sep 2022 Rui Xin, Chudi Zhong, Zhi Chen, Takuya Takagi, Margo Seltzer, Cynthia Rudin

We show three applications of the Rashomon set: 1) it can be used to study variable importance for the set of almost-optimal trees (as opposed to a single tree), 2) the Rashomon set for accuracy enables enumeration of the Rashomon sets for balanced accuracy and F1-score, and 3) the Rashomon set for a full dataset can be used to produce Rashomon sets constructed with only subsets of the data set.
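
The variable-importance application (point 1 above) can be sketched generically: take any collection of near-optimal models, here a few hypothetical scikit-learn trees standing in for an enumerated Rashomon set, and report the range of each variable's permutation importance across that set instead of a single number:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.inspection import permutation_importance
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, n_features=6, random_state=0)

    # Stand-in for an enumerated Rashomon set: shallow trees with near-identical accuracy.
    models = [
        DecisionTreeClassifier(max_depth=3, max_features=4, random_state=s).fit(X, y)
        for s in range(20)
    ]
    best_acc = max(m.score(X, y) for m in models)
    rashomon = [m for m in models if m.score(X, y) >= best_acc - 0.02]

    # Importance of each variable across the near-optimal set, reported as a range.
    imps = np.array([
        permutation_importance(m, X, y, n_repeats=10, random_state=0).importances_mean
        for m in rashomon
    ])
    for j in range(X.shape[1]):
        print(f"feature {j}: importance in [{imps[:, j].min():.3f}, {imps[:, j].max():.3f}]")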

SegDiscover: Visual Concept Discovery via Unsupervised Semantic Segmentation

no code implementations 22 Apr 2022 Haiyang Huang, Zhi Chen, Cynthia Rudin

Experimental results provide evidence that our method can discover multiple concepts within a single image and outperforms state-of-the-art unsupervised methods on complex datasets such as Cityscapes and COCO-Stuff.

Unsupervised Semantic Segmentation

Effects of Epileptiform Activity on Discharge Outcome in Critically Ill Patients

no code implementations 9 Mar 2022 Harsh Parikh, Kentaro Hoffman, Haoqi Sun, Wendong Ge, Jin Jing, Rajesh Amerineni, Lin Liu, Jimeng Sun, Sahar Zafar, Aaron Struck, Alexander Volfovsky, Cynthia Rudin, M. Brandon Westover

Having a maximum EA burden greater than 75% when untreated had a 22% increased chance of a poor outcome (severe disability or death), and mild but long-lasting EA increased the risk of a poor outcome by 14%.

Causal Inference Decision Making

Fast Sparse Classification for Generalized Linear and Additive Models

2 code implementations 23 Feb 2022 Jiachang Liu, Chudi Zhong, Margo Seltzer, Cynthia Rudin

For fast sparse logistic regression, our computational speed-up over other best-subset search techniques owes to linear and quadratic surrogate cuts for the logistic loss that allow us to efficiently screen features for elimination, as well as use of a priority queue that favors a more uniform exploration of features.

Additive models Classification
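
The feature-screening idea mentioned above can be illustrated with a generic convexity bound, a simplified stand-in for the paper's actual linear and quadratic cuts. For an l2-regularized logistic objective with a coordinate currently at zero, convexity gives F(beta + d*e_j) - F(beta) >= g_j*d + lam*d^2, so the best improvement feature j could ever contribute on its own is at most g_j^2 / (4*lam); features whose bound falls short of the improvement needed to beat the incumbent can be discarded without being fit:

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(400, 30))
    y = rng.choice([-1.0, 1.0], size=400)
    lam = 0.1                        # hypothetical ridge penalty

    beta = np.zeros(30)              # current sparse iterate; all coordinates at zero
    margins = y * (X @ beta)
    # Gradient of the mean logistic loss, d/d beta_j of mean log(1 + exp(-y * X beta)).
    grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)

    best_possible_gain = grad**2 / (4.0 * lam)   # upper bound on each feature's value

    needed_gain = 0.01               # improvement required to beat the incumbent solution
    screened_out = np.where(best_possible_gain < needed_gain)[0]
    print(f"{len(screened_out)} of {X.shape[1]} features eliminated without fitting them")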

Fast Sparse Decision Tree Optimization via Reference Ensembles

3 code implementations 1 Dec 2021 Hayden McTavish, Chudi Zhong, Reto Achermann, Ilias Karimalis, Jacques Chen, Cynthia Rudin, Margo Seltzer

We show that by using these guesses, we can reduce the run time by multiple orders of magnitude, while providing bounds on how far the resulting trees can deviate from the black box's accuracy and expressive power.

Interpretable Machine Learning

How to See Hidden Patterns in Metamaterials with Interpretable Machine Learning

1 code implementation 10 Nov 2021 Zhi Chen, Alexander Ogren, Chiara Daraio, L. Catherine Brinson, Cynthia Rudin

Machine learning models can assist with metamaterials design by approximating computationally expensive simulators or solving inverse design problems.

Band Gap BIG-bench Machine Learning +1

BacHMMachine: An Interpretable and Scalable Model for Algorithmic Harmonization for Four-part Baroque Chorales

no code implementations 15 Sep 2021 Yunyao Zhu, Stephen Hahn, Simon Mak, Yue Jiang, Cynthia Rudin

Algorithmic harmonization - the automated harmonization of a musical piece given its melodic line - is a challenging problem that has garnered much interest from both music theorists and computer scientists.

Interpretable Mammographic Image Classification using Case-Based Reasoning and Deep Learning

no code implementations 12 Jul 2021 Alina Jade Barnett, Fides Regina Schwartz, Chaofan Tao, Chaofan Chen, Yinhao Ren, Joseph Y. Lo, Cynthia Rudin

Compared to other methods, our model detects clinical features (mass margins) with equal or higher accuracy, provides a more detailed explanation of its prediction, and is better able to differentiate the classification-relevant parts of the image.

Image Classification

A Holistic Approach to Interpretability in Financial Lending: Models, Visualizations, and Summary-Explanations

no code implementations 4 Jun 2021 Chaofan Chen, Kangcheng Lin, Cynthia Rudin, Yaron Shaposhnik, Sijia Wang, Tong Wang

We propose a framework for such decisions, including a globally interpretable machine learning model, an interactive visualization of it, and several types of summaries and explanations for any given decision.

BIG-bench Machine Learning Interpretable Machine Learning

Playing Codenames with Language Graphs and Word Embeddings

1 code implementation 12 May 2021 Divya Koyyalagunta, Anna Sun, Rachel Lea Draelos, Cynthia Rudin

Although board games and video games have been studied for decades in artificial intelligence research, challenging word games remain relatively unexplored.

Board Games Common Sense Reasoning +1

IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography

no code implementations 23 Mar 2021 Alina Jade Barnett, Fides Regina Schwartz, Chaofan Tao, Chaofan Chen, Yinhao Ren, Joseph Y. Lo, Cynthia Rudin

Mammography poses important challenges that are not present in other computer vision tasks: datasets are small, confounding information is present, and it can be difficult even for a radiologist to decide between watchful waiting and biopsy based on a mammogram alone.

BIG-bench Machine Learning Interpretable Machine Learning

There Once Was a Really Bad Poet, It Was Automated but You Didn't Know It

1 code implementation 5 Mar 2021 Jianyou Wang, Xiaoxuan Zhang, Yuren Zhou, Christopher Suh, Cynthia Rudin

Limerick generation exemplifies some of the most difficult challenges faced in poetry generation, as the poems must tell a story in only five lines, with constraints on rhyme, stress, and meter.

Understanding How Dimension Reduction Tools Work: An Empirical Approach to Deciphering t-SNE, UMAP, TriMAP, and PaCMAP for Data Visualization

2 code implementations 8 Dec 2020 Yingfan Wang, Haiyang Huang, Cynthia Rudin, Yaron Shaposhnik

In this work, our main goal is to understand what aspects of DR methods are important for preserving both local and global structure: it is difficult to design a better method without a true understanding of the choices we make in our algorithms and their empirical impact on the lower-dimensional embeddings they produce.

Data Visualization Dimensionality Reduction

Cryo-ZSSR: multiple-image super-resolution based on deep internal learning

no code implementations 22 Nov 2020 Qinwen Huang, Ye Zhou, Xiaochen Du, Reed Chen, Jianyou Wang, Cynthia Rudin, Alberto Bartesaghi

Single-particle cryo-electron microscopy (cryo-EM) is an emerging imaging modality capable of visualizing proteins and macro-molecular complexes at near-atomic resolution.

Image Super-Resolution

Bandits for BMO Functions

no code implementations ICML 2020 Tianyu Wang, Cynthia Rudin

We study the bandit problem where the underlying expected reward is a Bounded Mean Oscillation (BMO) function.

Metaphor Detection Using Contextual Word Embeddings From Transformers

no code implementations WS 2020 Jerry Liu, Nathan O'Hara, Alexander Rubin, Rachel Draelos, Cynthia Rudin

The detection of metaphors can provide valuable information about a given text and is crucial to sentiment analysis and machine translation.

Machine Translation Sentiment Analysis +2

Generalized and Scalable Optimal Sparse Decision Trees

2 code implementations ICML 2020 Jimmy Lin, Chudi Zhong, Diane Hu, Cynthia Rudin, Margo Seltzer

Decision tree optimization is notoriously difficult from a computational perspective but essential for the field of interpretable machine learning.

Interpretable Machine Learning

In Pursuit of Interpretable, Fair and Accurate Machine Learning for Criminal Recidivism Prediction

1 code implementation 8 May 2020 Caroline Wang, Bin Han, Bhrij Patel, Cynthia Rudin

We compared predictive performance and fairness of these models against two methods that are currently used in the justice system to predict pretrial recidivism: the Arnold PSA and COMPAS.

BIG-bench Machine Learning Fairness +1

PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models

16 code implementations CVPR 2020 Sachit Menon, Alexandru Damian, Shijia Hu, Nikhil Ravi, Cynthia Rudin

We present an algorithm addressing this problem, PULSE (Photo Upsampling via Latent Space Exploration), which generates high-resolution, realistic images at resolutions previously unseen in the literature.

Face Hallucination Hallucination +1

Adaptive Hyper-box Matching for Interpretable Individualized Treatment Effect Estimation

1 code implementation 3 Mar 2020 Marco Morucci, Vittorio Orlandi, Sudeepa Roy, Cynthia Rudin, Alexander Volfovsky

We propose a matching method for observational data that matches units with others in unit-specific, hyper-box-shaped regions of the covariate space.

Almost-Matching-Exactly for Treatment Effect Estimation under Network Interference

no code implementations 2 Mar 2020 M. Usaid Awan, Marco Morucci, Vittorio Orlandi, Sudeepa Roy, Cynthia Rudin, Alexander Volfovsky

We propose a matching method that recovers direct treatment effects from randomized experiments where units are connected in an observed network, and units that share edges can potentially influence each others' outcomes.

Concept Whitening for Interpretable Image Recognition

2 code implementations 5 Feb 2020 Zhi Chen, Yijie Bei, Cynthia Rudin

What does a neural network encode about a concept as we traverse through the layers?

On the Existence of Simpler Machine Learning Models

no code implementations 5 Aug 2019 Lesia Semenova, Cynthia Rudin, Ronald Parr

We hypothesize that there is an important reason that simple-yet-accurate models often do exist.

BIG-bench Machine Learning Fairness +2

Reducing Exploration of Dying Arms in Mortal Bandits

2 code implementations 4 Jul 2019 Stefano Tracà, Cynthia Rudin, Weiyu Yan

Mortal bandits have proven to be extremely useful for providing news article recommendations, running automated online advertising campaigns, and for other applications where the set of available options changes over time.

Interpretable Almost-Matching-Exactly With Instrumental Variables

1 code implementation 27 Jun 2019 M. Usaid Awan, Yameng Liu, Marco Morucci, Sudeepa Roy, Cynthia Rudin, Alexander Volfovsky

Uncertainty in the estimation of the causal effect in observational studies is often due to unmeasured confounding, i.e., the presence of unobserved covariates linking treatments and outcomes.

Interpretable Image Recognition with Hierarchical Prototypes

1 code implementation 25 Jun 2019 Peter Hase, Chaofan Chen, Oscar Li, Cynthia Rudin

Hence, we may find distinct explanations for the prediction an image receives at each level of the taxonomy.

General Classification

Optimal Sparse Decision Trees

2 code implementations NeurIPS 2019 Xiyang Hu, Cynthia Rudin, Margo Seltzer

Decision tree algorithms have been among the most popular algorithms for interpretable (transparent) machine learning since the early 1980's.

Towards Practical Lipschitz Bandits

no code implementations 26 Jan 2019 Tianyu Wang, Weicheng Ye, Dawei Geng, Cynthia Rudin

Stochastic Lipschitz bandit algorithms balance exploration and exploitation, and have been used for a variety of important task domains.

Gaussian Processes

Variable Importance Clouds: A Way to Explore Variable Importance for the Set of Good Models

1 code implementation 10 Jan 2019 Jiayun Dong, Cynthia Rudin

Variable importance is central to scientific studies, including the social sciences and causal inference, healthcare, and other domains.

Causal Inference Image Classification +1

A robust approach to quantifying uncertainty in matching problems of causal inference

2 code implementations 5 Dec 2018 Marco Morucci, Md. Noor-E-Alam, Cynthia Rudin

However, as we show in this work, there is a typical source of uncertainty that is essentially never considered in observational causal studies: the choice of match assignment for matched groups, that is, which unit is matched to which other unit before a hypothesis test is conducted.

Methodology

An Interpretable Model with Globally Consistent Explanations for Credit Risk

no code implementations 30 Nov 2018 Chaofan Chen, Kangcheng Lin, Cynthia Rudin, Yaron Shaposhnik, Sijia Wang, Tong Wang

We propose a possible solution to a public challenge posed by the Fair Isaac Corporation (FICO), which is to provide an explainable model for credit risk assessment.

Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead

3 code implementations 26 Nov 2018 Cynthia Rudin

Black box machine learning models are currently being used for high stakes decision-making throughout society, causing problems throughout healthcare, criminal justice, and in other domains.

BIG-bench Machine Learning Decision Making +1

MALTS: Matching After Learning to Stretch

no code implementations 18 Nov 2018 Harsh Parikh, Cynthia Rudin, Alexander Volfovsky

In this work, we learn an interpretable distance metric for matching, which leads to substantially higher quality matches.

Causal Inference

This Looks Like That: Deep Learning for Interpretable Image Recognition

3 code implementations NeurIPS 2019 Chaofan Chen, Oscar Li, Chaofan Tao, Alina Jade Barnett, Jonathan Su, Cynthia Rudin

In this work, we introduce a deep network architecture, the prototypical part network (ProtoPNet), that reasons in a similar way: the network dissects the image by finding prototypical parts, and combines evidence from the prototypes to make a final classification.

General Classification Image Classification

Interpretable Almost Matching Exactly for Causal Inference

3 code implementations 18 Jun 2018 Yameng Liu, Awa Dieng, Sudeepa Roy, Cynthia Rudin, Alexander Volfovsky

Notable advantages of our method over existing matching procedures are its high-quality matches, versatility in handling different data distributions that may have irrelevant variables, and ability to handle missing data by matching on as many available covariates as possible.

Causal Inference

New Techniques for Preserving Global Structure and Denoising with Low Information Loss in Single-Image Super-Resolution

1 code implementation 9 May 2018 Yijie Bei, Alex Damian, Shijia Hu, Sachit Menon, Nikhil Ravi, Cynthia Rudin

This work identifies and addresses two important technical challenges in single-image super-resolution: (1) how to upsample an image without magnifying noise and (2) how to preserve large scale structure when upsampling.

Denoising Image Super-Resolution

A Theory of Statistical Inference for Ensuring the Robustness of Scientific Results

2 code implementations 23 Apr 2018 Beau Coker, Cynthia Rudin, Gary King

We introduce hacking intervals, which are the range of a summary statistic one may obtain given a class of possible endogenous manipulations of the data.

A Minimax Surrogate Loss Approach to Conditional Difference Estimation

1 code implementation 10 Mar 2018 Siong Thye Goh, Cynthia Rudin

We present a new machine learning approach to estimate personalized treatment effects in the classical potential outcomes framework with binary outcomes.

Direct Learning to Rank and Rerank

no code implementations 21 Feb 2018 Cynthia Rudin, Yining Wang

Learning-to-rank techniques have proven to be extremely useful for prioritization problems, where we rank items in order of their estimated probabilities, and dedicate our limited resources to the top-ranked items.

Learning-To-Rank

Model Class Reliance: Variable Importance Measures for any Machine Learning Model Class, from the "Rashomon" Perspective

3 code implementations 4 Jan 2018 Aaron Fisher, Cynthia Rudin, Francesca Dominici

Expanding on MR, we propose Model Class Reliance (MCR) as the upper and lower bounds on the degree to which any well-performing prediction model within a class may rely on a variable of interest, or set of variables of interest.

Methodology

Extreme Dimension Reduction for Handling Covariate Shift

no code implementations 29 Nov 2017 Fulton Wang, Cynthia Rudin

In the covariate shift learning scenario, the training and test covariate distributions differ, so that a predictor's average loss over the training and test distributions also differ.

Dimensionality Reduction

Causal Rule Sets for Identifying Subgroups with Enhanced Treatment Effect

no code implementations 16 Oct 2017 Tong Wang, Cynthia Rudin

The Bayesian model has tunable parameters that can characterize subgroups with various sizes, providing users with more flexible choices of models from the treatment efficient frontier.

Causal Inference Subgroup Discovery

Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions

5 code implementations 13 Oct 2017 Oscar Li, Hao Liu, Chaofan Chen, Cynthia Rudin

This architecture contains an autoencoder and a special prototype layer, where each unit of that layer stores a weight vector that resembles an encoded training input.

General Classification

An Optimization Approach to Learning Falling Rule Lists

1 code implementation 6 Oct 2017 Chaofan Chen, Cynthia Rudin

A falling rule list is a probabilistic decision list for binary classification, consisting of a series of if-then rules with antecedents in the if clauses and probabilities of the desired outcome ("1") in the then clauses.

Binary Classification General Classification
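
A plain-Python sketch of evaluating such a list; the rules and probabilities below are invented for illustration, not learned by the paper's optimization approach:

    # Each entry: (antecedent over a feature dict, probability of the outcome "1").
    # In a falling rule list these probabilities must decrease down the list.
    falling_rule_list = [
        (lambda p: p["chest_pain"] and p["age"] > 60, 0.85),
        (lambda p: p["chest_pain"],                   0.40),
        (lambda p: p["age"] > 60,                     0.25),
    ]
    default_probability = 0.05       # applies when no rule fires

    def predict_risk(patient):
        # The first rule whose antecedent is satisfied determines the prediction.
        for condition, prob in falling_rule_list:
            if condition(patient):
                return prob
        return default_probability

    print(predict_risk({"chest_pain": True, "age": 72}))    # 0.85
    print(predict_risk({"chest_pain": False, "age": 45}))   # 0.05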

Learning Certifiably Optimal Rule Lists for Categorical Data

5 code implementations 6 Apr 2017 Elaine Angelino, Nicholas Larus-Stone, Daniel Alabi, Margo Seltzer, Cynthia Rudin

We present the design and implementation of a custom discrete optimization technique for building rule lists over a categorical feature space.

Learning Cost-Effective and Interpretable Regimes for Treatment Recommendation

no code implementations 23 Nov 2016 Himabindu Lakkaraju, Cynthia Rudin

We formulate this as a problem of learning a decision list -- a sequence of if-then-else rules -- which maps characteristics of subjects (eg., diagnostic test results of patients) to treatments.

Learning Cost-Effective Treatment Regimes using Markov Decision Processes

no code implementations 21 Oct 2016 Himabindu Lakkaraju, Cynthia Rudin

We formulate this as a problem of learning a decision list -- a sequence of if-then-else rules -- which maps characteristics of subjects (eg., diagnostic test results of patients) to treatments.

Learning Optimized Risk Scores

2 code implementations 1 Oct 2016 Berk Ustun, Cynthia Rudin

Risk scores are simple classification models that let users make quick risk predictions by adding and subtracting a few small numbers.

Seizure prediction
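
For instance, a risk score of this kind can be evaluated by hand: each condition contributes a small integer number of points, and the total is mapped to a risk estimate. The points, intercept, and logistic link below are invented for illustration, not taken from the paper:

    import math

    # Hypothetical scorecard: condition -> integer points.
    scorecard = {
        "age_over_75":    2,
        "prior_seizure":  3,
        "abnormal_eeg":   2,
        "on_medication": -1,
    }

    def risk(patient_flags, intercept=-4):
        # Total the points, then map the integer score to a probability
        # with a logistic link (one common choice for risk scores).
        score = intercept + sum(pts for name, pts in scorecard.items() if patient_flags.get(name))
        return 1.0 / (1.0 + math.exp(-score))

    print(round(risk({"prior_seizure": True, "abnormal_eeg": True}), 3))   # score 1 -> 0.731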

Scalable Bayesian Rule Lists

7 code implementations ICML 2017 Hongyu Yang, Cynthia Rudin, Margo Seltzer

They have a logical structure that is a sequence of IF-THEN rules, identical to a decision list or one-sided decision tree.

Computational Efficiency

Learning Optimized Or's of And's

no code implementations 6 Nov 2015 Tong Wang, Cynthia Rudin

Or's of And's (OA) models are comprised of a small number of disjunctions of conjunctions, also called disjunctive normal form.

Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model

2 code implementations 5 Nov 2015 Benjamin Letham, Cynthia Rudin, Tyler H. McCormick, David Madigan

We introduce a generative model called Bayesian Rule Lists that yields a posterior distribution over possible decision lists.

Causal Falling Rule Lists

no code implementations 18 Oct 2015 Fulton Wang, Cynthia Rudin

A causal falling rule list (CFRL) is a sequence of if-then rules that specifies heterogeneous treatment effects, where (i) the order of rules determines the treatment effect subgroup a subject belongs to, and (ii) the treatment effect decreases monotonically down the list.

Model Selection

Regulating Greed Over Time in Multi-Armed Bandits

1 code implementation 21 May 2015 Stefano Tracà, Cynthia Rudin, Weiyu Yan

In the corrected methods, exploitation (greed) is regulated over time, so that more exploitation occurs during higher reward periods, and more exploration occurs in periods of low reward.

Multi-Armed Bandits Time Series Analysis
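
A toy sketch of that idea, assuming an epsilon-greedy bandit whose exploration rate shrinks during known high-reward periods and grows during low-reward ones; the schedule and numbers are invented, not the corrected algorithms from the paper:

    import random

    random.seed(0)
    arm_means = [0.3, 0.5, 0.7]                   # unknown to the algorithm
    counts, values = [0] * 3, [0.0] * 3

    def reward_multiplier(t):
        # Known periodic demand pattern: some rounds are worth twice as much.
        return 2.0 if (t // 50) % 2 == 0 else 1.0

    total = 0.0
    for t in range(1000):
        m = reward_multiplier(t)
        epsilon = 0.05 if m > 1.0 else 0.3        # exploit more when rewards are high
        if random.random() < epsilon:
            arm = random.randrange(3)             # explore
        else:
            arm = max(range(3), key=lambda a: values[a])   # exploit the current estimate
        r = m * (1.0 if random.random() < arm_means[arm] else 0.0)
        counts[arm] += 1
        values[arm] += (r / m - values[arm]) / counts[arm]  # update the base-reward estimate
        total += r

    print(round(total, 1))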

Or's of And's for Interpretable Classification, with Application to Context-Aware Recommender Systems

no code implementations 28 Apr 2015 Tong Wang, Cynthia Rudin, Finale Doshi-Velez, Yimin Liu, Erica Klampfl, Perry MacNeille

In both cases, there are prior parameters that the user can set to encourage the model to have a desired size and shape, to conform with a domain-specific definition of interpretability.

Attribute General Classification +1

Modeling Recovery Curves With Application to Prostatectomy

1 code implementation 27 Apr 2015 Fulton Wang, Tyler H. McCormick, Cynthia Rudin, John Gore

We propose a Bayesian model that predicts recovery curves based on information available before the disruptive event.

Interpretable Classification Models for Recidivism Prediction

no code implementations 26 Mar 2015 Jiaming Zeng, Berk Ustun, Cynthia Rudin

We investigate a long-debated question, which is how to create predictive models of recidivism that are sufficiently accurate, transparent, and interpretable to use for decision-making.

BIG-bench Machine Learning Classification +2

The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification

no code implementations NeurIPS 2014 Been Kim, Cynthia Rudin, Julie Shah

We present the Bayesian Case Model (BCM), a general framework for Bayesian case-based reasoning (CBR) and prototype classification and clustering.

Classification Clustering +1

Supersparse Linear Integer Models for Optimized Medical Scoring Systems

2 code implementations 15 Feb 2015 Berk Ustun, Cynthia Rudin

Scoring systems are linear classification models that only require users to add, subtract and multiply a few small numbers in order to make a prediction.

Interpretable Machine Learning

Falling Rule Lists

no code implementations 21 Nov 2014 Fulton Wang, Cynthia Rudin

Falling rule lists are classification models consisting of an ordered list of if-then rules, where (i) the order of rules determines which example should be classified by each rule, and (ii) the estimated probability of success decreases monotonically down the list.

General Classification

Robust Optimization using Machine Learning for Uncertainty Sets

1 code implementation 4 Jul 2014 Theja Tulabandhula, Cynthia Rudin

Our goal is to build robust optimization problems for making decisions based on complex data from the past.

BIG-bench Machine Learning Decision Making +1

Generalization Bounds for Learning with Linear, Polygonal, Quadratic and Conic Side Knowledge

1 code implementation 30 May 2014 Theja Tulabandhula, Cynthia Rudin

In this paper, we consider a supervised learning setting where side knowledge is provided about the labels of unlabeled examples.

Generalization Bounds

Methods and Models for Interpretable Linear Classification

no code implementations 16 May 2014 Berk Ustun, Cynthia Rudin

We present an integer programming framework to build accurate and interpretable discrete linear classification models.

Classification General Classification

Box Drawings for Learning with Imbalanced Data

1 code implementation 13 Mar 2014 Siong Thye Goh, Cynthia Rudin

The vast majority of real world classification problems are imbalanced, meaning there are far fewer data from the class of interest (the positive class) than from other classes.

General Classification imbalanced classification

A Statistical Learning Theory Framework for Supervised Pattern Discovery

no code implementations 2 Jul 2013 Jonathan H. Huggins, Cynthia Rudin

This paper formalizes a latent variable inference problem we call supervised pattern discovery, the goal of which is to find sets of observations that belong to a single "pattern."

Learning Theory

Supersparse Linear Integer Models for Interpretable Classification

no code implementations 27 Jun 2013 Berk Ustun, Stefano Tracà, Cynthia Rudin

We illustrate the practical and interpretable nature of SLIM scoring systems through applications in medicine and criminology, and show that they are accurate and sparse in comparison to state-of-the-art classification models using numerical experiments.

Classification General Classification

Supersparse Linear Integer Models for Predictive Scoring Systems

no code implementations 25 Jun 2013 Berk Ustun, Stefano Tracà, Cynthia Rudin

We introduce Supersparse Linear Integer Models (SLIM) as a tool to create scoring systems for binary classification.

Binary Classification Classification +1

Learning About Meetings

no code implementations 8 Jun 2013 Been Kim, Cynthia Rudin

Most people participate in meetings almost every day, multiple times a day.

On Combining Machine Learning with Decision Making

no code implementations 27 Apr 2011 Theja Tulabandhula, Cynthia Rudin

We present a new application and covering number bound for the framework of "Machine Learning with Operational Costs (MLOC)," which is an exploratory form of decision theory.

BIG-bench Machine Learning Decision Making +1
