no code implementations • NeurIPS 2023 • Fulton Wang, Julius Adebayo, Sarah Tan, Diego Garcia-Olano, Narine Kokhlikyan
We present a method for identifying groups of test examples -- slices -- on which a model under-performs, a task now known as slice discovery.
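The task itself can be illustrated with a minimal sketch (not the paper's method): given a model's per-example errors and a candidate feature, flag the feature values whose error rate is far above the overall rate.

```python
import numpy as np

# Hypothetical data: a categorical feature value per test example and
# whether the model misclassified that example (1 = error).
features = np.array(["a", "a", "b", "b", "b", "c", "c", "c"])
errors = np.array([0, 0, 1, 1, 0, 0, 0, 0])

overall = errors.mean()  # overall error rate: 2/8 = 0.25
slices = {v: errors[features == v].mean() for v in np.unique(features)}

# Flag slices whose error rate is at least double the overall rate;
# slice "b" has error rate 2/3 and is the only one flagged.
underperforming = [v for v, e in slices.items() if e > 2 * overall]
```

The threshold and the single-feature slicing are illustrative choices; real slice-discovery methods search over richer slice definitions.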
no code implementations • 28 Mar 2023 • Xuhai Xu, Mengjie Yu, Tanya R. Jonker, Kashyap Todi, Feiyu Lu, Xun Qian, João Marcelo Evangelista Belo, Tianyi Wang, Michelle Li, Aran Mun, Te-Yen Wu, Junxiao Shen, Ting Zhang, Narine Kokhlikyan, Fulton Wang, Paul Sorenson, Sophie Kahyun Kim, Hrvoje Benko
The framework was based on a multi-disciplinary literature review of XAI and HCI research, a large-scale survey probing 500+ end-users' preferences for AR-based explanations, and three workshops in which 12 experts contributed insights about XAI design in AR.
no code implementations • 26 Dec 2022 • Narine Kokhlikyan, Bilal Alsallakh, Fulton Wang, Vivek Miglani, Oliver Aobo Yang, David Adkins
We propose a fairness-aware learning framework that mitigates intersectional subgroup bias associated with protected attributes.
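As a rough illustration of the quantity such a framework targets (names and data are made up, not from the paper), intersectional subgroup bias can be measured as the accuracy gap across subgroups defined by combinations of protected attributes:

```python
import numpy as np

# Hypothetical protected attributes and per-example correctness.
sex = np.array(["F", "F", "M", "M", "F", "M", "F", "M"])
age = np.array(["young", "old", "young", "old", "old", "young", "young", "old"])
correct = np.array([1, 0, 1, 1, 0, 1, 1, 1])

# Accuracy per intersectional subgroup (sex x age).
groups = {}
for s in np.unique(sex):
    for a in np.unique(age):
        mask = (sex == s) & (age == a)
        if mask.any():
            groups[(s, a)] = correct[mask].mean()

# The bias a fairness-aware learner would try to shrink: the gap between
# the best- and worst-served intersectional subgroups.
gap = max(groups.values()) - min(groups.values())
```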
no code implementations • 29 Nov 2017 • Fulton Wang, Cynthia Rudin
In the covariate shift learning scenario, the training and test covariate distributions differ, so a predictor's average loss over the training distribution can differ from its average loss over the test distribution.
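A standard remedy for this mismatch (importance weighting, shown here as a generic 1-D illustration rather than the paper's method) reweights training losses by the density ratio p_test(x) / p_train(x), so the weighted training average estimates the test loss:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Training covariates from N(0, 1); test covariates would come from N(1, 1).
x_train = rng.normal(0.0, 1.0, size=10_000)

# Squared loss of the constant predictor f(x) = 0 against target y = x.
loss = x_train ** 2

# Density ratio p_test / p_train, known here because both are Gaussian.
weights = gaussian_pdf(x_train, 1.0, 1.0) / gaussian_pdf(x_train, 0.0, 1.0)

unweighted = loss.mean()            # estimates the training loss (~1)
weighted = np.mean(weights * loss)  # estimates the test loss (~2)
```

With these Gaussians the true training and test losses are 1 and 2 respectively, so the gap the abstract describes is visible directly in the two estimates.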
no code implementations • 18 Oct 2015 • Fulton Wang, Cynthia Rudin
A causal falling rule list (CFRL) is a sequence of if-then rules that specifies heterogeneous treatment effects, where (i) the order of rules determines the treatment effect subgroup a subject belongs to, and (ii) the treatment effect decreases monotonically down the list.
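The two defining properties can be sketched directly; the conditions and effect sizes below are illustrative, not taken from the paper:

```python
# Hypothetical causal falling rule list: the first matching rule assigns a
# subject's treatment-effect subgroup, and the estimated treatment effects
# decrease monotonically down the list.
cfrl = [
    ("age < 50 and severity == 'high'",
     lambda x: x["age"] < 50 and x["severity"] == "high", 0.30),
    ("severity == 'high'",
     lambda x: x["severity"] == "high", 0.15),
    ("age < 50",
     lambda x: x["age"] < 50, 0.05),
]
default_effect = 0.0  # final "otherwise" subgroup

# Sanity check of the "falling" property (ii): effects are non-increasing.
effects = [e for _, _, e in cfrl] + [default_effect]
assert all(a >= b for a, b in zip(effects, effects[1:]))

def treatment_effect(x):
    """Property (i): rule order determines the subgroup a subject falls in."""
    for name, cond, effect in cfrl:
        if cond(x):
            return name, effect
    return "otherwise", default_effect
```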
1 code implementation • 27 Apr 2015 • Fulton Wang, Tyler H. McCormick, Cynthia Rudin, John Gore
We propose a Bayesian model that predicts recovery curves based on information available before the disruptive event.
no code implementations • 21 Nov 2014 • Fulton Wang, Cynthia Rudin
Falling rule lists are classification models consisting of an ordered list of if-then rules, where (i) the order of the rules determines which rule classifies each example, and (ii) the estimated probability of success decreases monotonically down the list.
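Prediction with such a list is first-match lookup; a minimal sketch, with illustrative conditions and probabilities rather than ones from the paper:

```python
# Hypothetical falling rule list: rules are checked in order, the first
# matching rule assigns the predicted probability, and probabilities
# decrease monotonically down the list.
rules = [
    (lambda x: x["irregular_shape"] and x["age"] >= 60, 0.85),
    (lambda x: x["irregular_shape"], 0.45),
    (lambda x: x["age"] >= 60, 0.25),
]
default_prob = 0.05  # final "else" rule

# Sanity check of the "falling" property (ii).
probs = [p for _, p in rules] + [default_prob]
assert all(a >= b for a, b in zip(probs, probs[1:]))

def predict(x):
    """Property (i): the first rule whose condition holds classifies x."""
    for condition, prob in rules:
        if condition(x):
            return prob
    return default_prob

print(predict({"irregular_shape": True, "age": 72}))  # first rule fires -> 0.85
```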