no code implementations • 8 Apr 2021 • Carlos Fernández-Loría, Foster Provost
Recently, we have seen an acceleration of research related to causal decision making (CDM) and causal effect estimation (CEE) using machine-learned models.
1 code implementation • 24 Apr 2020 • Carlos Fernández-Loría, Foster Provost, Jesse Anderton, Benjamin Carterette, Praveen Chandar
This study presents a systematic comparison of methods for individual treatment assignment, a general problem that arises in many applications and has received significant attention from economists, computer scientists, and social scientists.
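The methods compared in such studies generally assign treatment to an individual when the estimated individual-level effect of treatment is positive. As a minimal illustration (not the study's own implementation), here is a two-model ("T-learner") sketch on synthetic data: fit one outcome model per treatment arm, estimate each individual's uplift as the difference in predictions, and treat only where the estimated effect is positive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: outcome depends on a feature x; treatment helps only when x > 0.
n = 2000
x = rng.normal(size=(n, 1))
t = rng.integers(0, 2, size=n)               # observed treatment assignment
y = 0.5 * x[:, 0] + t * (2.0 * x[:, 0]) + rng.normal(scale=0.1, size=n)

# T-learner: fit one linear model per treatment arm via least squares.
X = np.hstack([x, np.ones((n, 1))])          # add intercept column
w1, *_ = np.linalg.lstsq(X[t == 1], y[t == 1], rcond=None)
w0, *_ = np.linalg.lstsq(X[t == 0], y[t == 0], rcond=None)

# Assign treatment iff the predicted individual effect (uplift) is positive.
uplift = X @ w1 - X @ w0
assign = uplift > 0

print(f"treated fraction: {assign.mean():.2f}")
```

Since the true effect here is positive exactly when x > 0, the policy treats roughly half the population.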
no code implementations • 21 Jan 2020 • Carlos Fernández-Loría, Foster Provost, Xintian Han
We examine counterfactual explanations for explaining the decisions made by model-based AI systems.
3 code implementations • 4 Dec 2019 • Yanou Ramon, David Martens, Foster Provost, Theodoros Evgeniou
This study aligns the recently proposed Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) with the notion of counterfactual explanations, and empirically benchmarks their effectiveness and efficiency against SEDC using a collection of 13 data sets.
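The counterfactual notion at play here is evidence removal: which features must be taken away before the model's predicted class flips? A minimal SEDC-style sketch (an illustration on a toy linear model, not the authors' implementation) greedily zeroes out the feature contributing the most positive evidence until the prediction changes; the removed set is the explanation.

```python
import numpy as np

# Toy linear "model": predicts the positive class iff the weighted sum exceeds 0.
w = np.array([2.0, -1.0, 0.5, 1.5])
predict = lambda v: float(v @ w) > 0

def counterfactual_removal(x):
    """Greedily zero out the feature with the largest positive contribution
    until the predicted class flips; return the removed feature indices."""
    x = x.astype(float).copy()
    removed = []
    while predict(x):
        contrib = x * w                      # per-feature contribution to the score
        i = int(np.argmax(contrib))          # strongest remaining positive evidence
        if contrib[i] <= 0:                  # no positive evidence left to remove
            return None
        x[i] = 0.0
        removed.append(i)
    return removed

print(counterfactual_removal(np.array([1.0, 0.0, 1.0, 1.0])))  # → [0, 3, 2]
```

For the example input, removing features 0, 3, and 2 (in that order) drives the score from 4.0 down to 0.0, flipping the prediction.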
no code implementations • 21 Jul 2016 • Julie Moeyersoms, Brian d'Alessandro, Foster Provost, David Martens
We evaluate these alternatives in terms of explanation "bang for the buck," i.e., how many examples' inferences are explained for a given number of features listed.
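To make the metric concrete, here is an illustrative (hypothetical, not the paper's exact formulation) computation: given per-example explanations expressed as feature sets, count how many examples are fully explained by the k most frequently used features.

```python
from collections import Counter

# Hypothetical per-example explanations: for each example, the set of
# features whose removal flips its predicted class.
explanations = [
    {"a"}, {"a", "b"}, {"b"}, {"a"}, {"c"}, {"b", "c"},
]

def bang_for_buck(explanations, k):
    """Number of examples fully explained by the k most common features."""
    freq = Counter(f for expl in explanations for f in expl)
    top = {f for f, _ in freq.most_common(k)}
    return sum(expl <= top for expl in explanations)

for k in (1, 2, 3):
    print(k, bang_for_buck(explanations, k))
```

In this toy setup, listing two features ("a" and "b") already explains four of the six examples; all three features explain every example.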
no code implementations • 26 Jun 2016 • Daizhuo Chen, Samuel P. Fraiberger, Robert Moakler, Foster Provost
Recent studies have shown that information disclosed on social network sites (such as Facebook) can be used to predict personal characteristics with surprisingly high accuracy.