1 code implementation • 6 Oct 2022 • Kristin Blesch, David S. Watson, Marvin N. Wright
The CPI enables conditional feature importance (FI) measurement that controls for arbitrary feature dependencies by sampling valid knockoffs, i.e., synthetic data with statistical properties similar to those of the data being analyzed.
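The knockoff-based CPI test can be illustrated with a minimal sketch: compare a model's per-sample loss when a feature is intact versus replaced by a knockoff copy, then run a one-sided paired test on the loss differences. This toy example uses independent Gaussian features, so an independent resample is a (trivially) valid knockoff; real applications need a proper knockoff sampler that preserves the joint feature distribution. All names here are illustrative, not from the paper's code.

```python
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy data: y depends on feature 0; feature 1 is pure noise.
n = 500
X = rng.normal(size=(n, 2))
y = X[:, 0] + 0.1 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

def cpi_for_feature(j):
    # Per-sample squared-error loss with the original feature.
    loss_orig = (model.predict(X_te) - y_te) ** 2
    # Hypothetical knockoff: features are independent in this toy,
    # so an independent Gaussian resample is a valid knockoff for j.
    X_ko = X_te.copy()
    X_ko[:, j] = rng.normal(size=len(X_te))
    loss_ko = (model.predict(X_ko) - y_te) ** 2
    delta = loss_ko - loss_orig  # per-sample CPI contributions
    t, p = stats.ttest_1samp(delta, 0, alternative="greater")
    return delta.mean(), p

cpi0, p0 = cpi_for_feature(0)  # informative feature: large CPI, small p
cpi1, p1 = cpi_for_feature(1)  # noise feature: CPI near zero
```

Replacing the informative feature inflates the loss, so its CPI is large and significant, while the noise feature's CPI hovers around zero.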
1 code implementation • 19 May 2022 • David S. Watson, Kristin Blesch, Jan Kapar, Marvin N. Wright
We propose methods for density estimation and data synthesis using a novel form of unsupervised random forests.
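One common route to an unsupervised random forest, used as a building block in this line of work, is to recast density learning as classification: train a forest to distinguish real data from synthetic data whose columns have been independently permuted (preserving marginals but destroying dependencies). The sketch below shows only this real-vs-synthetic step, not the paper's full estimation procedure, and all names are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Real data with strongly correlated features.
n = 1000
x0 = rng.normal(size=n)
X_real = np.column_stack([x0, x0 + 0.1 * rng.normal(size=n)])

# Synthetic data: permute each column independently, which keeps
# the marginal distributions but breaks the dependence structure.
X_syn = np.column_stack([rng.permutation(X_real[:, j]) for j in range(2)])

X = np.vstack([X_real, X_syn])
y = np.r_[np.ones(n), np.zeros(n)]  # 1 = real, 0 = synthetic

# Out-of-bag accuracy measures how much dependence the forest captured;
# 0.5 would mean real and synthetic are indistinguishable.
clf = RandomForestClassifier(oob_score=True, random_state=0).fit(X, y)
acc = clf.oob_score_
```

Because the two features are nearly collinear, the forest separates real from column-permuted data well above chance; its leaves then partition the space into regions useful for density estimation and sampling.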
1 code implementation • 11 May 2022 • David S. Watson, Ricardo Silva
Under a structural assumption called the $\textit{confounder blanket principle}$, which we argue is essential for tractable causal discovery in high dimensions, our method accommodates graphs of low or high sparsity while maintaining polynomial time complexity.
1 code implementation • 18 Jun 2021 • David S. Watson
Explaining the predictions of opaque machine learning algorithms is an important and challenging task, especially as complex models are increasingly used to assist in high-stakes decisions such as those arising in healthcare and finance.
1 code implementation • 9 Jun 2021 • Limor Gultchin, David S. Watson, Matt J. Kusner, Ricardo Silva
We examine the problem of causal response estimation for complex objects (e.g., text, images, genomics).
3 code implementations • 28 Jan 2019 • David S. Watson, Marvin N. Wright
We propose the conditional predictive impact (CPI), a consistent and unbiased estimator of the association between one or several features and a given outcome, conditional on a reduced feature set.