Search Results for author: James M. Robins

Found 11 papers, 2 papers with code

Assumptions and Bounds in the Instrumental Variable Model

no code implementations24 Jan 2024 Thomas S. Richardson, James M. Robins

In this note we give proofs for results relating to the Instrumental Variable (IV) model with binary response $Y$ and binary treatment $X$, but with an instrument $Z$ with $K$ states.
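
For context, bounds in this kind of IV model are typically obtained from a linear program over response types (Balke-Pearl style). The following is a minimal sketch of that standard formulation for a K-state instrument, not code from the paper; the input probabilities are hypothetical and constructed to be consistent with the IV model.

# Sketch: bounds on the average treatment effect E[Y(1) - Y(0)] in the
# binary-X, binary-Y IV model with a K-state instrument Z, via the standard
# linear program over response types.  Not the authors' derivations.
import itertools
import numpy as np
from scipy.optimize import linprog

def iv_ate_bounds(p_xy_given_z):
    """p_xy_given_z[z][(x, y)] = P(X=x, Y=y | Z=z) for z = 0, ..., K-1."""
    K = len(p_xy_given_z)
    x_types = list(itertools.product([0, 1], repeat=K))  # X as a function of Z
    y_types = list(itertools.product([0, 1], repeat=2))  # Y as a function of X
    types = list(itertools.product(x_types, y_types))

    # Observed P(x, y | z) equals the total mass of response types consistent
    # with (z, x, y) under randomization and exclusion; LP is infeasible if the
    # inputs violate the IV model.
    A_eq, b_eq = [], []
    for z in range(K):
        for x in (0, 1):
            for y in (0, 1):
                A_eq.append([1.0 if (tx[z] == x and ty[x] == y) else 0.0
                             for tx, ty in types])
                b_eq.append(p_xy_given_z[z][(x, y)])

    # Objective: ATE = sum over response types of q(type) * (Y(1) - Y(0)).
    c = np.array([ty[1] - ty[0] for _, ty in types], dtype=float)
    lower = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
    upper = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
    return lower, upper

# Hypothetical distribution with a 3-state instrument, feasible by construction.
p = [{(0, 0): 0.5, (0, 1): 0.5, (1, 0): 0.0, (1, 1): 0.0},
     {(0, 0): 0.0, (0, 1): 0.5, (1, 0): 0.0, (1, 1): 0.5},
     {(0, 0): 0.0, (0, 1): 0.2, (1, 0): 0.0, (1, 1): 0.8}]
print(iv_ate_bounds(p))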

Assumption-lean falsification tests of rate double-robustness of double-machine-learning estimators

no code implementations18 Jun 2023 Lin Liu, Rajarshi Mukherjee, James M. Robins

In many instances, an analyst justifies her claim by imposing complexity-reducing assumptions on $b$ and $p$ to ensure "rate double-robustness".
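
For orientation, "rate double-robustness" is commonly formalized as the requirement that the product of the nuisance estimation errors be of smaller order than $n^{-1/2}$ (a standard formulation from the double machine learning literature, stated here for context rather than quoted from the paper):

\| \hat{b} - b \|_{2} \cdot \| \hat{p} - p \|_{2} = o_{\mathbb{P}}(n^{-1/2}),

which holds, for example, when each nuisance function is estimated at rate $o_{\mathbb{P}}(n^{-1/4})$.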



Deep Learning Methods for Proximal Inference via Maximum Moment Restriction

1 code implementation19 May 2022 Benjamin Kompa, David R. Bellamy, Thomas Kolokotrones, James M. Robins, Andrew L. Beam

In this work, we introduce a flexible and scalable method based on a deep neural network to estimate causal effects in the presence of unmeasured confounding using proximal inference.
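
As a rough illustration of the maximum-moment-restriction (MMR) idea in proximal inference (a minimal PyTorch-style sketch under standard notation, not the authors' released code): an outcome bridge function h(W, A, X) is parameterized by a neural network, and the conditional moment restriction E[Y - h(W, A, X) | Z, A, X] = 0 is enforced through a kernel-weighted V-statistic loss.

# Minimal sketch (not the authors' implementation): learn a bridge function by
# minimizing a kernelized MMR loss, then average it over treatment levels.
import torch
import torch.nn as nn

class Bridge(nn.Module):
    def __init__(self, dim_in):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_in, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, w, a, x):
        return self.net(torch.cat([w, a, x], dim=1)).squeeze(-1)

def rbf_kernel(v, lengthscale=1.0):
    return torch.exp(-torch.cdist(v, v) ** 2 / (2 * lengthscale ** 2))

def mmr_loss(h, y, w, a, x, z):
    # Residuals of E[Y - h(W, A, X) | Z, A, X] = 0, combined into a
    # V-statistic with an RBF kernel on the conditioning variables.
    u = y - h(w, a, x)
    K = rbf_kernel(torch.cat([z, a, x], dim=1))
    return (u @ K @ u) / (y.shape[0] ** 2)

# Training sketch: y, w, a, x, z are tensors holding the outcome, the
# outcome-inducing proxy, treatment, covariates, and treatment-inducing proxy.
# h = Bridge(dim_in=w.shape[1] + a.shape[1] + x.shape[1])
# opt = torch.optim.Adam(h.parameters(), lr=1e-3)
# for _ in range(2000):
#     opt.zero_grad(); loss = mmr_loss(h, y, w, a, x, z); loss.backward(); opt.step()
# ate = (h(w, torch.ones_like(a), x) - h(w, torch.zeros_like(a), x)).mean()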

Multivariate Counterfactual Systems And Causal Graphical Models

no code implementations13 Aug 2020 Ilya Shpitser, Thomas S. Richardson, James M. Robins

Among Judea Pearl's many contributions to Causality and Statistics, the graphical d-separation criterion, the do-calculus and the mediation formula stand out.

Methodology 62P10

Rejoinder: On nearly assumption-free tests of nominal confidence interval coverage for causal parameters estimated by machine learning

no code implementations7 Aug 2020 Lin Liu, Rajarshi Mukherjee, James M. Robins

This is the rejoinder to the discussion by Kennedy, Balakrishnan and Wasserman on the paper "On nearly assumption-free tests of nominal confidence interval coverage for causal parameters estimated by machine learning" published in Statistical Science.

BIG-bench Machine Learning

Identification In Missing Data Models Represented By Directed Acyclic Graphs

no code implementations29 Jun 2019 Rohit Bhattacharya, Razieh Nabi, Ilya Shpitser, James M. Robins

Missing data is a pervasive problem in data analyses, resulting in datasets that contain censored realizations of a target distribution.

Causal Inference

On nearly assumption-free tests of nominal confidence interval coverage for causal parameters estimated by machine learning

no code implementations8 Apr 2019 Lin Liu, Rajarshi Mukherjee, James M. Robins

In this paper, we introduce essentially assumption-free tests that (i) can falsify the null hypothesis that the bias of $\hat{\psi}_{1}$ is of smaller order than its standard error, (ii) can provide an upper confidence bound on the true coverage of the Wald interval, and (iii) are valid under the null under no smoothness/sparsity assumptions on the nuisance parameters.
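
Written out, with $\widehat{\mathsf{se}}[\hat{\psi}_{1}]$ the estimated standard error, the null hypothesis in (i) can be stated as (standard notation, paraphrasing the abstract):

H_{0}:\ \mathrm{Bias}(\hat{\psi}_{1}) = o\big(\widehat{\mathsf{se}}[\hat{\psi}_{1}]\big),

under which the nominal $(1-\alpha)$ Wald interval $\hat{\psi}_{1} \pm z_{\alpha/2}\,\widehat{\mathsf{se}}[\hat{\psi}_{1}]$ attains its advertised asymptotic coverage.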

BIG-bench Machine Learning

A unifying approach for doubly-robust $\ell_1$ regularized estimation of causal contrasts

no code implementations7 Apr 2019 Ezequiel Smucler, Andrea Rotnitzky, James M. Robins

We focus on a class of parameters whose influence function depends on two infinite-dimensional nuisance functions, and for which the bias of the one-step estimator of the parameter of interest is the expectation of the product of the estimation errors of the two nuisance functions.
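
As a concrete instance of this product ("mixed") bias structure, take the mean counterfactual outcome $\psi = E[Y(1)]$ with outcome regression $b$ and propensity score $p$: under sample splitting, the one-step (AIPW) estimator built from estimates $\hat{b}, \hat{p}$ has bias, conditional on the nuisance estimates (a standard calculation, not a quotation from the paper),

E[\hat{\psi} - \psi] = E\!\left[ \frac{\big(\hat{p}(X) - p(X)\big)\,\big(\hat{b}(X) - b(X)\big)}{\hat{p}(X)} \right],

so the bias vanishes if either nuisance is estimated exactly and is small whenever the product of the two estimation errors is small.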

Causal Inference, Econometrics +1

Nonparametric von Mises Estimators for Entropies, Divergences and Mutual Informations

no code implementations NeurIPS 2015 Kirthevasan Kandasamy, Akshay Krishnamurthy, Barnabas Poczos, Larry Wasserman, James M. Robins

We propose and analyse estimators for statistical functionals of one or more distributions under nonparametric assumptions. Our estimators are derived from the von Mises expansion and are based on the theory of influence functions, which appear in the semiparametric statistics literature. We show that estimators based either on data-splitting or a leave-one-out technique enjoy fast rates of convergence and other favorable theoretical properties. We apply this framework to derive estimators for several popular information-theoretic quantities, and via empirical evaluation, show the advantage of this approach over existing estimators.
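
As a toy illustration of the influence-function and data-splitting recipe (a minimal sketch, not the estimators analysed in the paper): for differential entropy $H(p) = -\int p \log p$, the first-order influence function is $-\log p(x) - H(p)$, so the one-step estimator with data splitting reduces to averaging $-\log \hat{p}$ over a held-out half, with $\hat{p}$ fit on the other half.

# Toy sketch of a data-splitting, influence-function-corrected (von Mises /
# one-step) estimator of differential entropy H(p) = -E[log p(X)].
# A kernel density estimate stands in for the nuisance density.
import numpy as np
from scipy.stats import gaussian_kde

def entropy_one_step(x, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    half = len(x) // 2
    fit, evaluate = x[idx[:half]], x[idx[half:]]
    p_hat = gaussian_kde(fit)               # nuisance density from one half
    # Plug-in plus first-order correction collapses to the held-out log-score:
    # T(p_hat) + mean(-log p_hat(X_i) - T(p_hat)) = mean(-log p_hat(X_i)).
    return float(np.mean(-np.log(p_hat(evaluate))))

# Hypothetical check against the closed form for a standard normal,
# H = 0.5 * log(2 * pi * e), approximately 1.4189.
x = np.random.default_rng(1).standard_normal(5000)
print(entropy_one_step(x))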

Sparse Nested Markov models with Log-linear Parameters

no code implementations26 Sep 2013 Ilya Shpitser, Robin J. Evans, Thomas S. Richardson, James M. Robins

To make modeling and inference with nested Markov models practical, it is necessary to limit the number of parameters in the model, while still correctly capturing the constraints in the marginal of a DAG model.

Causal Inference
