Search Results for author: Abhishek Chakrabortty

Found 9 papers, 2 papers with code

The Decaying Missing-at-Random Framework: Doubly Robust Causal Inference with Partially Labeled Data

no code implementations • 22 May 2023 • Yuqian Zhang, Abhishek Chakrabortty, Jelena Bradic

Notably, we relax the need for a positivity condition, commonly required in the missing data literature, and allow uniform decay of labeling propensity scores with sample size, accommodating faster growth of unlabeled data.
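For context: the standard positivity (overlap) assumption requires the labeling propensity to stay bounded away from zero, $\pi(x) = \Pr(R = 1 \mid X = x) \ge c > 0$ for all $x$, whereas a decaying-MAR framework allows $\pi_n(x)$ to shrink uniformly to zero as the sample size grows, so the unlabeled sample can be of much larger order than the labeled one (schematic notation mine, not necessarily the paper's).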

Causal Inference, Partially Labeled Datasets, +1

Semi-Supervised Quantile Estimation: Robust and Efficient Inference in High Dimensional Settings

no code implementations • 25 Jan 2022 • Abhishek Chakrabortty, Guorong Dai, Raymond J. Carroll

We propose a family of semi-supervised estimators for the response quantile(s) based on the two data sets, to improve the estimation accuracy compared to the supervised estimator, i.e., the sample quantile computed from the labeled data.
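A minimal numerical sketch contrasting the supervised benchmark with a naive impute-then-requantile semi-supervised estimate (the function below and its linear working model are illustrative assumptions of mine, not the paper's estimator family):

    import numpy as np

    def ss_quantile_sketch(y_lab, x_lab, x_unlab, tau=0.5):
        # Supervised benchmark: the sample quantile from the labeled data alone.
        supervised = np.quantile(y_lab, tau)

        # Working imputation model: ordinary least squares with an intercept.
        design_lab = np.column_stack([np.ones(len(x_lab)), x_lab])
        beta = np.linalg.lstsq(design_lab, y_lab, rcond=None)[0]
        design_unlab = np.column_stack([np.ones(len(x_unlab)), x_unlab])
        y_imputed = design_unlab @ beta

        # Naive semi-supervised estimate: quantile of labeled + imputed responses.
        semi_supervised = np.quantile(np.concatenate([y_lab, y_imputed]), tau)
        return supervised, semi_supervised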

Dimensionality Reduction, Imputation

A General Framework for Treatment Effect Estimation in Semi-Supervised and High Dimensional Settings

no code implementations • 3 Jan 2022 • Abhishek Chakrabortty, Guorong Dai, Eric Tchetgen Tchetgen

Specifically, we consider two such estimands: (a) the average treatment effect and (b) the quantile treatment effect, as prototype cases, in an SS setting, characterized by two available data sets: (i) a labeled data set of size $n$, providing observations for a response and a set of high dimensional covariates, as well as a binary treatment indicator; and (ii) an unlabeled data set of size $N$, much larger than $n$, but without the response observed.
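For reference, the two prototype estimands are standard: the average treatment effect $E[Y(1) - Y(0)]$ and, for a level $\tau \in (0, 1)$, the quantile treatment effect $F_{Y(1)}^{-1}(\tau) - F_{Y(0)}^{-1}(\tau)$, where $Y(1)$ and $Y(0)$ are the potential outcomes under treatment and control and $F_{Y(a)}$ their distribution functions (notation mine, stated for orientation).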

Causal Inference

Double Robust Semi-Supervised Inference for the Mean: Selection Bias under MAR Labeling with Decaying Overlap

1 code implementation • 14 Apr 2021 • Yuqian Zhang, Abhishek Chakrabortty, Jelena Bradic

Apart from a moderate-sized labeled data set, L, the SS setting is characterized by an additional, much larger, unlabeled data set, U.
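A minimal sketch of the familiar doubly robust (augmented IPW) mean estimate combining L and U under MAR labeling; this is an illustration only, with hypothetical helper functions, and the paper's estimator and its decaying-overlap theory go well beyond it:

    import numpy as np

    def dr_mean_sketch(y_lab, x_lab, x_unlab, pi_hat, m_hat):
        # pi_hat: x -> estimated labeling propensity P(R = 1 | X = x)
        # m_hat:  x -> estimated outcome regression E[Y | X = x]
        x_all = np.concatenate([x_lab, x_unlab])
        # Outcome-regression part, averaged over ALL covariates (L and U).
        or_part = np.mean(m_hat(x_all))
        # Inverse-propensity-weighted residual correction from the labeled data,
        # normalized by the total sample size n + N.
        ipw_part = np.sum((y_lab - m_hat(x_lab)) / pi_hat(x_lab)) / len(x_all)
        return or_part + ipw_part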

Causal Inference, Selection bias

High Dimensional M-Estimation with Missing Outcomes: A Semi-Parametric Framework

no code implementations • 26 Nov 2019 • Abhishek Chakrabortty, Jiarui Lu, T. Tony Cai, Hongzhe Li

Under mild tail assumptions and arbitrarily chosen (working) models for the propensity score (PS) and the outcome regression (OR) estimators, satisfying only some high-level conditions, we establish finite sample performance bounds for the DDR estimator showing its (optimal) $L_2$ error rate to be $\sqrt{s (\log d)/ n}$ when both models are correct, and its consistency and DR properties when only one of them is correct.
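In schematic form, the quoted guarantee says that, with high probability, $\|\widehat{\theta}_{DDR} - \theta_0\|_2 \lesssim \sqrt{s (\log d)/ n}$ when both the PS and OR working models are correct, with $s$ the relevant sparsity, $d$ the covariate dimension and $n$ the labeled sample size, while correctness of only one model still yields consistency and the DR property (a paraphrase of the abstract, not an exact statement of the theorem).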

Causal Inference, regression, +1

Inference for Individual Mediation Effects and Interventional Effects in Sparse High-Dimensional Causal Graphical Models

1 code implementation • 27 Sep 2018 • Abhishek Chakrabortty, Preetam Nandy, Hongzhe Li

In particular, we assume that the causal structure of the treatment, the confounders, the potential mediators and the response is a (possibly unknown) directed acyclic graph (DAG).
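As a toy illustration of that assumption, the following hypothetical DAG places a confounder C upstream of a treatment T, two candidate mediators M1 and M2, and a response Y (variable names and edges are mine, purely for illustration):

    import networkx as nx

    # Hypothetical causal DAG: confounder -> everything downstream,
    # treatment -> mediators and response, mediators -> response.
    dag = nx.DiGraph()
    dag.add_edges_from([
        ("C", "T"), ("C", "M1"), ("C", "Y"),
        ("T", "M1"), ("T", "M2"), ("T", "Y"),
        ("M1", "M2"), ("M1", "Y"), ("M2", "Y"),
    ])
    assert nx.is_directed_acyclic_graph(dag)
    print(list(nx.topological_sort(dag)))  # a causal ordering, e.g. ['C', 'T', 'M1', 'M2', 'Y']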

Moving Beyond Sub-Gaussianity in High-Dimensional Statistics: Applications in Covariance Estimation and Linear Regression

no code implementations • 8 Apr 2018 • Arun Kumar Kuchibhotla, Abhishek Chakrabortty

The third example concerns the restricted eigenvalue condition, required in HD linear regression, which we verify for all sub-Weibull random vectors through a unified analysis, and also prove a more general result related to restricted strong convexity in the process.
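For orientation, a random variable $X$ is commonly called sub-Weibull of order $\alpha > 0$ when its Orlicz norm $\|X\|_{\psi_\alpha} := \inf\{\eta > 0 : E\exp(|X|^{\alpha}/\eta^{\alpha}) \le 2\}$ is finite; $\alpha = 2$ and $\alpha = 1$ recover the sub-Gaussian and sub-exponential cases, and $\alpha < 1$ captures the heavier-tailed regimes the paper targets (a standard definition, stated here for context rather than quoted from the paper).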

regression

Surrogate Aided Unsupervised Recovery of Sparse Signals in Single Index Models for Binary Outcomes

no code implementations • 18 Jan 2017 • Abhishek Chakrabortty, Matey Neykov, Raymond Carroll, Tianxi Cai

We consider the recovery of regression coefficients, denoted by $\boldsymbol{\beta}_0$, for a single index model (SIM) relating a binary outcome $Y$ to a set of possibly high dimensional covariates $\boldsymbol{X}$, based on a large but 'unlabeled' dataset $\mathcal{U}$, with $Y$ never observed.
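Schematically, a single index model for a binary outcome posits $\Pr(Y = 1 \mid \boldsymbol{X}) = g(\boldsymbol{X}^{\top}\boldsymbol{\beta}_0)$ for an unknown link $g$, so $\boldsymbol{\beta}_0$ is identifiable only up to scale; this generic form is given for orientation, and the paper's exact specification and surrogate-aided recovery strategy may differ.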

Efficient and Adaptive Linear Regression in Semi-Supervised Settings

no code implementations • 17 Jan 2017 • Abhishek Chakrabortty, Tianxi Cai

It is often of interest to investigate if and when the unlabeled data can be exploited to improve estimation of the regression parameter in the adopted linear model.
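A minimal impute-then-refit sketch of how unlabeled covariates might be folded into a linear fit (the function is hypothetical and is not the paper's efficient and adaptive estimator; with a purely linear imputation the refit changes nothing, which is exactly why a more flexible imputation step matters):

    import numpy as np

    def ss_ols_sketch(y_lab, x_lab, x_unlab):
        def design(x):
            return np.column_stack([np.ones(len(x)), x])

        # Step 1: supervised OLS on the labeled data (the benchmark).
        beta_sup = np.linalg.lstsq(design(x_lab), y_lab, rcond=None)[0]

        # Step 2: impute responses on the unlabeled covariates and refit on the pool.
        # (A flexible, possibly nonparametric imputation is what would buy efficiency.)
        y_imp = design(x_unlab) @ beta_sup
        x_pool = np.concatenate([x_lab, x_unlab])
        y_pool = np.concatenate([y_lab, y_imp])
        beta_ss = np.linalg.lstsq(design(x_pool), y_pool, rcond=None)[0]
        return beta_sup, beta_ss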

Imputation, regression
