Search Results for author: Rajarshi Mukherjee

Found 12 papers, 2 papers with code

Assumption-lean falsification tests of rate double-robustness of double-machine-learning estimators

no code implementations18 Jun 2023 Lin Liu, Rajarshi Mukherjee, James M. Robins

In many instances, an analyst justifies her claim by imposing complexity-reducing assumptions on $b$ and $p$ to ensure "rate double-robustness".

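For readers outside the double-machine-learning literature, here is a standard schematic statement of the rate double-robustness condition on the nuisance estimates $\hat{b}$ and $\hat{p}$; the notation is assumed, not quoted from the paper.

```latex
% Rate double-robustness (schematic): if the product of the nuisance
% estimation errors is o(n^{-1/2}), the DML estimator \hat{\psi} is
% root-n consistent and its Wald interval is asymptotically valid.
\[
\|\hat{b} - b\|_{2} \cdot \|\hat{p} - p\|_{2} = o_{\mathbb{P}}\!\big(n^{-1/2}\big)
\;\Longrightarrow\;
\sqrt{n}\,\big(\hat{\psi} - \psi\big) \rightsquigarrow \mathcal{N}\big(0, \sigma^{2}\big).
\]
```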

On Undersmoothing and Sample Splitting for Estimating a Doubly Robust Functional

no code implementations30 Dec 2022 Sean McGrath, Rajarshi Mukherjee

We consider the problem of constructing minimax rate-optimal estimators for a doubly robust nonparametric functional that has witnessed applications across the causal inference and conditional independence testing literature.

Causal Inference
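As one concrete member of this functional class, the expected conditional covariance $\psi = \mathbb{E}[\operatorname{Cov}(A, Y \mid X)]$ admits a simple sample-split, first-order estimator. The sketch below is a minimal illustration under assumed notation (nuisances $b(x) = \mathbb{E}[Y \mid X = x]$ and $p(x) = \mathbb{E}[A \mid X = x]$, random-forest fits); it is not the undersmoothed estimator analyzed in the paper.

```python
# Minimal sketch (assumed notation): sample-split, first-order estimator of
# the expected conditional covariance psi = E[Cov(A, Y | X)], a canonical
# doubly robust functional. Random forests for the nuisances b(x) = E[Y|X=x]
# and p(x) = E[A|X=x] are illustrative choices.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def expected_cond_cov(X, A, Y, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(Y))
    train, est = idx[:len(Y) // 2], idx[len(Y) // 2:]
    # Fit the nuisances on one half of the sample.
    p_hat = RandomForestRegressor(random_state=0).fit(X[train], A[train])
    b_hat = RandomForestRegressor(random_state=0).fit(X[train], Y[train])
    # Average the first-order influence function on the other half:
    # (A - p(X)) * (Y - b(X)).
    resid_A = A[est] - p_hat.predict(X[est])
    resid_Y = Y[est] - b_hat.predict(X[est])
    return float(np.mean(resid_A * resid_Y))
```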

Sparse Signal Detection in Heteroscedastic Gaussian Sequence Models: Sharp Minimax Rates

no code implementations15 Nov 2022 Julien Chhor, Rajarshi Mukherjee, Subhabrata Sen

Given a heterogeneous Gaussian sequence model with unknown mean $\theta \in \mathbb R^d$ and known covariance matrix $\Sigma = \operatorname{diag}(\sigma_1^2,\dots, \sigma_d^2)$, we study the signal detection problem against sparse alternatives, for known sparsity $s$.
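A minimal sketch of the testing problem, assuming the model $Y_i = \theta_i + \sigma_i Z_i$ with $Z_i \sim \mathcal{N}(0,1)$: the Bonferroni-type max test below is a classical baseline for very sparse alternatives, not the sharp minimax procedure derived in the paper.

```python
# Minimal sketch, assuming Y = theta + diag(sigma) Z with Z ~ N(0, I_d):
# a Bonferroni-type max test against s-sparse alternatives. This is a
# classical baseline, not the paper's sharp minimax procedure.
import numpy as np
from scipy.stats import norm

def max_test(y, sigma, alpha=0.05):
    d = len(y)
    t_stat = np.max(np.abs(y) / sigma)      # standardized max statistic
    c = norm.ppf(1 - alpha / (2 * d))       # Bonferroni critical value
    return t_stat > c

rng = np.random.default_rng(1)
d, s = 10_000, 10
sigma = rng.uniform(0.5, 2.0, size=d)
theta = np.zeros(d)
theta[:s] = 1.2 * sigma[:s] * np.sqrt(2 * np.log(d))  # signal near the boundary
y = theta + sigma * rng.standard_normal(d)
print(max_test(y, sigma))                   # True with high probability
```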

Towards a Unified Framework for Uncertainty-aware Nonlinear Variable Selection with Theoretical Guarantees

no code implementations15 Apr 2022 Wenying Deng, Beau Coker, Rajarshi Mukherjee, Jeremiah Zhe Liu, Brent A. Coull

We develop a simple and unified framework for nonlinear variable selection that incorporates uncertainty in the prediction function and is compatible with a wide range of machine learning models (e.g., tree ensembles, kernel methods, neural networks, etc.).

Variable Selection
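To make the selection task concrete, here is one generic uncertainty-aware instantiation, assumed for illustration: permutation importance plus a bootstrap over refits, selecting a variable when the lower confidence bound on its importance exceeds zero. This is a stand-in, not the paper's framework or guarantees.

```python
# Illustrative stand-in (not the paper's framework): permutation importance
# with a bootstrap over refits; a variable is selected when the lower
# confidence bound on its importance exceeds zero.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

def select_variables(X, y, n_boot=20, level=0.95, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    scores = np.zeros((n_boot, d))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)    # bootstrap resample
        model = GradientBoostingRegressor(random_state=b).fit(X[idx], y[idx])
        pi = permutation_importance(model, X[idx], y[idx],
                                    n_repeats=5, random_state=b)
        scores[b] = pi.importances_mean
    lower = np.quantile(scores, 1 - level, axis=0)
    return np.where(lower > 0)[0]           # indices of selected variables
```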

On the Existence of Universal Lottery Tickets

1 code implementation ICLR 2022 Rebekka Burkholz, Nilanjana Laha, Rajarshi Mukherjee, Alkis Gotovos

The lottery ticket hypothesis conjectures the existence of sparse subnetworks of large randomly initialized deep neural networks that can be successfully trained in isolation.
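For orientation, a minimal sketch of the classical lottery-ticket procedure the hypothesis refers to (train, globally prune by magnitude, rewind survivors to initialization); `train_fn` is a hypothetical caller-supplied training loop, and the paper's universal, transferable tickets are a stronger construction not shown here.

```python
# Minimal sketch of the classical lottery-ticket procedure: train the dense
# network, prune globally by weight magnitude, rewind the survivors to their
# initial values. `train_fn` is a hypothetical caller-supplied training loop.
import copy
import torch

def find_ticket(model, train_fn, sparsity=0.9):
    init_state = copy.deepcopy(model.state_dict())  # remember initialization
    train_fn(model)                                 # train the dense network
    # Global magnitude pruning: keep the largest (1 - sparsity) weights.
    all_w = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    threshold = torch.quantile(all_w, sparsity)
    masks = [(p.detach().abs() > threshold).float() for p in model.parameters()]
    # Rewind surviving weights to initialization; zero out the rest.
    model.load_state_dict(init_state)
    with torch.no_grad():
        for p, m in zip(model.parameters(), masks):
            p.mul_(m)
    return model, masks  # the "ticket": retrain this subnetwork in isolation
```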

Cross-Cluster Weighted Forests

1 code implementation17 May 2021 Maya Ramchandran, Rajarshi Mukherjee, Giovanni Parmigiani

Adapting machine learning algorithms to better handle clustering or batch effects within training data sets is important across a wide variety of biological applications.

Clustering
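A minimal sketch of the cross-cluster ensembling idea, assuming one forest per cluster weighted by its held-out performance on the other clusters; weighting by clipped cross-cluster $R^2$ is a simplification of the stacking weights studied in the paper.

```python
# Minimal sketch (assumes at least two clusters): fit one forest per cluster,
# weight each by its clipped R^2 on the *other* clusters. This simplifies the
# stacking weights studied in the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def cross_cluster_forest(X, y, clusters):
    forests, weights = [], []
    for c in np.unique(clusters):
        mask = clusters == c
        f = RandomForestRegressor(random_state=0).fit(X[mask], y[mask])
        weights.append(max(f.score(X[~mask], y[~mask]), 0.0))  # held-out R^2
        forests.append(f)
    w = np.asarray(weights)
    w = w / w.sum() if w.sum() > 0 else np.full(len(w), 1.0 / len(w))

    def predict(X_new):
        return sum(wi * f.predict(X_new) for wi, f in zip(w, forests))

    return predict
```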

Detecting Structured Signals in Ising Models

no code implementations10 Dec 2020 Nabarun Deb, Rajarshi Mukherjee, Sumit Mukherjee, Ming Yuan

In this paper, we study the effect of dependence on detecting a class of signals in Ising models, where the signals are present in a structured way.

Probability; Statistics Theory (MSC: 62G10, 62G20, 62C20)
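To fix ideas, here is a sketch of sampling from the alternative: an Ising model with a structured external field $\mu$ (the signal) supported on a block of sites, drawn by Gibbs sampling. The nearest-neighbour grid and parameter values are illustrative assumptions, not the paper's general setting.

```python
# Minimal sketch: Gibbs sampling from a nearest-neighbour Ising model with an
# external field mu (the structured signal) on a block of sites; the detection
# problem contrasts this against mu = 0. Grid size and parameters are
# illustrative assumptions.
import numpy as np

def gibbs_ising(beta, mu, sweeps=200, seed=0):
    n = mu.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.choice([-1, 1], size=(n, n))
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                nb = sum(x[a, b]
                         for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                         if 0 <= a < n and 0 <= b < n)  # free boundary
                local = beta * nb + mu[i, j]
                # P(x_ij = +1 | rest) = 1 / (1 + exp(-2 * local))
                x[i, j] = 1 if rng.random() < 1 / (1 + np.exp(-2 * local)) else -1
    return x

mu = np.zeros((30, 30))
mu[:5, :5] = 0.5          # signal: positive field on a contiguous block
sample = gibbs_ising(beta=0.3, mu=mu)
```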

Semi-Supervised Off Policy Reinforcement Learning

no code implementations9 Dec 2020 Aaron Sonabend-W, Nilanjana Laha, Ashwin N. Ananthakrishnan, Tianxi Cai, Rajarshi Mukherjee

The surrogate variables we leverage in the modified SSL framework are predictive of the outcome but not informative about the optimal policy or value function.

Imputation, Q-Learning, +2
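A minimal sketch of how such surrogates can be used, under assumed variable names: impute the missing outcomes on the unlabelled set from covariates, actions, and surrogates, then pass the completed data to any off-policy learner. The imputation model is an illustrative choice, not the paper's estimator.

```python
# Minimal sketch under assumed variable names: outcomes Y are observed only on
# a labelled subset, while surrogates W are observed everywhere. Impute Y on
# the unlabelled set from (X, A, W), then pass the completed data to any
# off-policy learner. The imputation model is an illustrative choice.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def impute_outcomes(X_lab, A_lab, W_lab, Y_lab, X_unl, A_unl, W_unl):
    Z_lab = np.column_stack([X_lab, A_lab, W_lab])
    Z_unl = np.column_stack([X_unl, A_unl, W_unl])
    imputer = RandomForestRegressor(random_state=0).fit(Z_lab, Y_lab)
    return imputer.predict(Z_unl)   # pseudo-outcomes for Q-learning
```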

Rejoinder: On nearly assumption-free tests of nominal confidence interval coverage for causal parameters estimated by machine learning

no code implementations7 Aug 2020 Lin Liu, Rajarshi Mukherjee, James M. Robins

This is the rejoinder to the discussion by Kennedy, Balakrishnan and Wasserman on the paper "On nearly assumption-free tests of nominal confidence interval coverage for causal parameters estimated by machine learning" published in Statistical Science.


On nearly assumption-free tests of nominal confidence interval coverage for causal parameters estimated by machine learning

no code implementations8 Apr 2019 Lin Liu, Rajarshi Mukherjee, James M. Robins

In this paper, we introduce essentially assumption-free tests that (i) can falsify the null hypothesis that the bias of $\hat{\psi}_{1}$ is of smaller order than its standard error, (ii) can provide an upper confidence bound on the true coverage of the Wald interval, and (iii) are valid under the null under no smoothness/sparsity assumptions on the nuisance parameters.

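Schematically, with $\widehat{\mathrm{Bias}}$ a higher-order (second-order influence function) estimate of the bias of $\hat{\psi}_1$ (notation assumed, not quoted from the paper), test (i) takes the form:

```latex
% Falsification test (schematic): reject the claim that the bias is
% negligible relative to the standard error when the standardized
% bias estimate is large.
\[
H_{0}:\ \big|\mathrm{Bias}\big(\hat{\psi}_{1}\big)\big|
  = o\big(\mathrm{se}\big(\hat{\psi}_{1}\big)\big),
\qquad
\text{reject if }\;
\frac{\big|\widehat{\mathrm{Bias}}\big|}{\widehat{\mathrm{se}}\big(\widehat{\mathrm{Bias}}\big)}
  > z_{1-\alpha}.
\]
```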

On Estimation of $L_{r}$-Norms in Gaussian White Noise Models

no code implementations11 Oct 2017 Yanjun Han, Jiantao Jiao, Rajarshi Mukherjee

We provide a complete picture of asymptotically minimax estimation of $L_r$-norms (for any $r\ge 1$) of the mean in Gaussian white noise model over Nikolskii-Besov spaces.
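In sequence-model form, one observes $y_i = \theta_i + \varepsilon z_i$ and estimates $\|\theta\|_r$. The naive plug-in below is a baseline sketch only; the asymptotically minimax estimators characterized in the paper are more delicate.

```python
# Minimal sketch in sequence form: observe y_i = theta_i + eps * z_i and
# estimate ||theta||_r. The naive plug-in is a baseline only, not the
# asymptotically minimax estimator characterized in the paper.
import numpy as np

def plugin_lr_norm(y, r):
    return np.sum(np.abs(y) ** r) ** (1.0 / r)

rng = np.random.default_rng(0)
d = 1_000
eps = d ** -0.5
theta = rng.normal(size=d)
y = theta + eps * rng.standard_normal(d)
print(plugin_lr_norm(y, r=2.0), np.linalg.norm(theta, 2))
```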
