
On $L_2$-consistency of nearest neighbor matching

Biased sampling and missing data complicate statistical problems ranging from causal inference to reinforcement learning. We often correct for biased sampling of summary statistics with matching methods and importance weighting. In this paper, we study nearest neighbor matching (NNM), which estimates population quantities from biased samples by substituting unobserved variables with their nearest neighbors in the biased sample. We show that NNM is $L_2$-consistent in finite dimensions without smoothness or boundedness assumptions. We discuss applications of NNM, outline the barriers to generalizing this work to separable metric spaces, and compare this result to inverse probability weighting.
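To make the idea concrete, here is a minimal sketch of one-nearest-neighbor matching for estimating a population mean when the outcome is observed only in a biased sample. The target quantity, variable names, and data-generating process are illustrative assumptions, not taken from the paper.

```python
# Sketch: estimate E[Y] when Y is observed only in a biased sample by
# imputing each population unit's Y with its nearest observed neighbor's Y.
# All names (X_pop, X_obs, y_obs) and the selection mechanism are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Population covariates (outcome Y unobserved at the population level).
X_pop = rng.normal(size=(2000, 2))

# Biased sample: units with a large first covariate are over-represented.
keep = rng.random(2000) < 1 / (1 + np.exp(-2 * X_pop[:, 0]))
X_obs = X_pop[keep]
y_obs = X_obs @ np.array([1.0, -0.5]) + rng.normal(scale=0.1, size=X_obs.shape[0])

def nnm_mean(X_target, X_source, y_source):
    """Impute each target unit's outcome with the outcome of its nearest
    neighbor in the (biased) source sample, then average the imputations."""
    # Pairwise squared Euclidean distances, shape (n_target, n_source).
    d2 = ((X_target[:, None, :] - X_source[None, :, :]) ** 2).sum(axis=-1)
    nn = d2.argmin(axis=1)        # index of nearest observed neighbor
    return y_source[nn].mean()    # plug-in estimate of E[Y]

print("naive mean of biased sample :", y_obs.mean())
print("NNM estimate of E[Y]        :", nnm_mean(X_pop, X_obs, y_obs))
```

In this toy setup the naive sample mean is pulled upward by the selection on the first covariate, while the matched estimate recovers something close to the population mean; the paper's result concerns $L_2$-consistency of this kind of plug-in estimator without smoothness or boundedness assumptions.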
