Two-sample testing

61 papers with code • 5 benchmarks • 1 dataset

In statistical hypothesis testing, a two-sample test is a test performed on data from two random samples, each drawn independently from a different population. The purpose of the test is to determine whether the two populations differ significantly, i.e., whether the two samples could plausibly have been drawn from the same distribution. The statistics used in two-sample tests arise in many machine learning problems, such as domain adaptation, covariate-shift detection, and generative adversarial networks.
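
As a concrete illustration of the idea above, here is a minimal sketch using SciPy's Kolmogorov-Smirnov two-sample test, one of many classical choices; the populations and sample sizes are arbitrary assumptions for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=500)  # sample from population A
y = rng.normal(0.5, 1.0, size=500)  # sample from population B (shifted mean)

# Kolmogorov-Smirnov two-sample test: could x and y come from the
# same distribution? A small p-value suggests they do not.
stat, p_value = stats.ks_2samp(x, y)
significant = p_value < 0.05
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3g}")
```

With a 0.5-standard-deviation mean shift and 500 points per sample, the test has high power and the null is rejected at the 5% level.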

Most implemented papers

PacGAN: The power of two samples in generative adversarial networks

fjxmlzn/PacGAN NeurIPS 2018

Generative adversarial networks (GANs) are innovative techniques for learning generative models of complex data distributions from samples.

Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing

dgl-prc/m_testing_adversatial_sample 14 Dec 2018

We thus first propose a measure of 'sensitivity' and show empirically that normal samples and adversarial samples have distinguishable sensitivity.

hyppo: A Multivariate Hypothesis Testing Python Package

neurodata/mgcpy 3 Jul 2019

We introduce hyppo, a unified library for performing multivariate hypothesis testing, including independence, two-sample, and k-sample testing.

Generative Moment Matching Networks

yujiali/gmmn 10 Feb 2015

We consider the problem of learning deep generative models from data.

Gaussian Differential Privacy

woodyx218/Deep-Learning-with-GDP-Tensorflow 7 May 2019

More precisely, the privacy guarantees of any hypothesis-testing-based definition of privacy (including the original DP) converge to GDP in the limit under composition.
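
GDP casts privacy as a hypothesis test: an adversary tries to distinguish two neighboring datasets, and privacy is summarized by the trade-off between the test's type I error α and its minimal type II error. The sketch below computes the standard μ-GDP trade-off curve, f_μ(α) = Φ(Φ⁻¹(1 − α) − μ); the chosen μ and α values are assumptions for illustration, not the authors' code.

```python
from scipy.stats import norm


def gdp_tradeoff(alpha, mu):
    """Type II error of the optimal distinguishing test under mu-GDP,
    f_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu)."""
    return norm.cdf(norm.ppf(1 - alpha) - mu)


# mu = 0 means the two hypotheses are indistinguishable: f_0(alpha) = 1 - alpha.
perfect = gdp_tradeoff(0.05, 0.0)

# Under 1-GDP, an adversary testing at alpha = 0.05 still suffers
# a large type II error, i.e. distinguishing datasets stays hard.
beta = gdp_tradeoff(0.05, 1.0)
```

Larger μ shifts the curve down, meaning weaker privacy: the adversary's type II error shrinks.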

Association of genomic subtypes of lower-grade gliomas with shape features automatically extracted by a deep learning algorithm

mateuszbuda/brain-segmentation 9 Jun 2019

Based on automatic deep learning segmentations, we extracted three features which quantify two-dimensional and three-dimensional characteristics of the tumors.

Online Robust Principal Component Analysis with Change Point Detection

wxiao0421/onlineRPCA 19 Feb 2017

Robust PCA methods are typically batch algorithms which require loading all observations into memory before processing.

Statistical Anomaly Detection via Composite Hypothesis Testing for Markov Models

hbhzwj/SADIT 27 Feb 2017

Under Markovian assumptions, we leverage a Central Limit Theorem (CLT) for the empirical measure in the test statistic of the composite hypothesis Hoeffding test so as to establish weak convergence results for the test statistic, and, thereby, derive a new estimator for the threshold needed by the test.

Scalable and Efficient Hypothesis Testing with Random Forests

tim-coleman/SURFTest 16 Apr 2019

Throughout the last decade, random forests have established themselves as among the most accurate and popular supervised learning methods.

Comparing distributions: $\ell_1$ geometry improves kernel two-sample testing

meyerscetbon/l1_two_sample_test 19 Sep 2019

Here, we show that $L^p$ distances (with $p\geq 1$) between these distribution representatives give metrics on the space of distributions that are well-behaved to detect differences between distributions as they metrize the weak convergence.
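
For context on kernel two-sample testing in general (not the paper's ℓ1-based method), a standard baseline is the RBF-kernel maximum mean discrepancy (MMD) with a permutation-calibrated threshold. The sketch below is a minimal illustration under assumed toy data; bandwidth and permutation count are arbitrary choices.

```python
import numpy as np


def rbf_kernel(a, b, bandwidth=1.0):
    # Pairwise RBF kernel values between rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth**2))


def mmd2(x, y, bandwidth=1.0):
    """Biased estimate of the squared MMD between samples x and y."""
    return (rbf_kernel(x, x, bandwidth).mean()
            + rbf_kernel(y, y, bandwidth).mean()
            - 2 * rbf_kernel(x, y, bandwidth).mean())


def permutation_test(x, y, n_perm=200, seed=0):
    """Calibrate the MMD statistic by permuting the pooled sample."""
    rng = np.random.default_rng(seed)
    observed = mmd2(x, y)
    pooled = np.vstack([x, y])
    n = len(x)
    exceed = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if mmd2(pooled[idx[:n]], pooled[idx[n:]]) >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_perm + 1)


rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=(100, 2))  # sample from P
y = rng.normal(1.0, 1.0, size=(100, 2))  # sample from Q (shifted mean)
obs, p = permutation_test(x, y)
```

With a unit mean shift in two dimensions, the permutation p-value is small and the test correctly detects that the distributions differ.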