We show how this alignment produces a positive transfer: networks pre-trained with random labels train faster downstream than networks trained from scratch, even after accounting for simple effects such as weight scaling.
In semi-supervised classification, one is given access to both labeled and unlabeled data.
The estimation of an f-divergence between two probability distributions based on samples is a fundamental problem in statistics and machine learning.
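As a minimal illustration of the sample-based setting, the sketch below estimates the KL divergence (the f-divergence with $f(t) = t \log t$) using a naive histogram plug-in; the estimator, the function name kl_plugin, and all parameter choices are illustrative assumptions, not a method taken from the text.

```python
import numpy as np

def kl_plugin(x, y, bins=20, eps=1e-12):
    """Naive histogram plug-in estimate of KL(P || Q) from samples x ~ P, y ~ Q."""
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    p, edges = np.histogram(x, bins=bins, range=(lo, hi))
    q, _ = np.histogram(y, bins=edges)
    p = p / p.sum()                      # empirical bin probabilities for P
    q = q / q.sum()                      # empirical bin probabilities for Q
    mask = p > 0                         # convention: 0 * log(0) = 0
    return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 10_000)         # samples from P = N(0, 1)
y = rng.normal(0.5, 1.0, 10_000)         # samples from Q = N(0.5, 1)
print(kl_plugin(x, y))                   # true KL here is 0.5**2 / 2 = 0.125
```

Such plug-in estimators degrade quickly in higher dimensions, which is one reason the statistical treatment of this problem is nontrivial.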
A common assumption in causal modeling posits that the data is generated by a set of independent mechanisms, and algorithms should aim to recover this structure.
First, releasing (an estimate of) the kernel mean embedding of the data-generating random variable instead of the database itself still allows third parties to construct consistent estimators of a wide class of population statistics.
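A rough sketch of this idea, under illustrative assumptions (a Gaussian kernel and a hypothetical helper empirical_kme): the released object is the map $t \mapsto \frac{1}{n}\sum_{i} k(x_i, t)$, which a third party can evaluate without ever seeing the raw database.

```python
import numpy as np

def empirical_kme(data, kernel):
    """Return the empirical kernel mean embedding t -> (1/n) sum_i k(x_i, t)."""
    return lambda t: np.mean([kernel(x, t) for x in data])

# Gaussian (RBF) kernel with bandwidth sigma -- an illustrative choice.
def rbf(x, t, sigma=1.0):
    return np.exp(-(x - t) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, 5_000)       # the private database (toy example)

mu_hat = empirical_kme(data, rbf)
# A third party holding only mu_hat can evaluate RKHS inner products
# <mu_hat, k(., t)> = mu_hat(t), and hence estimate population statistics,
# without access to the individual records.
print(mu_hat(2.0), mu_hat(5.0))
```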
We consider the problem of learning the functions that compute each child from its parents in a Structural Causal Model, once the underlying causal graph has been identified.
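Assuming the graph is already known, one natural reading of this task is a per-node regression of each child on its parents. The sketch below uses a random forest purely as an illustrative regressor; fit_mechanisms and the toy SCM are hypothetical, not the paper's algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_mechanisms(data, parents):
    """Fit one regressor per node, predicting it from its parents in the known DAG.

    data:    dict mapping node name -> 1-D array of observations
    parents: dict mapping node name -> list of parent node names
    """
    mechanisms = {}
    for node, pa in parents.items():
        if not pa:                        # root nodes have no mechanism to fit
            continue
        X = np.column_stack([data[p] for p in pa])
        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X, data[node])
        mechanisms[node] = model
    return mechanisms

# Toy SCM: X -> Y with Y = X**2 + noise; the graph is assumed identified.
rng = np.random.default_rng(0)
x = rng.normal(size=2_000)
y = x ** 2 + 0.1 * rng.normal(size=2_000)
mech = fit_mechanisms({"X": x, "Y": y}, {"X": [], "Y": ["X"]})
print(mech["Y"].predict([[1.0]]))        # should be close to 1.0
```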
We study unsupervised generative modeling in terms of the optimal transport (OT) problem between the true (but unknown) data distribution $P_X$ and the latent variable model distribution $P_G$.
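For reference, the Kantorovich formulation of the OT cost between $P_X$ and $P_G$ under a cost function $c$ is

$$W_c(P_X, P_G) \;=\; \inf_{\Gamma \in \mathcal{P}(P_X, P_G)} \mathbb{E}_{(X, Y) \sim \Gamma}\big[c(X, Y)\big],$$

where the infimum ranges over couplings $\Gamma$, that is, joint distributions whose marginals are $P_X$ and $P_G$.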
We provide a theoretical foundation for non-parametric estimation of functions of random variables using kernel mean embeddings.
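One natural plug-in construction in this vein (a sketch, not necessarily the paper's exact estimator): given i.i.d. samples $x_1, \dots, x_n$ of $X$ and a kernel $k$, embed the distribution of $f(X)$ as

$$\hat{\mu}_{f(X)} \;=\; \frac{1}{n} \sum_{i=1}^{n} k\big(f(x_i), \cdot\big),$$

that is, push the samples through $f$ and average the resulting feature maps.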
Transductive learning considers a training set of $m$ labeled samples and a test set of $u$ unlabeled samples, with the goal of best labeling that particular test set.
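A common formalization: the learner observes labeled pairs $\{(x_i, y_i)\}_{i=1}^{m}$ and unlabeled points $\{x_j\}_{j=m+1}^{m+u}$, and seeks a hypothesis $h$ minimizing the risk on exactly those $u$ points,

$$R_u(h) \;=\; \frac{1}{u} \sum_{j=m+1}^{m+u} \ell\big(h(x_j), y_j\big),$$

where the notation $R_u$ and the loss $\ell$ are generic choices used here for illustration.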
This paper introduces a new complexity measure for transductive learning called Permutational Rademacher Complexity (PRC) and studies its properties.
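For comparison, the standard (inductive) empirical Rademacher complexity of a class $\mathcal{F}$ on points $x_1, \dots, x_n$ is

$$\hat{\mathfrak{R}}_n(\mathcal{F}) \;=\; \mathbb{E}_{\sigma}\Big[\sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i f(x_i)\Big], \qquad \sigma_i \overset{\text{i.i.d.}}{\sim} \mathrm{Uniform}\{-1, +1\};$$

roughly speaking, PRC adapts this quantity to the transductive setting by drawing its randomness from permutations of the combined sample rather than from i.i.d. signs (see the paper for the precise definition).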