Search Results for author: Ilya Tolstikhin

Found 20 papers, 9 papers with code

Fine-Grained Distribution-Dependent Learning Curves

no code implementations • 31 Aug 2022 • Olivier Bousquet, Steve Hanneke, Shay Moran, Jonathan Shafer, Ilya Tolstikhin

We solve this problem in a principled manner by introducing a combinatorial dimension called VCL that characterizes the best $d'$ for which $d'/n$ is a strong minimax lower bound.
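Schematically (this only restates the claim above, not the paper's exact theorem), a strong minimax lower bound of order $d'/n$ says that no learning rule can achieve expected error much better than $d'/n$ uniformly over the distributions considered:

$$\inf_{\hat{h}_n} \; \sup_{P} \; \mathbb{E}_{S \sim P^n}\big[\operatorname{er}_P(\hat{h}_n(S))\big] \;\geq\; c \cdot \frac{d'}{n}$$

for a universal constant $c > 0$ and every sample size $n$.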

Learning Theory, PAC learning

A Generalized Lottery Ticket Hypothesis

no code implementations • 3 Jul 2021 • Ibrahim Alabdulmohsin, Larisa Markeeva, Daniel Keysers, Ilya Tolstikhin

We introduce a generalization to the lottery ticket hypothesis in which the notion of "sparsity" is relaxed by choosing an arbitrary basis in the space of parameters.
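To make the relaxed notion of sparsity concrete, here is a minimal sketch (not the paper's code) of pruning a weight matrix in an arbitrary orthonormal basis; the DCT is used purely as an illustrative choice of basis, and classical magnitude pruning is recovered when the basis is the standard one.

```python
import numpy as np
from scipy.fft import dct, idct  # DCT used only as an example of a non-standard basis


def prune_in_basis(weights, keep_fraction=0.05):
    """Express `weights` in an orthonormal basis, keep only the largest
    coefficients by magnitude, and map the result back to parameter space."""
    coeffs = dct(weights.ravel(), norm="ortho")    # change of basis
    k = max(1, int(keep_fraction * coeffs.size))
    threshold = np.sort(np.abs(coeffs))[-k]        # k-th largest magnitude
    mask = np.abs(coeffs) >= threshold
    return idct(coeffs * mask, norm="ortho").reshape(weights.shape)


w = np.random.randn(64, 64)
w_sparse_in_dct_basis = prune_in_basis(w)
```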

What Do Neural Networks Learn When Trained With Random Labels?

no code implementations • NeurIPS 2020 • Hartmut Maennel, Ibrahim Alabdulmohsin, Ilya Tolstikhin, Robert J. N. Baldock, Olivier Bousquet, Sylvain Gelly, Daniel Keysers

We show how this alignment produces a positive transfer: networks pre-trained with random labels train faster downstream than networks trained from scratch, even after accounting for simple effects such as weight scaling.

Memorization

Predicting Neural Network Accuracy from Weights

1 code implementation • 26 Feb 2020 • Thomas Unterthiner, Daniel Keysers, Sylvain Gelly, Olivier Bousquet, Ilya Tolstikhin

Furthermore, the predictors are able to rank networks trained on different, unobserved datasets and with different architectures.
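A minimal sketch of that general recipe (summary statistics of the weights fed to an off-the-shelf regressor); the features, the estimator, and the synthetic data below are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor


def weight_features(layers):
    """Per-layer statistics of the flattened weights (mean, std, quantiles),
    concatenated into a single feature vector."""
    feats = []
    for w in layers:
        w = w.ravel()
        feats += [w.mean(), w.std(), *np.percentile(w, [0, 25, 50, 75, 100])]
    return np.array(feats)


# Toy stand-ins for a collection of trained networks and their test accuracies.
rng = np.random.default_rng(0)
networks = [[rng.normal(0, s, (32, 32)), rng.normal(0, s, (32, 10))]
            for s in rng.uniform(0.05, 0.5, size=200)]
accuracies = np.array([1.0 / (1.0 + net[0].std()) for net in networks])

X = np.stack([weight_features(net) for net in networks])
predictor = GradientBoostingRegressor().fit(X, accuracies)
```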

When can unlabeled data improve the learning rate?

no code implementations • 28 May 2019 • Christina Göpfert, Shai Ben-David, Olivier Bousquet, Sylvain Gelly, Ilya Tolstikhin, Ruth Urner

In semi-supervised classification, one is given access to both labeled and unlabeled data.

Practical and Consistent Estimation of f-Divergences

1 code implementation • NeurIPS 2019 • Paul K. Rubenstein, Olivier Bousquet, Josip Djolonga, Carlos Riquelme, Ilya Tolstikhin

The estimation of an f-divergence between two probability distributions based on samples is a fundamental problem in statistics and machine learning.
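For reference, the quantity being estimated is

$$D_f(P \,\|\, Q) \;=\; \int f\!\left(\frac{dP}{dQ}\right) dQ,$$

where $f$ is convex with $f(1) = 0$; the choice $f(t) = t \log t$ recovers the KL divergence, and $f(t) = |t - 1|/2$ the total variation distance.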

BIG-bench Machine Learning, Mutual Information Estimation +1

GeNet: Deep Representations for Metagenomics

5 code implementations • 30 Jan 2019 • Mateo Rojas-Carulla, Ilya Tolstikhin, Guillermo Luque, Nicholas Youngblut, Ruth Ley, Bernhard Schölkopf

We introduce GeNet, a method for shotgun metagenomic classification from raw DNA sequences that exploits the known hierarchical structure between labels for training.
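One simple way to exploit such a label hierarchy, sketched below, is a shared read encoder with one classification head per taxonomic rank and a loss that sums the per-rank cross-entropies; the architecture, rank sizes, and names here are placeholders, not GeNet's actual design.

```python
import torch
import torch.nn as nn


class HierarchicalClassifier(nn.Module):
    """Shared sequence encoder with one softmax head per taxonomic rank."""

    def __init__(self, vocab_size=4, embed_dim=64, hidden=128,
                 ranks=(3, 20, 100)):  # e.g. number of classes per rank
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden, batch_first=True)
        self.heads = nn.ModuleList(nn.Linear(hidden, n) for n in ranks)

    def forward(self, tokens):                # tokens: (batch, read_length)
        _, h = self.encoder(self.embed(tokens))
        h = h.squeeze(0)                      # (batch, hidden)
        return [head(h) for head in self.heads]


def hierarchical_loss(logits_per_rank, labels_per_rank):
    """Sum of cross-entropy losses, one per level of the hierarchy."""
    ce = nn.CrossEntropyLoss()
    return sum(ce(logits, labels)
               for logits, labels in zip(logits_per_rank, labels_per_rank))


model = HierarchicalClassifier()
reads = torch.randint(0, 4, (8, 200))         # 8 encoded reads of length 200
targets = [torch.randint(0, n, (8,)) for n in (3, 20, 100)]
loss = hierarchical_loss(model(reads), targets)
```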

General Classification

Competitive Training of Mixtures of Independent Deep Generative Models

no code implementations • 30 Apr 2018 • Francesco Locatello, Damien Vincent, Ilya Tolstikhin, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf

A common assumption in causal modeling posits that the data is generated by a set of independent mechanisms, and algorithms should aim to recover this structure.

Clustering

On the Latent Space of Wasserstein Auto-Encoders

no code implementations • 11 Feb 2018 • Paul K. Rubenstein, Bernhard Schoelkopf, Ilya Tolstikhin

We study the role of latent space dimensionality in Wasserstein auto-encoders (WAEs).

Disentanglement

Wasserstein Auto-Encoders

13 code implementations • ICLR 2018 • Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, Bernhard Schoelkopf

We propose the Wasserstein Auto-Encoder (WAE), a new algorithm for building a generative model of the data distribution.
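Up to notation, the WAE objective trades off a reconstruction cost $c$ against a divergence between the aggregated posterior $Q_Z$ and the prior $P_Z$ (instantiated in the paper with either an MMD or an adversarial penalty):

$$D_{\mathrm{WAE}}(P_X, P_G) \;=\; \inf_{Q(Z \mid X)} \; \mathbb{E}_{P_X}\, \mathbb{E}_{Q(Z \mid X)}\big[c\big(X, G(Z)\big)\big] \;+\; \lambda \, \mathcal{D}_Z(Q_Z, P_Z).$$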

Differentially Private Database Release via Kernel Mean Embeddings

1 code implementation • ICML 2018 • Matej Balog, Ilya Tolstikhin, Bernhard Schölkopf

First, releasing (an estimate of) the kernel mean embedding of the data-generating random variable instead of the database itself still allows third parties to construct consistent estimators of a wide class of population statistics.
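For context, the kernel mean embedding of a distribution $P$ under a kernel $k$, and its natural empirical estimate from records $x_1, \dots, x_n$, are

$$\mu_P \;=\; \mathbb{E}_{x \sim P}\big[k(x, \cdot)\big], \qquad \hat{\mu} \;=\; \frac{1}{n} \sum_{i=1}^{n} k(x_i, \cdot),$$

and it is (a privatized estimate of) $\hat{\mu}$ that gets released in place of the raw records.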

Probabilistic Active Learning of Functions in Structural Causal Models

no code implementations • 30 Jun 2017 • Paul K. Rubenstein, Ilya Tolstikhin, Philipp Hennig, Bernhard Schoelkopf

We consider the problem of learning the functions computing children from parents in a Structural Causal Model once the underlying causal graph has been identified.

Active Learning, Causal Discovery

From optimal transport to generative modeling: the VEGAN cookbook

1 code implementation • 22 May 2017 • Olivier Bousquet, Sylvain Gelly, Ilya Tolstikhin, Carl-Johann Simon-Gabriel, Bernhard Schoelkopf

We study unsupervised generative modeling in terms of the optimal transport (OT) problem between the true (but unknown) data distribution $P_X$ and the latent variable model distribution $P_G$.
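For reference, the OT (Kantorovich) cost between $P_X$ and $P_G$ for a ground cost $c$ is

$$W_c(P_X, P_G) \;=\; \inf_{\Gamma \in \mathcal{P}(X \sim P_X,\, Y \sim P_G)} \; \mathbb{E}_{(X, Y) \sim \Gamma}\big[c(X, Y)\big],$$

where the infimum is over all couplings $\Gamma$ with marginals $P_X$ and $P_G$.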

AdaGAN: Boosting Generative Models

1 code implementation • NeurIPS 2017 • Ilya Tolstikhin, Sylvain Gelly, Olivier Bousquet, Carl-Johann Simon-Gabriel, Bernhard Schölkopf

Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) are an effective method for training generative models of complex data such as natural images.
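The boosting step, stated schematically (the data reweighting and the choice of the mixture weight $\beta_t$ are worked out in the paper and omitted here), grows a mixture one component at a time:

$$P_{\mathrm{model}}^{t} \;=\; (1 - \beta_t)\, P_{\mathrm{model}}^{t-1} \;+\; \beta_t\, G_t,$$

where $G_t$ is a new generative model trained on data reweighted toward the examples that the current mixture $P_{\mathrm{model}}^{t-1}$ covers poorly.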

Minimax Lower Bounds for Realizable Transductive Classification

no code implementations • 9 Feb 2016 • Ilya Tolstikhin, David Lopez-Paz

Transductive learning considers a training set of $m$ labeled samples and a test set of $u$ unlabeled samples, with the goal of best labeling that particular test set.
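Concretely (this only formalizes the setting described above, not the paper's bounds), writing $(x_i, y_i)_{i=1}^{m+u}$ for the full sample with only the first $m$ labels revealed, the learner is judged by its error on the $u$ test points:

$$\operatorname{er}_u(\hat{h}) \;=\; \frac{1}{u} \sum_{i=m+1}^{m+u} \mathbf{1}\big\{\hat{h}(x_i) \neq y_i\big\}.$$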

Binary Classification, Classification +2

Permutational Rademacher Complexity: a New Complexity Measure for Transductive Learning

no code implementations • 12 May 2015 • Ilya Tolstikhin, Nikita Zhivotovskiy, Gilles Blanchard

This paper introduces a new complexity measure for transductive learning called Permutational Rademacher Complexity (PRC) and studies its properties.

Learning Theory, Transductive Learning

Localized Complexities for Transductive Learning

no code implementations • 26 Nov 2014 • Ilya Tolstikhin, Gilles Blanchard, Marius Kloft

We show two novel concentration inequalities for suprema of empirical processes when sampling without replacement, which both take the variance of the functions into account.

Learning Theory, Transductive Learning
