Search Results for author: Sylvain Gelly

Found 50 papers, 24 papers with code

Comparing Transfer and Meta Learning Approaches on a Unified Few-Shot Classification Benchmark

1 code implementation • 6 Apr 2021 • Vincent Dumoulin, Neil Houlsby, Utku Evci, Xiaohua Zhai, Ross Goroshin, Sylvain Gelly, Hugo Larochelle

To bridge this gap, we perform a cross-family study of the best transfer and meta learners on both a large-scale meta-learning benchmark (Meta-Dataset, MD), and a transfer learning benchmark (Visual Task Adaptation Benchmark, VTAB).

Few-Shot Learning General Classification +1

Unconditional Synthesis of Complex Scenes Using a Semantic Bottleneck

no code implementations • 1 Jan 2021 • Samaneh Azadi, Michael Tschannen, Eric Tzeng, Sylvain Gelly, Trevor Darrell, Mario Lucic

Coupling the high-fidelity generation capabilities of label-conditional image synthesis methods with the flexibility of unconditional generative models, we propose a semantic bottleneck GAN model for unconditional synthesis of complex scenes.

Image Generation Segmentation

A Sober Look at the Unsupervised Learning of Disentangled Representations and their Evaluation

no code implementations • 27 Oct 2020 • Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The idea behind the \emph{unsupervised} learning of \emph{disentangled} representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.

Disentanglement

A Commentary on the Unsupervised Learning of Disentangled Representations

no code implementations • 28 Jul 2020 • Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The goal of the unsupervised learning of disentangled representations is to separate the independent explanatory factors of variation in the data without access to supervision.

What Do Neural Networks Learn When Trained With Random Labels?

no code implementations • NeurIPS 2020 • Hartmut Maennel, Ibrahim Alabdulmohsin, Ilya Tolstikhin, Robert J. N. Baldock, Olivier Bousquet, Sylvain Gelly, Daniel Keysers

We show how this alignment produces a positive transfer: networks pre-trained with random labels train faster downstream compared to training from scratch even after accounting for simple effects, such as weight scaling.

Memorization
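
As a rough sketch of the protocol this abstract refers to, a hypothetical toy setup rather than the paper's code: pre-train on randomly permuted labels, then fine-tune on the true labels and compare against training from scratch.

```python
# Toy protocol sketch: "pre-train" on permuted labels, then fine-tune on
# the true labels. Model, data, and epoch counts are illustrative only.
import torch
import torch.nn as nn

def train(model, x, y, epochs, lr=1e-2):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()

x = torch.randn(512, 32)                  # stand-in for images
y = torch.randint(0, 10, (512,))          # true labels
y_random = y[torch.randperm(len(y))]      # labels decoupled from inputs

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
train(model, x, y_random, epochs=50)      # random-label pre-training
train(model, x, y, epochs=50)             # downstream fine-tuning
```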

Predicting Neural Network Accuracy from Weights

1 code implementation • 26 Feb 2020 • Thomas Unterthiner, Daniel Keysers, Sylvain Gelly, Olivier Bousquet, Ilya Tolstikhin

Furthermore, the predictors are able to rank networks trained on different, unobserved datasets and with different architectures.
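
A minimal sketch of the general recipe, assuming simple per-network weight statistics and an off-the-shelf regressor; the paper's exact features and predictor may differ.

```python
# Illustrative sketch: summarize each network's flat weight vector with a
# few statistics and regress (synthetic) test accuracy on them.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def weight_features(w):
    q = np.percentile(np.abs(w), [25, 50, 75])
    return np.array([w.mean(), w.std(), *q])

rng = np.random.default_rng(0)
weights = [rng.normal(0, s, 1000) for s in rng.uniform(0.1, 1.0, 200)]
accuracy = rng.uniform(0.5, 1.0, 200)     # stand-in for measured accuracy

X = np.stack([weight_features(w) for w in weights])
predictor = GradientBoostingRegressor().fit(X[:150], accuracy[:150])
print(predictor.predict(X[150:])[:5])     # predictions for held-out networks
```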

Self-Supervised Learning of Video-Induced Visual Invariances

no code implementations • CVPR 2020 • Michael Tschannen, Josip Djolonga, Marvin Ritter, Aravindh Mahendran, Xiaohua Zhai, Neil Houlsby, Sylvain Gelly, Mario Lucic

We propose a general framework for self-supervised learning of transferable visual representations based on Video-Induced Visual Invariances (VIVI).

Ranked #15 on Image Classification on VTAB-1k (using extra training data)

Image Classification Self-Supervised Learning +1

Semantic Bottleneck Scene Generation

2 code implementations • 26 Nov 2019 • Samaneh Azadi, Michael Tschannen, Eric Tzeng, Sylvain Gelly, Trevor Darrell, Mario Lucic

For the former, we use an unconditional progressive segmentation generation network that captures the distribution of realistic semantic scene layouts.

Conditional Image Generation Image-to-Image Translation +2
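
A toy two-stage sketch of the pipeline described above, with hypothetical module shapes: an unconditional generator emits a semantic layout, and a conditional generator renders an image from it.

```python
# Two-stage sketch: z -> semantic layout -> image. The real model uses a
# progressive segmentation GAN and a conditional image GAN; these toy
# modules only show how the stages connect.
import torch
import torch.nn as nn

N_CLASSES, H, W = 8, 32, 32
layout_gen = nn.Sequential(               # z -> per-pixel class logits
    nn.Linear(64, N_CLASSES * H * W), nn.Unflatten(1, (N_CLASSES, H, W)))
image_gen = nn.Sequential(                # one-hot layout -> RGB image
    nn.Conv2d(N_CLASSES, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())

z = torch.randn(4, 64)
layout = layout_gen(z).argmax(1)          # (4, 32, 32) semantic maps
one_hot = nn.functional.one_hot(layout, N_CLASSES).permute(0, 3, 1, 2).float()
images = image_gen(one_hot)               # (4, 3, 32, 32) rendered scenes
```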

On Mutual Information Maximization for Representation Learning

2 code implementations • ICLR 2020 • Michael Tschannen, Josip Djolonga, Paul K. Rubenstein, Sylvain Gelly, Mario Lucic

Many recent methods for unsupervised or self-supervised representation learning train feature extractors by maximizing an estimate of the mutual information (MI) between different views of the data.

Inductive Bias Representation Learning +1
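
For context, a minimal InfoNCE-style estimator, one common MI lower bound behind the methods the paper analyzes; shapes and names here are illustrative, not the paper's code.

```python
# InfoNCE sketch: the MI between two views is lower-bounded by
# log(batch_size) minus the cross-entropy of picking the matching pair.
import math
import torch
import torch.nn.functional as F

def info_nce_bound(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two views of the same examples."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature     # similarities of all pairs
    labels = torch.arange(len(z1))         # positives sit on the diagonal
    return math.log(len(z1)) - F.cross_entropy(logits, labels)

print(info_nce_bound(torch.randn(128, 64), torch.randn(128, 64)))
```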

Google Research Football: A Novel Reinforcement Learning Environment

1 code implementation • 25 Jul 2019 • Karol Kurach, Anton Raichuk, Piotr Stańczyk, Michał Zając, Olivier Bachem, Lasse Espeholt, Carlos Riquelme, Damien Vincent, Marcin Michalski, Olivier Bousquet, Sylvain Gelly

Recent progress in the field of reinforcement learning has been accelerated by virtual learning environments such as video games, where novel algorithms and ideas can be quickly tested in a safe and reproducible manner.

Game of Football reinforcement-learning +1
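
A usage sketch, assuming the gfootball package from the paper's repository (github.com/google-research/football) is installed; the scenario and representation names below are options documented there.

```python
# Minimal random-agent loop in the Football environment.
import gfootball.env as football_env

env = football_env.create_environment(
    env_name="academy_empty_goal_close",  # one of the bundled scenarios
    representation="simple115",           # compact float-vector observation
    render=False)

obs = env.reset()
done, total_reward = False, 0.0
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
    total_reward += reward
print("episode reward:", total_reward)
```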

MULEX: Disentangling Exploitation from Exploration in Deep RL

no code implementations • 1 Jul 2019 • Lucas Beyer, Damien Vincent, Olivier Teboul, Sylvain Gelly, Matthieu Geist, Olivier Pietquin

An agent learning through interactions should balance its action selection process between probing the environment to discover new rewards and using the information acquired in the past to adopt useful behaviour.

Adaptive Temporal-Difference Learning for Policy Evaluation with Per-State Uncertainty Estimates

no code implementations • NeurIPS 2019 • Hugo Penedones, Carlos Riquelme, Damien Vincent, Hartmut Maennel, Timothy Mann, Andre Barreto, Sylvain Gelly, Gergely Neu

We consider the core reinforcement-learning problem of on-policy value function approximation from a batch of trajectory data, and focus on various issues of Temporal Difference (TD) learning and Monte Carlo (MC) policy evaluation.

When can unlabeled data improve the learning rate?

no code implementations • 28 May 2019 • Christina Göpfert, Shai Ben-David, Olivier Bousquet, Sylvain Gelly, Ilya Tolstikhin, Ruth Urner

In semi-supervised classification, one is given access to both labeled and unlabeled data.

Precision-Recall Curves Using Information Divergence Frontiers

no code implementations • 26 May 2019 • Josip Djolonga, Mario Lucic, Marco Cuturi, Olivier Bachem, Olivier Bousquet, Sylvain Gelly

Despite the tremendous progress in the estimation of generative models, the development of tools for diagnosing their failures and assessing their performance has advanced at a much slower pace.

Image Generation Information Retrieval +1

A Case for Object Compositionality in Deep Generative Models of Images

no code implementations • ICLR 2019 • Sjoerd van Steenkiste, Karol Kurach, Sylvain Gelly

In this work we propose to structure the generator of a GAN to consider objects and their relations explicitly, and generate images by means of composition.

The GAN Landscape: Losses, Architectures, Regularization, and Normalization

no code implementations • ICLR 2019 • Karol Kurach, Mario Lucic, Xiaohua Zhai, Marcin Michalski, Sylvain Gelly

Generative adversarial networks (GANs) are a class of deep generative models which aim to learn a target distribution in an unsupervised fashion.

FVD: A new Metric for Video Generation

no code implementations • ICLR Workshop DeepGenStruct 2019 • Thomas Unterthiner, Sjoerd van Steenkiste, Karol Kurach, Raphaël Marinier, Marcin Michalski, Sylvain Gelly

While recent generative models of video have had some success, current progress is hampered by the lack of qualitative metrics that consider visual quality, temporal coherence, and diversity of samples.

Representation Learning Video Generation

Breaking the Softmax Bottleneck via Learnable Monotonic Pointwise Non-linearities

no code implementations • 21 Feb 2019 • Octavian-Eugen Ganea, Sylvain Gelly, Gary Bécigneul, Aliaksei Severyn

The Softmax function on top of a final linear layer is the de facto method to output probability distributions in neural networks.

Language Modelling Text Generation
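
A sketch of the general idea rather than the paper's exact parameterization: pass each logit through a learnable, strictly increasing scalar function before normalizing, which relaxes the rank constraint of a plain softmax over a linear layer.

```python
# Learnable monotonic pointwise non-linearity applied to logits before
# normalization; a positive combination of increasing functions is itself
# increasing, so the map stays monotonic by construction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicNonlinearity(nn.Module):
    def __init__(self, n_basis=8):
        super().__init__()
        self.raw_slopes = nn.Parameter(torch.zeros(n_basis))
        self.shifts = nn.Parameter(torch.linspace(-3, 3, n_basis))

    def forward(self, logits):
        w = F.softplus(self.raw_slopes)    # constrain slopes to be positive
        return (w * torch.tanh(logits.unsqueeze(-1) - self.shifts)).sum(-1)

f = MonotonicNonlinearity()
logits = torch.randn(4, 1000)              # output of the final linear layer
probs = F.softmax(f(logits), dim=-1)
print(probs.sum(-1))                        # rows still sum to 1
```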

Towards Accurate Generative Models of Video: A New Metric & Challenges

3 code implementations • 3 Dec 2018 • Thomas Unterthiner, Sjoerd van Steenkiste, Karol Kurach, Raphaël Marinier, Marcin Michalski, Sylvain Gelly

To this end, we propose Fréchet Video Distance (FVD), a new metric for generative models of video, and StarCraft 2 Videos (SCV), a benchmark of gameplay from custom StarCraft 2 scenarios that challenge the current capabilities of generative models of video.

Representation Learning Starcraft +1
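
The Fréchet distance underlying FVD, computed between Gaussians fitted to feature activations; the paper embeds videos with a pretrained I3D network, for which random features stand in below.

```python
# Fréchet distance between two Gaussians fitted to feature sets:
# ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^{1/2}).
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_gen):
    mu_r, mu_g = feats_real.mean(0), feats_gen.mean(0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g).real    # discard numerical imaginary parts
    return ((mu_r - mu_g) ** 2).sum() + np.trace(cov_r + cov_g - 2 * covmean)

rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(size=(256, 32)),
                       rng.normal(0.5, 1.0, (256, 32))))
```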

Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

8 code implementations • ICML 2019 • Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.

Disentanglement

Investigating Object Compositionality in Generative Adversarial Networks

no code implementations • ICLR 2019 • Sjoerd van Steenkiste, Karol Kurach, Jürgen Schmidhuber, Sylvain Gelly

We present a minimal modification of a standard generator to incorporate this inductive bias and find that it reliably learns to generate images as compositions of objects.

Image Generation Inductive Bias +5
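
A minimal sketch of this inductive bias with toy shapes: copies of a generator each emit an object image plus an alpha mask, and the scene is their alpha-weighted composition.

```python
# Compose a scene from K independently generated objects via alpha masks.
import torch
import torch.nn as nn

K, H, W = 3, 32, 32
object_gen = nn.Linear(16, 4 * H * W)      # shared generator: RGB + alpha

z = torch.randn(K, 16)                     # one latent per object
out = object_gen(z).view(K, 4, H, W)
rgb = torch.tanh(out[:, :3])               # (K, 3, H, W) object appearances
alpha = torch.softmax(out[:, 3:], dim=0)   # per-pixel weights across objects
image = (alpha * rgb).sum(0)               # (3, H, W) composed scene
```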

Episodic Curiosity through Reachability

1 code implementation • ICLR 2019 • Nikolay Savinov, Anton Raichuk, Raphaël Marinier, Damien Vincent, Marc Pollefeys, Timothy Lillicrap, Sylvain Gelly

One solution to this problem is to allow the agent to create rewards for itself, thus making rewards dense and more suitable for learning.
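
A hedged sketch of the idea: keep an episodic memory of embeddings and grant an intrinsic reward when the current observation looks far from everything stored. The paper trains a reachability network to make that comparison; plain Euclidean distance stands in for it here.

```python
# Episodic novelty bonus: reward observations no memory entry is close to.
import numpy as np

class EpisodicCuriosity:
    def __init__(self, threshold=1.0):
        self.memory, self.threshold = [], threshold

    def bonus(self, embedding):
        if not self.memory:
            self.memory.append(embedding)
            return 0.0
        dist = min(np.linalg.norm(embedding - m) for m in self.memory)
        if dist > self.threshold:          # novel: reward it and remember it
            self.memory.append(embedding)
            return 1.0
        return 0.0

ec = EpisodicCuriosity()
rng = np.random.default_rng(0)
print(sum(ec.bonus(rng.normal(size=8)) for _ in range(20)), "bonuses granted")
```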

A Large-Scale Study on Regularization and Normalization in GANs

5 code implementations • ICLR 2019 • Karol Kurach, Mario Lucic, Xiaohua Zhai, Marcin Michalski, Sylvain Gelly

Generative adversarial networks (GANs) are a class of deep generative models which aim to learn a target distribution in an unsupervised fashion.

Temporal Difference Learning with Neural Networks - Study of the Leakage Propagation Problem

no code implementations • 9 Jul 2018 • Hugo Penedones, Damien Vincent, Hartmut Maennel, Sylvain Gelly, Timothy Mann, Andre Barreto

Temporal-Difference learning (TD) [Sutton, 1988] with function approximation can converge to solutions that are worse than those obtained by Monte-Carlo regression, even in the simple case of on-policy evaluation.
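
A minimal tabular contrast of the two estimators discussed here, on an illustrative toy chain: TD(0) bootstraps from its own value estimates, while Monte Carlo regresses on full returns.

```python
# TD(0) vs Monte Carlo policy evaluation on a 5-state deterministic chain
# with a single reward of 1 on reaching the terminal state.
import numpy as np

def episode(n_states=5):
    return [(s, 1.0 if s == n_states - 1 else 0.0) for s in range(n_states)]

V_td, V_mc = np.zeros(5), np.zeros(5)
alpha, gamma = 0.1, 0.99
for _ in range(1000):
    traj = episode()
    for (s, r), nxt in zip(traj, traj[1:] + [None]):
        v_next = V_td[nxt[0]] if nxt else 0.0            # bootstrap target
        V_td[s] += alpha * (r + gamma * v_next - V_td[s])
    ret = 0.0
    for s, r in reversed(traj):                          # full-return target
        ret = r + gamma * ret
        V_mc[s] += alpha * (ret - V_mc[s])
print(V_td.round(3), V_mc.round(3))
```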

Assessing Generative Models via Precision and Recall

4 code implementations • NeurIPS 2018 • Mehdi S. M. Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, Sylvain Gelly

Recent advances in generative modeling have led to an increased interest in the study of statistical divergences as means of model comparison.
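
A hedged sketch of the clustering-based recipe associated with this paper: quantize real and generated samples with k-means and trade precision against recall over the two cluster histograms (the cluster count and λ grid below are arbitrary choices).

```python
# PRD-style curve: precision(λ) = Σ min(λ·p, q), recall(λ) = Σ min(p, q/λ),
# over histograms p (real) and q (generated) on shared k-means clusters.
import numpy as np
from sklearn.cluster import KMeans

def prd_curve(real, gen, n_clusters=10, n_lambdas=21):
    labels = KMeans(n_clusters, n_init=10).fit_predict(np.vstack([real, gen]))
    p = np.bincount(labels[:len(real)], minlength=n_clusters) / len(real)
    q = np.bincount(labels[len(real):], minlength=n_clusters) / len(gen)
    lambdas = np.tan(np.linspace(1e-3, np.pi / 2 - 1e-3, n_lambdas))
    precision = np.array([np.minimum(lam * p, q).sum() for lam in lambdas])
    recall = np.array([np.minimum(p, q / lam).sum() for lam in lambdas])
    return precision, recall

rng = np.random.default_rng(0)
prec, rec = prd_curve(rng.normal(size=(500, 8)), rng.normal(0.3, 1.0, (500, 8)))
print(prec.max().round(3), rec.max().round(3))
```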

Competitive Training of Mixtures of Independent Deep Generative Models

no code implementations • 30 Apr 2018 • Francesco Locatello, Damien Vincent, Ilya Tolstikhin, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf

A common assumption in causal modeling posits that the data is generated by a set of independent mechanisms, and algorithms should aim to recover this structure.

Clustering

Gradient Descent Quantizes ReLU Network Features

no code implementations • 22 Mar 2018 • Hartmut Maennel, Olivier Bousquet, Sylvain Gelly

Deep neural networks are often trained in the over-parametrized regime (i.e., with far more parameters than training examples), and understanding why training converges to solutions that generalize remains an open problem.

Quantization

Online Hyper-Parameter Optimization

no code implementations • ICLR 2018 • Damien Vincent, Sylvain Gelly, Nicolas Le Roux, Olivier Bousquet

We propose an efficient online hyperparameter optimization method which uses a joint dynamical system to evaluate the gradient with respect to the hyperparameters.

Hyperparameter Optimization
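
The paper's joint dynamical system is not reproduced here; as a flavor of online hyperparameter adaptation, this sketch instead adjusts the learning rate with a hypergradient-style signal (the alignment of successive gradients), a related but simpler scheme.

```python
# Online learning-rate adaptation on a least-squares problem: raise the
# learning rate while successive gradients align, lower it when they oppose.
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 10)), rng.normal(size=50)
w, lr, beta = np.zeros(10), 1e-3, 1e-4
prev_grad = np.zeros(10)
for _ in range(200):
    grad = A.T @ (A @ w - b) / len(b)
    lr += beta * grad @ prev_grad          # hypergradient-style update
    w -= lr * grad
    prev_grad = grad
print("final lr:", lr, "loss:", 0.5 * np.mean((A @ w - b) ** 2))
```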

Wasserstein Auto-Encoders

13 code implementations • ICLR 2018 • Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, Bernhard Schoelkopf

We propose the Wasserstein Auto-Encoder (WAE), a new algorithm for building a generative model of the data distribution.
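
A sketch of the WAE-MMD variant's objective: reconstruction cost plus a penalty matching the aggregate encoded distribution to the prior. Encoder, decoder, and kernel below are toy stand-ins, and the MMD estimate is the simple biased one.

```python
# WAE-MMD-style objective: reconstruction + λ · MMD(Q_Z, P_Z).
import torch
import torch.nn as nn

def rbf_mmd(a, b, sigma=1.0):
    k = lambda x, y: torch.exp(-torch.cdist(x, y) ** 2 / (2 * sigma ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

enc, dec, lam = nn.Linear(32, 8), nn.Linear(8, 32), 10.0
x = torch.randn(64, 32)
z = enc(x)                                 # aggregate posterior samples
prior = torch.randn_like(z)                # samples from P_Z
loss = ((dec(z) - x) ** 2).mean() + lam * rbf_mmd(z, prior)
print(loss)
```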

Toward Optimal Run Racing: Application to Deep Learning Calibration

no code implementations • 10 Jun 2017 • Olivier Bousquet, Sylvain Gelly, Karol Kurach, Marc Schoenauer, Michele Sebag, Olivier Teytaud, Damien Vincent

This paper aims at one-shot learning of deep neural nets, considering a highly parallel setting to address the algorithm calibration problem: selecting the best neural architecture and learning hyper-parameter values for the dataset at hand.

One-Shot Learning Two-sample testing

From optimal transport to generative modeling: the VEGAN cookbook

1 code implementation • 22 May 2017 • Olivier Bousquet, Sylvain Gelly, Ilya Tolstikhin, Carl-Johann Simon-Gabriel, Bernhard Schoelkopf

We study unsupervised generative modeling in terms of the optimal transport (OT) problem between true (but unknown) data distribution $P_X$ and the latent variable model distribution $P_G$.
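
The objective in question, together with the encoder reformulation this line of work establishes for deterministic decoders $X = G(Z)$ (stated here from the companion WAE paper; see the papers for the exact conditions):

```latex
% Optimal transport between data and model, and its encoder form.
W_c(P_X, P_G)
  = \inf_{\Gamma \in \mathcal{P}(X \sim P_X,\, Y \sim P_G)}
      \mathbb{E}_{(X,Y) \sim \Gamma}\,[\,c(X, Y)\,]
  = \inf_{Q(Z \mid X)\,:\; Q_Z = P_Z}
      \mathbb{E}_{P_X}\,\mathbb{E}_{Q(Z \mid X)}\,[\,c(X, G(Z))\,]
```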

AdaGAN: Boosting Generative Models

1 code implementation • NeurIPS 2017 • Ilya Tolstikhin, Sylvain Gelly, Olivier Bousquet, Carl-Johann Simon-Gabriel, Bernhard Schölkopf

Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) are an effective method for training generative models of complex data such as natural images.
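
A high-level sketch of the boosting loop: repeatedly train a component on reweighted data and fold it into a growing mixture. The weighted Gaussian fit below stands in for GAN training, and the reweighting is a simplified upweight-what-the-mixture-misses rule rather than the paper's derived updates.

```python
# AdaGAN-flavored loop: each round trains a component where the current
# mixture assigns low density, then mixes it in with uniform weights.
import numpy as np

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-4, 1, 500), rng.normal(4, 1, 500)])

def train_component(x, w):                 # stand-in for fitting a GAN
    mu = np.average(x, weights=w)
    sigma = np.sqrt(np.average((x - mu) ** 2, weights=w))
    return mu, sigma

def density(x, mixture):
    return sum(beta * np.exp(-((x - mu) / sigma) ** 2 / 2)
               / (sigma * np.sqrt(2 * np.pi)) for beta, (mu, sigma) in mixture)

mixture, weights = [], np.ones(len(data)) / len(data)
for t in range(3):
    comp = train_component(data, weights)
    mixture = [(1.0 / (t + 1), c) for _, c in mixture] + [(1.0 / (t + 1), comp)]
    weights = 1.0 / (density(data, mixture) + 1e-12)   # upweight missed regions
    weights /= weights.sum()
print(mixture)
```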
