Search Results for author: Olivier Bachem

Found 35 papers, 10 papers with code

Brax -- A Differentiable Physics Engine for Large Scale Rigid Body Simulation

1 code implementation 24 Jun 2021 C. Daniel Freeman, Erik Frey, Anton Raichuk, Sertan Girgin, Igor Mordatch, Olivier Bachem

We present Brax, an open source library for rigid body simulation with a focus on performance and parallelism on accelerators, written in JAX.

OpenAI Gym

What Matters for Adversarial Imitation Learning?

no code implementations 1 Jun 2021 Manu Orsini, Anton Raichuk, Léonard Hussenot, Damien Vincent, Robert Dadashi, Sertan Girgin, Matthieu Geist, Olivier Bachem, Olivier Pietquin, Marcin Andrychowicz

To tackle this issue, we implement more than 50 of these choices in a generic adversarial imitation learning framework and investigate their impacts in a large-scale study (>500k trained agents) with both synthetic and human-generated demonstrations.

Continuous Control · Imitation Learning

A Sober Look at the Unsupervised Learning of Disentangled Representations and their Evaluation

no code implementations 27 Oct 2020 Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The idea behind the *unsupervised* learning of *disentangled* representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.

A Commentary on the Unsupervised Learning of Disentangled Representations

no code implementations 28 Jul 2020 Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The goal of the unsupervised learning of disentangled representations is to separate the independent explanatory factors of variation in the data without access to supervision.

Disentangling Factors of Variations Using Few Labels

no code implementations ICLR 2020 Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem

Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible and that existing inductive biases and unsupervised methods do not suffice to consistently learn disentangled representations.

Model Selection · Representation Learning

Automatic Shortcut Removal for Self-Supervised Representation Learning

no code implementations ICML 2020 Matthias Minderer, Olivier Bachem, Neil Houlsby, Michael Tschannen

In self-supervised visual representation learning, a feature extractor is trained on a "pretext task" for which labels can be generated cheaply, without human annotation.

Representation Learning

Weakly-Supervised Disentanglement Without Compromises

2 code implementations ICML 2020 Francesco Locatello, Ben Poole, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem, Michael Tschannen

Third, we perform a large-scale empirical study and show that such pairs of observations are sufficient to reliably learn disentangled representations on several benchmark data sets.

Fairness

Google Research Football: A Novel Reinforcement Learning Environment

1 code implementation 25 Jul 2019 Karol Kurach, Anton Raichuk, Piotr Stańczyk, Michał Zając, Olivier Bachem, Lasse Espeholt, Carlos Riquelme, Damien Vincent, Marcin Michalski, Olivier Bousquet, Sylvain Gelly

Recent progress in the field of reinforcement learning has been accelerated by virtual learning environments such as video games, where novel algorithms and ideas can be quickly tested in a safe and reproducible manner.

Game of Football

On the Fairness of Disentangled Representations

no code implementations NeurIPS 2019 Francesco Locatello, Gabriele Abbati, Tom Rainforth, Stefan Bauer, Bernhard Schölkopf, Olivier Bachem

Recently there has been a significant interest in learning disentangled representations, as they promise increased interpretability, generalization to unseen scenarios and faster learning on downstream tasks.

Fairness

Precision-Recall Curves Using Information Divergence Frontiers

no code implementations 26 May 2019 Josip Djolonga, Mario Lucic, Marco Cuturi, Olivier Bachem, Olivier Bousquet, Sylvain Gelly

Despite the tremendous progress in the estimation of generative models, the development of tools for diagnosing their failures and assessing their performance has advanced at a much slower pace.

Image Generation · Information Retrieval

Disentangling Factors of Variation Using Few Labels

no code implementations 3 May 2019 Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem

Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible and that existing inductive biases and unsupervised methods do not suffice to consistently learn disentangled representations.

Model Selection · Representation Learning

Recent Advances in Autoencoder-Based Representation Learning

no code implementations 12 Dec 2018 Michael Tschannen, Olivier Bachem, Mario Lucic

Finally, we provide an analysis of autoencoder-based representation learning through the lens of rate-distortion theory and identify a clear tradeoff between the amount of prior knowledge available about the downstream task and how useful the representation is for that task.

Representation Learning

Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

5 code implementations ICML 2019 Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.

Representation Learning

Assessing Generative Models via Precision and Recall

4 code implementations NeurIPS 2018 Mehdi S. M. Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, Sylvain Gelly

Recent advances in generative modeling have led to an increased interest in the study of statistical divergences as means of model comparison.
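
The precision/recall notion used in this line of work compares two distributions by sweeping a trade-off parameter λ. A minimal pure-Python sketch of one common min-based formulation over discrete histograms (the function name, the angle grid, and the argument order are our illustration, not the paper's reference code):

```python
import math

def prd_curve(p, q, num_angles=11):
    """Precision-recall pairs (alpha, beta) between two discrete
    distributions p (reference) and q (model) over shared bins."""
    pairs = []
    for i in range(1, num_angles + 1):
        # sweep the trade-off slope lambda over (0, inf) via an angle grid
        lam = math.tan(i / (num_angles + 1) * math.pi / 2)
        alpha = sum(min(lam * pv, qv) for pv, qv in zip(p, q))  # precision
        beta = alpha / lam                                       # recall
        pairs.append((alpha, beta))
    return pairs
```

For identical histograms the curve passes through (1, 1); for disjoint supports it collapses to zero precision and recall.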

Distributed and Provably Good Seedings for k-Means in Constant Rounds

no code implementations ICML 2017 Olivier Bachem, Mario Lucic, Andreas Krause

The k-Means++ algorithm is the state-of-the-art algorithm for solving k-Means clustering problems, as the computed clusterings are O(log k)-competitive in expectation.
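
The D²-sampling scheme behind k-Means++ can be sketched in a few lines of pure Python (a hedged illustration; `kmeans_pp_seeding` is our naming, not code from the paper):

```python
import random

def kmeans_pp_seeding(points, k, rng=random.Random(0)):
    """Pick k initial centers via D^2 sampling (k-Means++ seeding)."""
    centers = [rng.choice(points)]  # first center: uniform at random
    for _ in range(k - 1):
        # squared distance of each point to its nearest chosen center
        d2 = [min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
              for p in points]
        # sample the next center with probability proportional to d2
        r = rng.random() * sum(d2)
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers
```

Sampling proportionally to the squared distance to the nearest existing center is what yields the O(log k) competitiveness in expectation.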

Uniform Deviation Bounds for k-Means Clustering

no code implementations ICML 2017 Olivier Bachem, Mario Lucic, S. Hamed Hassani, Andreas Krause

In this paper, we provide a novel framework to obtain uniform deviation bounds for loss functions which are unbounded.

Practical Coreset Constructions for Machine Learning

2 code implementations 19 Mar 2017 Olivier Bachem, Mario Lucic, Andreas Krause

We investigate coresets - succinct, small summaries of large data sets - so that solutions found on the summary are provably competitive with solutions found on the full data set.

Scalable k-Means Clustering via Lightweight Coresets

no code implementations 27 Feb 2017 Olivier Bachem, Mario Lucic, Andreas Krause

As such, they have been successfully used to scale up clustering models to massive data sets.
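
A lightweight-coreset-style construction can be sketched as follows, assuming the mixture of uniform and distance-to-the-mean importance sampling described in the paper (function name and details are our illustration):

```python
import random

def lightweight_coreset(points, m, rng=random.Random(0)):
    """Sample a weighted coreset of size m for k-Means-style costs."""
    n = len(points)
    mean = [sum(col) / n for col in zip(*points)]
    d2 = [sum((a - b) ** 2 for a, b in zip(p, mean)) for p in points]
    total = sum(d2)
    if total > 0:
        # half uniform, half proportional to squared distance from the mean
        q = [0.5 / n + 0.5 * d / total for d in d2]
    else:
        q = [1.0 / n] * n  # all points identical: plain uniform sampling
    idx = rng.choices(range(n), weights=q, k=m)
    # importance weights keep the coreset cost an unbiased estimate
    return [(points[i], 1.0 / (m * q[i])) for i in idx]
```

Unlike classical coreset constructions, this needs only two passes over the data (one for the mean, one for sampling), which is what makes it attractive for massive data sets.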

Data Summarization

Uniform Deviation Bounds for Unbounded Loss Functions like k-Means

no code implementations 27 Feb 2017 Olivier Bachem, Mario Lucic, S. Hamed Hassani, Andreas Krause

In this paper, we provide a novel framework to obtain uniform deviation bounds for loss functions which are *unbounded*.

Fast and Provably Good Seedings for k-Means

no code implementations NeurIPS 2016 Olivier Bachem, Mario Lucic, Hamed Hassani, Andreas Krause

Seeding - the task of finding initial cluster centers - is critical in obtaining high-quality clusterings for k-Means.

Horizontally Scalable Submodular Maximization

no code implementations 31 May 2016 Mario Lucic, Olivier Bachem, Morteza Zadimoghaddam, Andreas Krause

A variety of large-scale machine learning problems can be cast as instances of constrained submodular maximization.
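
As background for constrained submodular maximization, the classic greedy algorithm under a cardinality constraint achieves a 1 - 1/e approximation for monotone submodular functions; this is the textbook sequential method, not the paper's distributed algorithm:

```python
def greedy_submodular(ground, f, k):
    """Greedy maximization of a monotone submodular function f
    under the cardinality constraint |S| <= k (1 - 1/e guarantee)."""
    S = set()
    for _ in range(k):
        # element with the largest marginal gain f(S + x) - f(S)
        best = max((x for x in ground if x not in S),
                   key=lambda x: f(S | {x}) - f(S), default=None)
        if best is None or f(S | {best}) - f(S) <= 0:
            break
        S.add(best)
    return S
```

With a set-cover objective, for example, the greedy rule naturally picks sets that cover complementary elements.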

Strong Coresets for Hard and Soft Bregman Clustering with Applications to Exponential Family Mixtures

no code implementations 21 Aug 2015 Mario Lucic, Olivier Bachem, Andreas Krause

We propose a single, practical algorithm to construct strong coresets for a large class of hard and soft clustering problems based on Bregman divergences.
