no code implementations • 9 Feb 2023 • Pierre H. Richemond, Allison Tam, Yunhao Tang, Florian Strub, Bilal Piot, Felix Hill

With simple linear algebra, we show that when a linear predictor is used, the optimal predictor is close to an orthogonal projection, and we propose a general framework based on orthonormalization that makes it possible to interpret, and build intuition for, why BYOL works.
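The claim above turns on the defining properties of an orthogonal projection. A minimal sketch (a toy subspace, not the paper's actual predictor dynamics): build a projection via orthonormalization (QR) and check it is symmetric and idempotent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthogonal projection onto a random 4-dim subspace of R^8:
# QR gives U with orthonormal columns, so P = U @ U.T projects onto span(U).
A = rng.normal(size=(8, 4))
U, _ = np.linalg.qr(A)
P = U @ U.T

# Defining properties of an orthogonal projection: symmetric and idempotent.
assert np.allclose(P, P.T)
assert np.allclose(P @ P, P)

# Its eigenvalues are exactly 0 or 1 (four ones, one per subspace dimension).
eigs = np.linalg.eigvalsh(P)
print(np.round(eigs, 6))
```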

2 code implementations • 12 Jan 2023 • Matko Bošnjak, Pierre H. Richemond, Nenad Tomasev, Florian Strub, Jacob C. Walker, Felix Hill, Lars Holger Buesing, Razvan Pascanu, Charles Blundell, Jovana Mitrovic

We propose a new semi-supervised learning method, Semantic Positives via Pseudo-Labels (SemPPL), that combines labelled and unlabelled data to learn informative representations.
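One ingredient of combining labelled and unlabelled data is pseudo-labelling. A hedged sketch — not SemPPL's exact procedure, just the generic idea of assigning "semantic positives" by a k-nearest-neighbour vote of labelled embeddings (the helper name and toy data are illustrative):

```python
import numpy as np

def knn_pseudo_labels(unlabelled, labelled, labels, k=3):
    """Assign each unlabelled embedding the majority label of its
    k nearest labelled embeddings (Euclidean distance)."""
    d = np.linalg.norm(unlabelled[:, None, :] - labelled[None, :, :], axis=-1)
    nearest = np.argsort(d, axis=1)[:, :k]   # (n_unlabelled, k) neighbour indices
    votes = labels[nearest]                  # labels of those neighbours
    return np.array([np.bincount(v).argmax() for v in votes])

# Two well-separated toy clusters: pseudo-labels recover the cluster identity.
rng = np.random.default_rng(1)
lab = np.concatenate([rng.normal(0, 0.1, (10, 2)), rng.normal(5, 0.1, (10, 2))])
y = np.array([0] * 10 + [1] * 10)
unlab = np.concatenate([rng.normal(0, 0.1, (5, 2)), rng.normal(5, 0.1, (5, 2))])
print(knn_pseudo_labels(unlab, lab, y))  # -> [0 0 0 0 0 1 1 1 1 1]
```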

no code implementations • 28 Nov 2022 • Sander Dieleman, Laurent Sartran, Arman Roshannai, Nikolay Savinov, Yaroslav Ganin, Pierre H. Richemond, Arnaud Doucet, Robin Strudel, Chris Dyer, Conor Durkan, Curtis Hawthorne, Rémi Leblond, Will Grathwohl, Jonas Adler

Diffusion models have quickly become the go-to paradigm for generative modelling of perceptual signals (such as images and sound) through iterative refinement.
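The iterative-refinement loop can be made concrete in a toy setting where no learned network is needed: for 1-D Gaussian data the exact score is available in closed form, so DDPM-style ancestral sampling can be run end to end (the schedule and sizes below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
abar = np.cumprod(alphas)

s = 2.0  # data distribution: x0 ~ N(0, s^2), so every forward marginal is Gaussian
var = abar * s**2 + (1.0 - abar)  # marginal variance of x_t

n = 2000
x = rng.normal(size=n)  # start from the prior N(0, 1)
for t in range(T - 1, -1, -1):
    score = -x / var[t]                       # exact score of N(0, var_t)
    mean = (x + betas[t] * score) / np.sqrt(alphas[t])
    noise = rng.normal(size=n) if t > 0 else 0.0
    x = mean + np.sqrt(betas[t]) * noise      # one ancestral refinement step

print(x.std())  # close to s = 2.0
```

In a real diffusion model the analytic score is replaced by a trained network; everything else about the refinement loop has the same shape.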

no code implementations • 26 Oct 2022 • Pierre H. Richemond, Sander Dieleman, Arnaud Doucet

Diffusion models typically operate in the standard framework of generative modelling by producing continuous-valued datapoints.

4 code implementations • 22 Apr 2022 • Stephanie C. Y. Chan, Adam Santoro, Andrew K. Lampinen, Jane X. Wang, Aaditya Singh, Pierre H. Richemond, Jay McClelland, Felix Hill

In further experiments, we found that naturalistic data distributions elicited in-context learning only in transformers, not in recurrent models.

1 code implementation • 15 Mar 2022 • Stephanie C. Y. Chan, Andrew K. Lampinen, Pierre H. Richemond, Felix Hill

As humans and animals learn in the natural world, they encounter distributions of entities, situations and events that are far from uniform.
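Such non-uniform, "naturalistic" distributions are commonly modelled as Zipfian, with frequency inversely proportional to rank. A small sketch (the exponent and class count are assumptions, not values from the paper):

```python
import numpy as np

def zipfian_probs(n_classes, exponent=1.0):
    """P(class at rank k) proportional to 1 / k^exponent (ranks start at 1)."""
    ranks = np.arange(1, n_classes + 1)
    w = 1.0 / ranks**exponent
    return w / w.sum()

p = zipfian_probs(1000)
rng = np.random.default_rng(0)
draws = rng.choice(1000, size=10_000, p=p)  # sample classes Zipf-style

# Heavy head, long tail: the top 10 ranks alone cover ~39% of the mass.
print(p[:10].sum())
```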

3 code implementations • 20 Oct 2020 • Pierre H. Richemond, Jean-bastien Grill, Florent Altché, Corentin Tallec, Florian Strub, Andrew Brock, Samuel Smith, Soham De, Razvan Pascanu, Bilal Piot, Michal Valko

Bootstrap Your Own Latent (BYOL) is a self-supervised learning approach for image representation.

31 code implementations • 13 Jun 2020 • Jean-bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, Michal Valko

From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view.
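The update described above can be sketched with toy linear networks (shapes and the τ value are illustrative, not the paper's): an online encoder plus predictor chases a stop-gradient target encoder, which is in turn updated as an exponential moving average of the online weights.

```python
import numpy as np

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
W_online = rng.normal(size=(8, 4))   # online encoder (toy: a single linear map)
W_target = rng.normal(size=(8, 4))   # target encoder (receives no gradients)
W_pred = rng.normal(size=(4, 4))     # predictor on top of the online network

def byol_loss(view1, view2):
    """Normalized MSE between the online prediction of view1 and the
    target projection of view2; equals 2 - 2 * cosine similarity."""
    p = l2norm(l2norm(view1 @ W_online) @ W_pred)
    z = l2norm(view2 @ W_target)     # treated as a constant target
    return np.mean(np.sum((p - z) ** 2, axis=-1))

x = rng.normal(size=(16, 8))         # a batch standing in for augmented views
loss = byol_loss(x, x + 0.1 * rng.normal(size=x.shape))

# Exponential moving average of the online weights into the target network.
tau = 0.99
W_target = tau * W_target + (1 - tau) * W_online
print(loss)
```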

Ranked #2 on Self-Supervised Person Re-Identification on SYSU-30k

Tasks: Representation Learning, Self-Supervised Image Classification, and 3 more

no code implementations • 25 Nov 2019 • Pierre H. Richemond, Arinbjörn Kolbeinsson, Yike Guo

Deep reinforcement learning comes at a heavy price in sample efficiency and in the overparameterization of the neural networks used for function approximation.

no code implementations • 25 Sep 2019 • Pierre H. Richemond, Arinbjorn Kolbeinsson, Yike Guo

Deep reinforcement learning comes at a heavy price in sample efficiency and in the overparameterization of the neural networks used for function approximation.

no code implementations • 3 May 2019 • Pierre H. Richemond, Yike Guo

Recent seminal work at the intersection of deep neural network practice and random matrix theory has linked the convergence speed and robustness of these networks to the combination of random weight initialization and nonlinear activation function in use.
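One intuition from this line of work can be sketched numerically (a deep *linear* stack with no nonlinearity, so this is a simplification of the paper's setting): products of random orthogonal matrices preserve signal norms exactly, while products of i.i.d. Gaussian matrices let the norm drift.

```python
import numpy as np

rng = np.random.default_rng(0)
d, depth = 64, 50

def propagate(make_layer):
    """Push a unit-norm vector through `depth` random layers; return its norm."""
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)
    for _ in range(depth):
        x = make_layer() @ x
    return np.linalg.norm(x)

# i.i.d. Gaussian entries with variance 1/d (Glorot-style scaling)
gaussian = lambda: rng.normal(scale=1 / np.sqrt(d), size=(d, d))
# random orthogonal matrix: exactly norm-preserving at every layer
orthogonal = lambda: np.linalg.qr(rng.normal(size=(d, d)))[0]

orth_norm = propagate(orthogonal)
gauss_norm = propagate(gaussian)
print(orth_norm)    # exactly 1.0 up to float error
print(gauss_norm)   # drifts away from 1 as depth grows
```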

no code implementations • 7 Feb 2019 • Pierre H. Richemond, Yike Guo

The role of $L^2$ regularization, in the specific case of deep neural networks rather than more traditional machine learning models, is still not fully elucidated.
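One concrete fact in this area: for vanilla SGD, adding an explicit penalty (λ/2)‖w‖² to the loss is step-for-step identical to shrinking the weights by (1 − ηλ) before the gradient step — an equivalence that breaks down for adaptive optimizers. A one-step sketch (toy sizes and constants):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)
grad = rng.normal(size=5)        # gradient of the data loss at w
lr, lam = 0.1, 0.01

# (a) explicit L2 penalty: loss + (lam/2) * ||w||^2 adds lam * w to the gradient
w_l2 = w - lr * (grad + lam * w)

# (b) weight decay: shrink the weights, then take the plain SGD step
w_wd = (1 - lr * lam) * w - lr * grad

assert np.allclose(w_l2, w_wd)   # identical for vanilla SGD
```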

no code implementations • ICLR 2018 • Pierre H. Richemond, Brendan Maginnis

Two main families of reinforcement learning algorithms, Q-learning and policy gradients, have recently been proven equivalent when a softmax relaxation is applied to one and an entropic regularization to the other.

no code implementations • ICLR 2018 • Pierre H. Richemond, Brendan Maginnis

We derive policy gradients where the change in policy is limited to a small Wasserstein distance (or trust region).

no code implementations • 22 Dec 2017 • Pierre H. Richemond, Brendan Maginnis

Two main families of reinforcement learning algorithms, Q-learning and policy gradients, have recently been proven equivalent when a softmax relaxation is applied to one and an entropic regularization to the other.

no code implementations • 19 Dec 2017 • Pierre H. Richemond, Brendan Maginnis

We derive policy gradients where the change in policy is limited to a small Wasserstein distance (or trust region).

no code implementations • ICLR 2018 • Brendan Maginnis, Pierre H. Richemond

On tasks with a single output, the RWA, RDA and GRU units learn much more quickly than the LSTM and achieve better performance.
