no code implementations • 8 Mar 2024 • Yazhe Li, Jorg Bornschein, Ting Chen
In this paper, we explore a new generative approach for learning visual representations.
no code implementations • 3 Mar 2024 • Jorg Bornschein, Yazhe Li, Amal Rannen-Triki
Inspired by the in-context learning capabilities of transformers and their connection to meta-learning, we propose a method that leverages these strengths for online continual learning.
1 code implementation • 20 Feb 2024 • Roman Pogodin, Antonin Schrab, Yazhe Li, Danica J. Sutherland, Arthur Gretton
We describe a data-efficient, kernel-based approach to statistical testing of conditional independence.
no code implementations • 19 Feb 2023 • Yazhe Li, Jorg Bornschein, Marcus Hutter
Although much of the success of Deep Learning builds on learning good representations, a rigorous method to evaluate their quality is lacking.
no code implementations • 8 Feb 2023 • Jacob Walker, Eszter Vértes, Yazhe Li, Gabriel Dulac-Arnold, Ankesh Anand, Théophane Weber, Jessica B. Hamrick
Our results show that intrinsic exploration combined with environment models presents a viable direction towards agents that are self-supervised and able to generalize to novel reward functions.
1 code implementation • 16 Dec 2022 • Roman Pogodin, Namrata Deka, Yazhe Li, Danica J. Sutherland, Victor Veitch, Arthur Gretton
The procedure requires just a single ridge regression from $Y$ to kernelized features of $Z$, which can be done in advance.
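As a rough illustration of that precomputation step, here is a minimal kernel ridge regression sketch; the function name and the regularization parameter `lam` are illustrative choices, not taken from the paper.

```python
import numpy as np

def ridge_regression_in_advance(K_zz, Y, lam=1e-3):
    # Kernel ridge regression linking Y and kernelized features of Z.
    # alpha solves (K_zz + lam * n * I) alpha = Y, so predictions at a new
    # point z are K(z, Z) @ alpha; this solve happens once, in advance.
    n = K_zz.shape[0]
    return np.linalg.solve(K_zz + lam * n * np.eye(n), Y)
```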
no code implementations • 14 Oct 2022 • Jorg Bornschein, Yazhe Li, Marcus Hutter
In the prequential formulation of MDL, the objective is to minimize the cumulative next-step log-loss when sequentially going through the data and using previous observations for parameter estimation.
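Concretely, writing $\hat{\theta}_{<t}$ for the parameters estimated from the first $t-1$ observations, the cumulative next-step log-loss takes the form (notation ours, not the paper's):

```latex
\mathcal{L}_{\text{preq}}(x_{1:T}) \;=\; \sum_{t=1}^{T} -\log p_{\hat{\theta}_{<t}}\!\left(x_t \mid x_{<t}\right)
```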
no code implementations • 30 Sep 2022 • Skanda Koppula, Yazhe Li, Evan Shelhamer, Andrew Jaegle, Nikhil Parthasarathy, Relja Arandjelovic, João Carreira, Olivier Hénaff
Self-supervised methods have achieved remarkable success in transfer learning, often matching or exceeding the accuracy of supervised pre-training.
no code implementations • ICLR 2022 • Ankesh Anand, Jacob Walker, Yazhe Li, Eszter Vértes, Julian Schrittwieser, Sherjil Ozair, Théophane Weber, Jessica B. Hamrick
One of the key promises of model-based reinforcement learning is the ability to generalize using an internal model of the world to make predictions in novel environments and tasks.
Ranked #1 on Meta-Learning on ML10 (Meta-test success rate (zero-shot) metric)
1 code implementation • NeurIPS 2021 • Yazhe Li, Roman Pogodin, Danica J. Sutherland, Arthur Gretton
We approach self-supervised learning of image representations from a statistical dependence perspective, proposing Self-Supervised Learning with the Hilbert-Schmidt Independence Criterion (SSL-HSIC).
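For reference, the standard biased empirical HSIC estimator that such an objective builds on fits in a few lines; this is a generic sketch, and SSL-HSIC's actual estimator and kernel choices may differ.

```python
import numpy as np

def hsic_biased(K, L):
    # Biased empirical HSIC: trace(K H L H) / (n - 1)^2, where K and L are
    # kernel matrices of the two variables and H is the centering matrix.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```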
no code implementations • 8 Jun 2021 • Sherjil Ozair, Yazhe Li, Ali Razavi, Ioannis Antonoglou, Aäron van den Oord, Oriol Vinyals
Our key insight is to use discrete autoencoders to capture the multiple possible effects of an action in a stochastic environment.
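The core operation of such a discrete autoencoder is a vector-quantization bottleneck; a minimal nearest-neighbour lookup might look like the following sketch, where the codebook shape and names are illustrative.

```python
import numpy as np

def vq_quantize(z, codebook):
    # Map each continuous latent vector to its nearest codebook entry,
    # yielding a discrete code per latent (the index) plus its embedding.
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K) squared distances
    idx = d.argmin(axis=1)
    return codebook[idx], idx
```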
no code implementations • 14 Oct 2019 • Cristina Gârbacea, Aäron van den Oord, Yazhe Li, Felicia S. C. Lim, Alejandro Luebs, Oriol Vinyals, Thomas C. Walters
In order to efficiently transmit and store speech signals, speech codecs create a minimally redundant representation of the input signal, which is then decoded at the receiver with the best possible perceptual quality.
28 code implementations • 10 Jul 2018 • Aaron van den Oord, Yazhe Li, Oriol Vinyals
The key insight of our model is to learn such representations by predicting the future in latent space using powerful autoregressive models.
Ranked #30 on Semi-Supervised Image Classification on ImageNet - 1% labeled data (Top 5 Accuracy metric)
Representation Learning, Self-Supervised Image Classification, +1
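A minimal NumPy sketch of the InfoNCE-style contrastive loss used for this kind of latent-space prediction; the bilinear score $z^\top W c$ follows the paper, while the batch-as-negatives setup and names here are illustrative.

```python
import numpy as np

def info_nce_loss(c, z_future, W):
    # Scores f(x_{t+k}, c_t) = z_{t+k}^T W c_t for all pairs in the batch;
    # the positive pair sits on the diagonal, other items act as negatives.
    scores = z_future @ W @ c.T                                # (B, B)
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.diag(log_probs).mean()                          # cross-entropy on positives
```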
8 code implementations • 2 Jan 2018 • Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy Lillicrap, Martin Riedmiller
The DeepMind Control Suite is a set of continuous control tasks with a standardised structure and interpretable rewards, intended to serve as performance benchmarks for reinforcement learning agents.
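A minimal random-policy loop for the suite, assuming the dm_control package is installed:

```python
import numpy as np
from dm_control import suite

env = suite.load(domain_name="cartpole", task_name="swingup")
spec = env.action_spec()

time_step = env.reset()
while not time_step.last():
    # Sample a uniformly random action within the bounded action spec.
    action = np.random.uniform(spec.minimum, spec.maximum, size=spec.shape)
    time_step = env.step(action)  # TimeStep with .reward and .observation
```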
2 code implementations • ICML 2018 • Aaron van den Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol Vinyals, Koray Kavukcuoglu, George van den Driessche, Edward Lockhart, Luis C. Cobo, Florian Stimberg, Norman Casagrande, Dominik Grewe, Seb Noury, Sander Dieleman, Erich Elsen, Nal Kalchbrenner, Heiga Zen, Alex Graves, Helen King, Tom Walters, Dan Belov, Demis Hassabis
The recently developed WaveNet architecture is the current state of the art in realistic speech synthesis, consistently rated as more natural sounding than any previous system across many different languages.
3 code implementations • 14 Jun 2016 • Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z. Leibo, Jack Rae, Daan Wierstra, Demis Hassabis
State-of-the-art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance.