Search Results for author: Yazhe Li

Found 16 papers, 7 papers with code

Denoising Autoregressive Representation Learning

no code implementations • 8 Mar 2024 • Yazhe Li, Jorg Bornschein, Ting Chen

In this paper, we explore a new generative approach for learning visual representations.

Denoising · Image Generation +1

Transformers for Supervised Online Continual Learning

no code implementations • 3 Mar 2024 • Jorg Bornschein, Yazhe Li, Amal Rannen-Triki

Inspired by the in-context learning capabilities of transformers and their connection to meta-learning, we propose a method that leverages these strengths for online continual learning.

Continual Learning · Few-Shot Learning +2

Practical Kernel Tests of Conditional Independence

1 code implementation • 20 Feb 2024 • Roman Pogodin, Antonin Schrab, Yazhe Li, Danica J. Sutherland, Arthur Gretton

We describe a data-efficient, kernel-based approach to statistical testing of conditional independence.

Evaluating Representations with Readout Model Switching

no code implementations • 19 Feb 2023 • Yazhe Li, Jorg Bornschein, Marcus Hutter

Although much of the success of Deep Learning builds on learning good representations, a rigorous method to evaluate their quality is lacking.

Model Selection

Investigating the role of model-based learning in exploration and transfer

no code implementations • 8 Feb 2023 • Jacob Walker, Eszter Vértes, Yazhe Li, Gabriel Dulac-Arnold, Ankesh Anand, Théophane Weber, Jessica B. Hamrick

Our results show that intrinsic exploration combined with environment models present a viable direction towards agents that are self-supervised and able to generalize to novel reward functions.

Transfer Learning

Efficient Conditionally Invariant Representation Learning

1 code implementation • 16 Dec 2022 • Roman Pogodin, Namrata Deka, Yazhe Li, Danica J. Sutherland, Victor Veitch, Arthur Gretton

The procedure requires just a single ridge regression from $Y$ to kernelized features of $Z$, which can be done in advance.

Fairness · regression +1
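
The single-ridge-regression idea can be sketched in a few lines. Everything below — the random Fourier feature map standing in for "kernelized features of $Z$", the shapes, and the regularisation strength — is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
Y = rng.normal(size=(n, 1))             # conditioning variable
Z = Y + 0.1 * rng.normal(size=(n, 1))   # variable whose features depend on Y

# Hypothetical kernelized features of Z: random Fourier features for an RBF kernel.
D = 50
W = rng.normal(size=(1, D))
b = rng.uniform(0, 2 * np.pi, size=D)
Phi = np.sqrt(2.0 / D) * np.cos(Z @ W + b)   # (n, D) feature map

# A single ridge regression from Y to the features of Z, solved in closed form.
# This can be done in advance, before any representation learning.
lam = 1e-2
A = np.linalg.solve(Y.T @ Y + lam * np.eye(1), Y.T @ Phi)   # (1, D) coefficients
residuals = Phi - Y @ A   # feature components not linearly explained by Y
```

The residuals are then what a downstream invariance penalty would operate on; by the normal equations they are nearly uncorrelated with `Y`.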

Sequential Learning Of Neural Networks for Prequential MDL

no code implementations • 14 Oct 2022 • Jorg Bornschein, Yazhe Li, Marcus Hutter

In the prequential formulation of MDL, the objective is to minimize the cumulative next-step log-loss when sequentially going through the data and using previous observations for parameter estimation.

Image Classification
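
As a toy illustration of the prequential objective, the sketch below accumulates the next-step log-loss over a binary sequence, fitting only on previous observations at each step. The Laplace-smoothed Bernoulli model is my simplifying choice; the paper studies neural networks:

```python
import math

def prequential_codelength(bits):
    """Cumulative next-step log-loss (in nats) for a binary sequence,
    using a Laplace-smoothed Bernoulli model fit on previous data only."""
    ones, total, loss = 0, 0, 0.0
    for x in bits:
        p_one = (ones + 1) / (total + 2)   # estimated before seeing x
        loss += -math.log(p_one if x == 1 else 1 - p_one)
        ones += x
        total += 1
    return loss

# A predictable sequence compresses below chance level (log 2 nats per symbol).
L = prequential_codelength([1] * 100)
```

Minimising this cumulative loss is equivalent to minimising the prequential MDL codelength of the data under the model class.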

Procedural Generalization by Planning with Self-Supervised World Models

no code implementations • ICLR 2022 • Ankesh Anand, Jacob Walker, Yazhe Li, Eszter Vértes, Julian Schrittwieser, Sherjil Ozair, Théophane Weber, Jessica B. Hamrick

One of the key promises of model-based reinforcement learning is the ability to generalize using an internal model of the world to make predictions in novel environments and tasks.

Ranked #1 on Meta-Learning on ML10 (Meta-test success rate (zero-shot) metric)

Benchmarking · Meta-Learning +2

Self-Supervised Learning with Kernel Dependence Maximization

1 code implementation • NeurIPS 2021 • Yazhe Li, Roman Pogodin, Danica J. Sutherland, Arthur Gretton

We approach self-supervised learning of image representations from a statistical dependence perspective, proposing Self-Supervised Learning with the Hilbert-Schmidt Independence Criterion (SSL-HSIC).

Depth Estimation · Object Recognition +2
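
A minimal sketch of the (biased) empirical Hilbert-Schmidt Independence Criterion that SSL-HSIC builds on. The RBF kernels, fixed bandwidth, and sample sizes are arbitrary illustrative choices:

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    """Gram matrix of an RBF kernel with bandwidth sigma."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma**2))

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC: tr(K H L H) / (n-1)^2, H the centring matrix."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = rbf_gram(X, sigma), rbf_gram(Y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
Y_dep = X + 0.1 * rng.normal(size=(100, 2))    # statistically dependent on X
Y_indep = rng.normal(size=(100, 2))            # independent of X
```

Dependent pairs score higher than independent ones, which is the signal a dependence-maximising self-supervised objective exploits.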

Vector Quantized Models for Planning

no code implementations • 8 Jun 2021 • Sherjil Ozair, Yazhe Li, Ali Razavi, Ioannis Antonoglou, Aäron van den Oord, Oriol Vinyals

Our key insight is to use discrete autoencoders to capture the multiple possible effects of an action in a stochastic environment.
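
The discrete-autoencoder idea rests on nearest-neighbour codebook lookup. A hypothetical VQ-style quantiser (codebook size, dimensions, and variable names below are made up for illustration) looks like:

```python
import numpy as np

def quantize(z, codebook):
    """Map each latent vector to its nearest codebook entry (VQ-VAE style)."""
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)   # (n, K) distances
    idx = d2.argmin(axis=1)                                       # discrete codes
    return codebook[idx], idx

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))                # K = 8 discrete codes
z = codebook[3] + 0.01 * rng.normal(size=(5, 4))  # latents clustered near code 3
zq, idx = quantize(z, codebook)
```

Planning then operates over the discrete code indices rather than raw observations, so the multiple possible outcomes of an action map to a small set of codes.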

Low Bit-Rate Speech Coding with VQ-VAE and a WaveNet Decoder

no code implementations • 14 Oct 2019 • Cristina Gârbacea, Aäron van den Oord, Yazhe Li, Felicia S. C. Lim, Alejandro Luebs, Oriol Vinyals, Thomas C. Walters

In order to efficiently transmit and store speech signals, speech codecs create a minimally redundant representation of the input signal which is then decoded at the receiver with the best possible perceptual quality.

Representation Learning with Contrastive Predictive Coding

28 code implementations • 10 Jul 2018 • Aaron van den Oord, Yazhe Li, Oriol Vinyals

The key insight of our model is to learn such representations by predicting the future in latent space by using powerful autoregressive models.

Representation Learning · Self-Supervised Image Classification +1
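
CPC trains with the InfoNCE objective: each context vector must pick out its own future representation among all candidates in the batch. A numpy sketch of that loss, where the log-bilinear score and the identity projection `W` are simplifying assumptions rather than the paper's architecture:

```python
import numpy as np

def info_nce(context, future, W):
    """InfoNCE: score each context against every candidate future;
    the positive pair sits on the diagonal. Returns mean cross-entropy."""
    scores = context @ W @ future.T                  # (n, n) log-bilinear scores
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
n, d = 32, 16
c = rng.normal(size=(n, d))                          # context representations
f_pos = c + 0.05 * rng.normal(size=(n, d))           # aligned future representations
loss = info_nce(c, f_pos, np.eye(d))
```

Minimising this loss maximises a lower bound on the mutual information between context and future, which is why it yields useful representations without labels.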

DeepMind Control Suite

8 code implementations • 2 Jan 2018 • Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy Lillicrap, Martin Riedmiller

The DeepMind Control Suite is a set of continuous control tasks with a standardised structure and interpretable rewards, intended to serve as performance benchmarks for reinforcement learning agents.

Continuous Control · reinforcement-learning +1

Model-Free Episodic Control

3 code implementations • 14 Jun 2016 • Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z. Leibo, Jack Rae, Daan Wierstra, Demis Hassabis

State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance.

Decision Making · Hippocampus +2
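
Episodic control sidesteps slow gradient-based value learning by storing, for each state-action pair, the highest Monte Carlo return ever achieved from it. A minimal dictionary-based sketch; the tabular keys are a simplification, as the paper generalises to novel states with k-nearest-neighbour lookup over embeddings:

```python
def episodic_update(qec, episode, gamma=0.99):
    """Model-free episodic control update: keep the best discounted return
    ever obtained from each (state, action), from a completed episode."""
    ret = 0.0
    for state, action, reward in reversed(episode):
        ret = reward + gamma * ret
        key = (state, action)
        qec[key] = max(qec.get(key, float("-inf")), ret)
    return qec

qec = {}
episodic_update(qec, [("s0", "a", 0.0), ("s1", "b", 1.0)])
episodic_update(qec, [("s0", "a", 0.0), ("s1", "b", 0.5)])
```

Acting greedily with respect to this table lets an agent immediately re-use its single best experience, which is what cuts the millions of interactions that gradient-based deep RL needs.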
