Search Results for author: Lucas Caccia

Found 11 papers, 6 papers with code

On Anytime Learning at Macroscale

no code implementations 17 Jun 2021 Lucas Caccia, Jing Xu, Myle Ott, Marc'Aurelio Ranzato, Ludovic Denoyer

In this work, we consider such a streaming learning setting, which we dub anytime learning at macroscale (ALMA).

SPeCiaL: Self-Supervised Pretraining for Continual Learning

no code implementations 16 Jun 2021 Lucas Caccia, Joelle Pineau

This paper presents SPeCiaL: a method for unsupervised pretraining of representations tailored for continual learning.

Continual Learning, Few-Shot Learning

Decoupled Greedy Learning of CNNs for Synchronous and Asynchronous Distributed Learning

no code implementations 11 Jun 2021 Eugene Belilovsky, Louis Leconte, Lucas Caccia, Michael Eickenberg, Edouard Oyallon

With the use of a replay buffer we show that this approach can be extended to asynchronous settings, where modules can operate and continue to update with possibly large communication delays.
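The role the replay buffer plays in this asynchronous setting can be sketched as follows: an upstream module pushes intermediate activations into a bounded buffer, and a downstream module keeps sampling (possibly stale) activations to form training batches even while communication is delayed. This is an illustrative sketch, not the authors' implementation; the class and its interface are assumptions.

```python
import random
from collections import deque

class ActivationReplayBuffer:
    """Hypothetical buffer of (activation, target) pairs from an upstream
    module, letting a downstream module keep updating despite delays."""

    def __init__(self, capacity=1000):
        # Bounded storage: oldest activations are evicted automatically.
        self.buffer = deque(maxlen=capacity)

    def push(self, activation, target):
        self.buffer.append((activation, target))

    def sample(self, batch_size):
        # Sample with replacement so a full batch is always available,
        # even when few fresh activations have arrived.
        return [random.choice(self.buffer) for _ in range(batch_size)]

buf = ActivationReplayBuffer(capacity=4)
for i in range(6):
    buf.push(f"act_{i}", i % 2)

batch = buf.sample(3)
print(len(buf.buffer))  # 4: capacity caps storage
print(len(batch))       # 3
```

Because sampling decouples the downstream module's update rate from the upstream module's send rate, each side can proceed at its own pace.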

Image Classification, Quantization

Reducing Representation Drift in Online Continual Learning

1 code implementation 11 Apr 2021 Lucas Caccia, Rahaf Aljundi, Nader Asadi, Tinne Tuytelaars, Joelle Pineau, Eugene Belilovsky

In this work we instead focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream, and new classes must be distinguished from previous ones.

Continual Learning, Metric Learning

Online Learned Continual Compression with Adaptive Quantization Modules

1 code implementation ICML 2020 Lucas Caccia, Eugene Belilovsky, Massimo Caccia, Joelle Pineau

We show how to use discrete auto-encoders to effectively address this challenge and introduce Adaptive Quantization Modules (AQM) to control variation in the compression ability of the module at any given stage of learning.
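The core operation in a discrete auto-encoder is mapping each continuous encoder output to its nearest entry in a learned codebook, so only compact indices need to be stored. The sketch below shows that nearest-codebook step only, with a random codebook; it is not the authors' AQM, which additionally adapts compression over the course of learning.

```python
import numpy as np

def quantize(z, codebook):
    """Map each continuous code in z to its nearest codebook entry.
    z: (n, d) encoder outputs; codebook: (k, d) learned entries.
    Returns discrete indices (for compact storage) and the
    quantized vectors (the reconstruction read back from indices)."""
    # Pairwise squared distances between codes and codebook entries.
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)        # (n,) discrete symbols to store
    return idx, codebook[idx]      # dequantized vectors

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 2))  # k=8 entries of dimension 2
z = rng.normal(size=(5, 2))
idx, zq = quantize(z, codebook)
print(idx.shape, zq.shape)  # (5,) (5, 2)
```

Storing the five integer indices instead of the five float vectors is where the memory saving for a replay buffer comes from.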

Continual Learning, Quantization

Online Learned Continual Compression with Stacked Quantization Modules

no code implementations 25 Sep 2019 Lucas Caccia, Eugene Belilovsky, Massimo Caccia, Joelle Pineau

We first replace the episodic memory used in Experience Replay with SQM, leading to significant gains on standard continual learning benchmarks using a fixed memory budget.

Continual Learning, Quantization

Online Continual Learning with Maximally Interfered Retrieval

1 code implementation11 Aug 2019 Rahaf Aljundi, Lucas Caccia, Eugene Belilovsky, Massimo Caccia, Min Lin, Laurent Charlin, Tinne Tuytelaars

Methods based on replay, either generative or from a stored memory, have been shown to be effective approaches for continual learning, matching or exceeding the state of the art in a number of standard benchmarks.
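The retrieval criterion behind maximally interfered retrieval can be sketched on precomputed quantities: evaluate each memory sample's loss before and after a virtual parameter update on the incoming batch, then replay the samples whose loss would increase most. The function below assumes those per-sample losses are already computed; it illustrates the selection step only, not the full training loop.

```python
import numpy as np

def maximally_interfered(loss_before, loss_after, k):
    """Return indices of the k memory samples whose loss would rise
    most if the pending (virtual) update were applied -- the samples
    the update interferes with the most."""
    interference = loss_after - loss_before
    # Stable sort so ties resolve deterministically.
    return np.argsort(-interference, kind="stable")[:k]

loss_before = np.array([0.2, 0.5, 0.1, 0.9])
loss_after  = np.array([0.3, 0.4, 0.8, 1.0])  # after a virtual SGD step
idx = maximally_interfered(loss_before, loss_after, k=2)
print(idx)  # [2 0]
```

Replaying these maximally interfered samples, rather than a uniform draw from memory, targets exactly the knowledge the new update is about to overwrite.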

Continual Learning

Recurrent Value Functions

no code implementations 23 May 2019 Pierre Thodoroff, Nishanth Anand, Lucas Caccia, Doina Precup, Joelle Pineau

Despite recent successes in Reinforcement Learning, value-based methods often suffer from high variance, which hinders performance.

Continuous Control

Deep Generative Modeling of LiDAR Data

1 code implementation 4 Dec 2018 Lucas Caccia, Herke van Hoof, Aaron Courville, Joelle Pineau

In this work, we show that one can adapt deep generative models for this task by unravelling lidar scans into a 2D point map.
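One common way to unravel a lidar scan into a 2D map is a spherical projection: each 3D point is binned by elevation (row) and azimuth (column), and the cell stores its range. The grid shape and the binning below are illustrative assumptions, not the paper's exact preprocessing.

```python
import numpy as np

def scan_to_grid(points, h=4, w=8):
    """Project a lidar point cloud of (x, y, z) rows onto an h x w grid
    by elevation row and azimuth column, storing range per cell.
    A minimal sketch of spherical unravelling; parameters assumed."""
    x, y, z = points.T
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                       # in [-pi, pi]
    elevation = np.arcsin(z / np.maximum(r, 1e-8))   # in [-pi/2, pi/2]
    col = ((azimuth + np.pi) / (2 * np.pi) * w).astype(int) % w
    row = ((elevation + np.pi / 2) / np.pi * h).astype(int).clip(0, h - 1)
    grid = np.zeros((h, w))
    grid[row, col] = r                               # range image
    return grid

pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.5]])
grid = scan_to_grid(pts)
print(grid.shape)  # (4, 8)
```

Once the scan is laid out as a 2D array like this, standard image-based generative architectures can be applied to it directly.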

Point Cloud Generation

Language GANs Falling Short

1 code implementation ICLR 2020 Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, Laurent Charlin

Generating high-quality text with sufficient diversity is essential for a wide range of Natural Language Generation (NLG) tasks.

Text Generation
