Search Results for author: Arthur Douillard

Found 9 papers, 8 papers with code

Foundational Models for Continual Learning: An Empirical Study of Latent Replay

1 code implementation • 30 Apr 2022 • Oleksiy Ostapenko, Timothee Lesort, Pau Rodríguez, Md Rifat Arefin, Arthur Douillard, Irina Rish, Laurent Charlin

Motivated by this, we study the efficacy of pre-trained vision models as a foundation for downstream continual learning (CL) scenarios.

Continual Learning
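
As a rough illustration of the latent-replay idea named in the title (not the paper's code), one can freeze a pre-trained backbone, cache its features for past data, and replay those cached latents while training a small head. Everything below, including the toy backbone and sizes, is an assumption made only for the sketch.

    import torch
    import torch.nn as nn

    # Generic latent-replay sketch (not the paper's implementation): a frozen
    # pre-trained backbone yields latent features; a buffer of past-task
    # latents is replayed alongside the current task's data.
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256)).eval()  # stand-in encoder
    for p in backbone.parameters():
        p.requires_grad_(False)

    head = nn.Linear(256, 10)                      # only the head is trained
    optimizer = torch.optim.SGD(head.parameters(), lr=0.1)
    criterion = nn.CrossEntropyLoss()
    replay_feats, replay_labels = [], []           # latents stored from earlier tasks
    # (after finishing a task, append a subset of feats / labels to these buffers)

    def train_step(images, labels):
        with torch.no_grad():
            feats = backbone(images)               # latents of the current batch
        if replay_feats:                           # mix in stored latents, if any
            feats = torch.cat([feats, torch.stack(replay_feats)])
            labels = torch.cat([labels, torch.stack(replay_labels)])
        loss = criterion(head(feats), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()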

Multi-Head Distillation for Continual Unsupervised Domain Adaptation in Semantic Segmentation

1 code implementation • 25 Apr 2022 • Antoine Saporta, Arthur Douillard, Tuan-Hung Vu, Patrick Pérez, Matthieu Cord

Unsupervised Domain Adaptation (UDA) is a transfer learning task that aims to train a model on an unlabeled target domain by leveraging a labeled source domain.

Continual Learning • Semantic Segmentation • +2

DyTox: Transformers for Continual Learning with DYnamic TOken eXpansion

1 code implementation • 22 Nov 2021 • Arthur Douillard, Alexandre Ramé, Guillaume Couairon, Matthieu Cord

Our strategy scales to a large number of tasks while incurring negligible memory and time overheads, thanks to a strict control of parameter expansion.

class-incremental learning • Incremental Learning
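
Loosely read, the dynamic token expansion named in the title means adding a small learnable task token (and a small classifier) for each new task while sharing the rest of the transformer, which is why parameter growth stays under control. The class below is a generic sketch under that reading, not the DyTox model itself; all dimensions are arbitrary.

    import torch
    import torch.nn as nn

    class TaskTokenExpansion(nn.Module):
        """Generic per-task token expansion sketch (not the DyTox architecture)."""

        def __init__(self, dim=192, depth=2, heads=3):
            super().__init__()
            layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, depth)   # shared across tasks
            self.dim = dim
            self.task_tokens = nn.ParameterList()                # grows by one token per task
            self.classifiers = nn.ModuleList()                   # one small head per task

        def add_task(self, num_new_classes):
            self.task_tokens.append(nn.Parameter(torch.zeros(1, 1, self.dim)))
            self.classifiers.append(nn.Linear(self.dim, num_new_classes))

        def forward(self, patch_tokens):                         # (B, N, dim) patch embeddings
            logits = []
            for token, clf in zip(self.task_tokens, self.classifiers):
                tok = token.expand(patch_tokens.size(0), -1, -1)
                x = self.encoder(torch.cat([tok, patch_tokens], dim=1))
                logits.append(clf(x[:, 0]))                      # read out the task token
            return torch.cat(logits, dim=1)                      # all classes seen so far

In this sketch, each call to add_task adds only one token of size dim plus a small linear head, independent of the shared encoder's size.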

Continuum: Simple Management of Complex Continual Learning Scenarios

1 code implementation • 11 Feb 2021 • Arthur Douillard, Timothée Lesort

Those drifts might cause interference in the trained model, and knowledge learned on previous states of the data distribution might be forgotten.

Continual Learning
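
Continuum is distributed as a Python package; a minimal class-incremental loop in the spirit of its documentation might look like the following. Class and argument names are recalled from memory and may differ between versions, so treat them as assumptions.

    from torch.utils.data import DataLoader

    # Assumed API, recalled from the library's docs; names may vary by version.
    from continuum import ClassIncremental
    from continuum.datasets import MNIST

    dataset = MNIST("path/to/data", train=True, download=True)
    scenario = ClassIncremental(dataset, increment=2)    # e.g. 5 tasks of 2 classes each

    for task_id, taskset in enumerate(scenario):
        loader = DataLoader(taskset, batch_size=32, shuffle=True)
        for x, y, t in loader:                           # images, labels, task ids
            pass                                         # train a model on this task here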

CORE: Color Regression for Multiple Colors Fashion Garments

no code implementations • 6 Oct 2020 • Alexandre Rame, Arthur Douillard, Charles Ollion

The second stage combines a colorname-attention (dependent on the detected color) with an object-attention (dependent on the clothing category) and finally uses them to weight a spatial pooling over the image pixels' RGB values.
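
That second stage can be pictured as multiplying the two attention maps and using the result as weights for pooling pixel RGB values. The function below is a rough sketch of that reading, not the paper's code; the shapes and the combination rule are assumptions.

    import torch

    def attention_weighted_rgb(image, color_attention, object_attention, eps=1e-8):
        # Rough sketch (not the paper's code): combine the two spatial attention
        # maps and use them to weight a pooling of the image's RGB values.
        #   image:            (3, H, W) RGB tensor
        #   color_attention:  (H, W) map driven by the detected color name
        #   object_attention: (H, W) map driven by the clothing category
        # Returns a (3,) attention-weighted mean RGB value.
        weights = color_attention * object_attention          # combine the two maps
        weights = weights / (weights.sum() + eps)             # normalize to sum to 1
        return (image * weights.unsqueeze(0)).flatten(1).sum(dim=1)

    rgb = attention_weighted_rgb(torch.rand(3, 64, 64), torch.rand(64, 64), torch.rand(64, 64))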

Insights from the Future for Continual Learning

1 code implementation • 24 Jun 2020 • Arthur Douillard, Eduardo Valle, Charles Ollion, Thomas Robert, Matthieu Cord

Continual learning aims to learn tasks sequentially, with (often severe) constraints on the storage of old learning samples, without suffering from catastrophic forgetting.

class-incremental learning • Representation Learning • +1

PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning

1 code implementation • ECCV 2020 • Arthur Douillard, Matthieu Cord, Charles Ollion, Thomas Robert, Eduardo Valle

Lifelong learning has attracted much attention, but existing works still struggle to fight catastrophic forgetting and accumulate knowledge over long stretches of incremental learning.

class-incremental learning • Incremental Learning • +1
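
The pooled outputs distillation named in the title can be approximated as follows: pool matched feature maps of the old and new models along each spatial axis, then penalize the distance between the pooled vectors. This is a hedged sketch of that idea, not the official loss; the normalization and weighting choices are assumptions.

    import torch
    import torch.nn.functional as F

    def pooled_distillation(feat_old, feat_new):
        # Sketch of a pooled-outputs distillation term (an approximation of the
        # idea, not the official PODNet loss). feat_old / feat_new are (B, C, H, W)
        # activations from matching layers of the old and new models.
        loss = 0.0
        for dim in (2, 3):                                      # pool over height, then width
            p_old = F.normalize(feat_old.sum(dim=dim).flatten(1), dim=1)
            p_new = F.normalize(feat_new.sum(dim=dim).flatten(1), dim=1)
            loss = loss + (p_old - p_new).norm(dim=1).mean()
        return loss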
