Search Results for author: Arthur Douillard

Found 15 papers, 10 papers with code

PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning

2 code implementations • ECCV 2020 • Arthur Douillard, Matthieu Cord, Charles Ollion, Thomas Robert, Eduardo Valle

Lifelong learning has attracted much attention, but existing works still struggle to fight catastrophic forgetting and accumulate knowledge over long stretches of incremental learning.

Class Incremental Learning • Incremental Learning • +1

Continuum: Simple Management of Complex Continual Learning Scenarios

1 code implementation • 11 Feb 2021 • Arthur Douillard, Timothée Lesort

Such drifts might cause interference in the trained model, and knowledge learned on previous states of the data distribution might be forgotten.

Continual Learning • Management
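
To make the entry concrete, here is a minimal usage sketch of the scenario API the continuum library documents; the class names (ClassIncremental, MNIST) and the (x, y, t) triplet format follow the library's README as I recall it, so treat the details as illustrative rather than authoritative:

```python
# Minimal sketch of a class-incremental scenario with continuum (names per its README).
from torch.utils.data import DataLoader
from continuum import ClassIncremental
from continuum.datasets import MNIST

dataset = MNIST("./data", download=True, train=True)
scenario = ClassIncremental(dataset, increment=2)  # split 10 classes into 5 tasks of 2

for task_id, taskset in enumerate(scenario):
    loader = DataLoader(taskset, batch_size=32, shuffle=True)
    for x, y, t in loader:  # continuum yields (input, label, task id) triplets
        pass                # train on the current task here
```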

Insights from the Future for Continual Learning

1 code implementation • 24 Jun 2020 • Arthur Douillard, Eduardo Valle, Charles Ollion, Thomas Robert, Matthieu Cord

Continual learning aims to learn tasks sequentially, with (often severe) constraints on the storage of old learning samples, without suffering from catastrophic forgetting.

Class Incremental Learning • Representation Learning • +1

Continual Learning with Foundation Models: An Empirical Study of Latent Replay

1 code implementation • 30 Apr 2022 • Oleksiy Ostapenko, Timothee Lesort, Pau Rodríguez, Md Rifat Arefin, Arthur Douillard, Irina Rish, Laurent Charlin

Motivated by this, we study the efficacy of pre-trained vision models as a foundation for downstream continual learning (CL) scenarios.

Benchmarking • Continual Learning
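
The title's "latent replay" refers to storing features from a frozen pre-trained backbone and replaying those features, rather than raw images, when learning new tasks. Below is a minimal sketch of that pattern under stated assumptions: the stand-in backbone, buffer policy, and head are hypothetical, not the paper's code.

```python
# Hedged sketch of latent replay: replay stored features, not images.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a frozen pre-trained vision backbone (hypothetical; the paper
# studies large pre-trained models).
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
for p in backbone.parameters():
    p.requires_grad = False

head = nn.Linear(128, 10)  # small trainable probe on top of frozen features
opt = torch.optim.SGD(head.parameters(), lr=0.1)
buffer_x, buffer_y = [], []  # latent replay buffer: stores features, not images

def train_task(images, labels, replay_size=64):
    with torch.no_grad():                # compute latents once; backbone stays frozen
        feats = backbone(images)
    train_x, train_y = feats, labels
    if buffer_x:                         # mix in replayed latents from past tasks
        train_x = torch.cat([train_x, torch.stack(buffer_x)])
        train_y = torch.cat([train_y, torch.stack(buffer_y)])
    loss = nn.functional.cross_entropy(head(train_x), train_y)
    opt.zero_grad(); loss.backward(); opt.step()
    buffer_x.extend(feats[:replay_size])   # keep a few current-task latents for later
    buffer_y.extend(labels[:replay_size])

for _ in range(3):  # three toy "tasks"
    train_task(torch.randn(128, 3, 32, 32), torch.randint(0, 10, (128,)))
```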

Asynchronous Local-SGD Training for Language Modeling

1 code implementation • 17 Jan 2024 • Bo Liu, Rachita Chhaparia, Arthur Douillard, Satyen Kale, Andrei A. Rusu, Jiajun Shen, Arthur Szlam, Marc'Aurelio Ranzato

Local stochastic gradient descent (Local-SGD), also referred to as federated averaging, is an approach to distributed optimization where each device performs more than one SGD update per communication.

Distributed Optimization • Language Modelling
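
The snippet above fully specifies the communication pattern, so a toy sketch may help: each worker takes several local SGD steps from the same starting point, then parameters are averaged once per round. Everything below (the model, shard construction, hyperparameters) is illustrative, not the paper's setup.

```python
# Single-process simulation of Local-SGD / federated averaging.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
global_model = nn.Linear(10, 1)

def local_sgd_round(global_model, worker_data, local_steps=4, lr=0.01):
    worker_params = []
    for x, y in worker_data:                 # one (x, y) shard per worker
        model = copy.deepcopy(global_model)  # each worker starts from the global weights
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(local_steps):         # more than one update per communication
            loss = nn.functional.mse_loss(model(x), y)
            opt.zero_grad(); loss.backward(); opt.step()
        worker_params.append([p.detach() for p in model.parameters()])
    # Communicate once per round: average the workers' parameters.
    with torch.no_grad():
        for i, p in enumerate(global_model.parameters()):
            p.copy_(torch.stack([w[i] for w in worker_params]).mean(0))

shards = [(torch.randn(32, 10), torch.randn(32, 1)) for _ in range(4)]  # 4 workers
for _ in range(10):                          # 10 communication rounds
    local_sgd_round(global_model, shards)
```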

Multi-Head Distillation for Continual Unsupervised Domain Adaptation in Semantic Segmentation

1 code implementation • 25 Apr 2022 • Antoine Saporta, Arthur Douillard, Tuan-Hung Vu, Patrick Pérez, Matthieu Cord

Unsupervised Domain Adaptation (UDA) is a transfer learning task that aims to train on an unlabeled target domain by leveraging a labeled source domain.

Continual Learning • Semantic Segmentation • +2

CoRe: Color Regression for Multicolor Fashion Garments

no code implementations • 6 Oct 2020 • Alexandre Rame, Arthur Douillard, Charles Ollion

For this reason, in addition to a first color classifier, our newly proposed architecture includes a second regression stage for refinement.

Regression
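
The snippet describes a two-stage head: a color classifier followed by a regression stage that refines the prediction. Here is a hedged sketch of one way to wire such a head; the layer sizes and the way the stages are combined are assumptions, not CoRe's actual architecture.

```python
# Hypothetical classify-then-refine color head (illustrative, not CoRe's code).
import torch
import torch.nn as nn

class ClassifyThenRegress(nn.Module):
    def __init__(self, feat_dim=256, n_color_classes=20):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, n_color_classes)  # stage 1: coarse color class
        # Stage 2 refines within the predicted class: features + class probs -> RGB.
        self.regressor = nn.Sequential(
            nn.Linear(feat_dim + n_color_classes, 64),
            nn.ReLU(),
            nn.Linear(64, 3),                                   # refined (R, G, B) value
        )

    def forward(self, feats):
        logits = self.classifier(feats)
        probs = logits.softmax(dim=-1)
        rgb = self.regressor(torch.cat([feats, probs], dim=-1))
        return logits, rgb

head = ClassifyThenRegress()
logits, rgb = head(torch.randn(8, 256))  # 8 garment image features (illustrative)
```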

Towards Compute-Optimal Transfer Learning

no code implementations • 25 Apr 2023 • Massimo Caccia, Alexandre Galashov, Arthur Douillard, Amal Rannen-Triki, Dushyant Rao, Michela Paganini, Laurent Charlin, Marc'Aurelio Ranzato, Razvan Pascanu

The field of transfer learning is undergoing a significant shift with the introduction of large pretrained models which have demonstrated strong adaptability to a variety of downstream tasks.

Computational Efficiency • Continual Learning • +1

DiLoCo: Distributed Low-Communication Training of Language Models

no code implementations • 14 Nov 2023 • Arthur Douillard, Qixuan Feng, Andrei A. Rusu, Rachita Chhaparia, Yani Donchev, Adhiguna Kuncoro, Marc'Aurelio Ranzato, Arthur Szlam, Jiajun Shen

In this work, we propose a distributed optimization algorithm, Distributed Low-Communication (DiLoCo), that enables training of language models on islands of devices that are poorly connected.

Distributed Optimization
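
DiLoCo follows a Local-SGD-style pattern adapted to poorly connected islands: many inner optimizer steps per island, then one rare outer step on the averaged parameter delta treated as a pseudo-gradient. The specific choices below (AdamW for the inner steps, Nesterov momentum for the outer step) follow the paper's reported setup as I understand it; everything else in this single-process toy sketch is illustrative.

```python
# Toy sketch of the DiLoCo inner/outer optimization pattern (illustrative).
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
global_model = nn.Linear(10, 1)
# Outer optimizer acts on the global weights, driven by the averaged delta.
outer_opt = torch.optim.SGD(global_model.parameters(), lr=0.7,
                            momentum=0.9, nesterov=True)

def diloco_round(shards, inner_steps=50, inner_lr=1e-3):
    deltas = []
    for x, y in shards:                      # one data shard per island
        model = copy.deepcopy(global_model)
        inner_opt = torch.optim.AdamW(model.parameters(), lr=inner_lr)
        for _ in range(inner_steps):         # many local steps, no communication
            loss = nn.functional.mse_loss(model(x), y)
            inner_opt.zero_grad(); loss.backward(); inner_opt.step()
        deltas.append([g.detach() - p.detach()
                       for g, p in zip(global_model.parameters(),
                                       model.parameters())])
    # One communication: average deltas and treat them as an outer pseudo-gradient.
    outer_opt.zero_grad()
    for i, p in enumerate(global_model.parameters()):
        p.grad = torch.stack([d[i] for d in deltas]).mean(0)
    outer_opt.step()                         # moves global weights toward the islands

shards = [(torch.randn(64, 10), torch.randn(64, 1)) for _ in range(4)]
for _ in range(5):                           # 5 rare communication rounds
    diloco_round(shards)
```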
