Search Results for author: Matteo Boschini

Found 8 papers, 7 with code

CaSpeR: Latent Spectral Regularization for Continual Learning

no code implementations • 9 Jan 2023 • Emanuele Frascaroli, Riccardo Benaglia, Matteo Boschini, Luca Moschella, Cosimo Fiorini, Emanuele Rodolà, Simone Calderara

While biological intelligence grows organically as new knowledge is gathered throughout life, Artificial Neural Networks forget catastrophically whenever they face a changing training data distribution.

Continual Learning

On the Effectiveness of Lipschitz-Driven Rehearsal in Continual Learning

1 code implementation • 12 Oct 2022 • Lorenzo Bonicelli, Matteo Boschini, Angelo Porrello, Concetto Spampinato, Simone Calderara

By means of extensive experiments, we show that applying LiDER delivers a stable performance gain to several state-of-the-art rehearsal CL methods across multiple datasets, both in the presence and absence of pre-training.

Continual Learning

Effects of Auxiliary Knowledge on Continual Learning

1 code implementation • 3 Jun 2022 • Giovanni Bellitto, Matteo Pennisi, Simone Palazzo, Lorenzo Bonicelli, Matteo Boschini, Simone Calderara, Concetto Spampinato

In this paper we propose a new, simple CL algorithm that focuses on solving the current task in a way that might facilitate the learning of the next ones.

Continual Learning · Image Classification

Continual Semi-Supervised Learning through Contrastive Interpolation Consistency

1 code implementation • 14 Aug 2021 • Matteo Boschini, Pietro Buzzega, Lorenzo Bonicelli, Angelo Porrello, Simone Calderara

This work explores Continual Semi-Supervised Learning (CSSL): here, only a small fraction of labeled input examples are shown to the learner.

Continual Learning · Metric Learning

Rethinking Experience Replay: a Bag of Tricks for Continual Learning

2 code implementations • 12 Oct 2020 • Pietro Buzzega, Matteo Boschini, Angelo Porrello, Simone Calderara

In Continual Learning, a Neural Network is trained on a stream of data whose distribution shifts over time.

Continual Learning
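Rehearsal methods such as Experience Replay address this distribution shift by keeping a small memory of past examples and interleaving them with the incoming stream. Below is a minimal sketch of the reservoir-sampling buffer commonly used for this purpose; the class and method names are illustrative and not taken from the paper's code.

```python
import random

class ReservoirBuffer:
    """Fixed-size rehearsal memory updated via reservoir sampling, so every
    example seen so far has equal probability of being retained."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            # Fill the buffer until it reaches capacity
            self.data.append(example)
        else:
            # Overwrite a stored item with probability capacity / n_seen
            idx = self.rng.randrange(self.n_seen)
            if idx < self.capacity:
                self.data[idx] = example

    def sample(self, batch_size):
        # Draw a rehearsal mini-batch (without replacement) from memory
        k = min(batch_size, len(self.data))
        return self.rng.sample(self.data, k)

# Stream 1000 examples through a buffer of capacity 50
buf = ReservoirBuffer(capacity=50)
for x in range(1000):
    buf.add(x)
print(len(buf.data))  # 50
```

During training, a rehearsal batch from `sample()` is typically concatenated with the current stream batch before each gradient step, which is the basic recipe the "bag of tricks" in this paper builds on.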

Dark Experience for General Continual Learning: a Strong, Simple Baseline

2 code implementations • NeurIPS 2020 • Pietro Buzzega, Matteo Boschini, Angelo Porrello, Davide Abati, Simone Calderara

Continual Learning has inspired a plethora of approaches and evaluation settings; however, the majority of them overlooks the properties of a practical scenario, where the data stream cannot be shaped as a sequence of tasks and offline training is not viable.

Continual Learning · Knowledge Distillation
