Search Results for author: Andrea Cossu

Found 21 papers, 15 papers with code

Avalanche: A PyTorch Library for Deep Continual Learning

1 code implementation • 2 Feb 2023 • Antonio Carta, Lorenzo Pellegrini, Andrea Cossu, Hamed Hemati, Vincenzo Lomonaco

Continual learning is the problem of learning from a nonstationary stream of data, a fundamental issue for sustainable and efficient training of deep neural networks over time.

Class Incremental Learning

A Comprehensive Empirical Evaluation on Online Continual Learning

2 code implementations • 20 Aug 2023 • Albin Soutif--Cormerais, Antonio Carta, Andrea Cossu, Julio Hurtado, Hamed Hemati, Vincenzo Lomonaco, Joost Van de Weijer

Online continual learning aims to get closer to a live learning experience by learning directly on a stream of data with temporally shifting distribution and by storing a minimum amount of data from that stream.

Class Incremental Learning • Image Classification

Distilled Replay: Overcoming Forgetting through Synthetic Samples

2 code implementations • 29 Mar 2021 • Andrea Rosasco, Antonio Carta, Andrea Cossu, Vincenzo Lomonaco, Davide Bacciu

Replay strategies are Continual Learning techniques which mitigate catastrophic forgetting by keeping a buffer of patterns from previous experiences, which are interleaved with new data during training.

Continual Learning
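The generic replay mechanism described in the abstract above, a buffer of patterns from previous experiences interleaved with new data during training, can be sketched as follows. This is a minimal reservoir-sampling buffer for illustration only; the class name and API are assumptions, not the Distilled Replay implementation, which synthesizes (distills) the buffered samples rather than storing raw ones:

```python
import random

class ReplayBuffer:
    """Minimal sketch of a rehearsal buffer for continual learning.
    Reservoir sampling keeps an (approximately) uniform subset of the
    stream within a fixed memory budget."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0  # total number of samples observed so far

    def add(self, sample):
        # Reservoir sampling: each of the `seen` samples ends up in the
        # buffer with probability capacity / seen.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.buffer[idx] = sample

    def sample(self, k):
        # Draw old patterns to interleave with the current batch.
        return random.sample(self.buffer, min(k, len(self.buffer)))
```

A training loop would call `add` on each incoming sample and mix `sample(k)` into every new mini-batch, which is the basic forgetting-mitigation recipe the abstract refers to.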

Sample Condensation in Online Continual Learning

1 code implementation • 23 Jun 2022 • Mattia Sangermano, Antonio Carta, Andrea Cossu, Davide Bacciu

A popular solution in these scenarios is to use a small memory to retain old data and rehearse it over time.

Continual Learning

Calibration of Continual Learning Models

2 code implementations • 11 Apr 2024 • Lanpei Li, Elia Piccoli, Andrea Cossu, Davide Bacciu, Vincenzo Lomonaco

Continual Learning (CL) focuses on maximizing the predictive performance of a model across a non-stationary stream of data.

Continual Learning

Continual Pre-Training Mitigates Forgetting in Language and Vision

1 code implementation • 19 May 2022 • Andrea Cossu, Tinne Tuytelaars, Antonio Carta, Lucia Passaro, Vincenzo Lomonaco, Davide Bacciu

We formalize and investigate the characteristics of the continual pre-training scenario in both language and vision environments, where a model is continually pre-trained on a stream of incoming data and only later fine-tuned to different downstream tasks.

Continual Learning • Continual Pretraining
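The two-phase protocol described in the abstract above, continual pre-training on a stream followed by later fine-tuning on a downstream task, can be sketched with a toy model. The "backbone" here is just a running mean and the "head" a least-squares slope; these stand-ins and all names are illustrative assumptions, not the paper's actual architectures:

```python
class ToyBackbone:
    """Toy backbone whose 'representation' is the input centered by a
    running mean accumulated over the pre-training stream."""

    def __init__(self):
        self.mean = 0.0
        self.count = 0

    def pretrain_step(self, x):
        # Incremental (Welford-style) update of the running mean.
        self.count += 1
        self.mean += (x - self.mean) / self.count

    def features(self, x):
        return x - self.mean  # frozen representation after pre-training


def continual_pretrain(backbone, stream):
    # Phase 1: the model sees experiences one after another,
    # never revisiting earlier data.
    for experience in stream:
        for x in experience:
            backbone.pretrain_step(x)
    return backbone


def finetune(backbone, labeled_data):
    # Phase 2: only later, fit a linear head on the frozen features
    # for a specific downstream task (least-squares slope through origin).
    num = sum(backbone.features(x) * y for x, y in labeled_data)
    den = sum(backbone.features(x) ** 2 for x, _ in labeled_data)
    return num / den
```

The point of the sketch is the separation of concerns: pre-training consumes the non-stationary stream without any downstream labels, and fine-tuning touches only the downstream data, which is the scenario the paper formalizes.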

Continual Learning with Gated Incremental Memories for sequential data processing

1 code implementation • 8 Apr 2020 • Andrea Cossu, Antonio Carta, Davide Bacciu

The ability to learn in dynamic, nonstationary environments without forgetting previous knowledge, also known as Continual Learning (CL), is a key enabler for scalable and trustworthy deployments of adaptive solutions.

Continual Learning • Reinforcement Learning (RL)

Continual Learning with Echo State Networks

1 code implementation • 17 May 2021 • Andrea Cossu, Davide Bacciu, Antonio Carta, Claudio Gallicchio, Vincenzo Lomonaco

Continual Learning (CL) refers to a learning setup where data is non-stationary and the model has to learn without forgetting existing knowledge.

Continual Learning

Ex-Model: Continual Learning from a Stream of Trained Models

1 code implementation • 13 Dec 2021 • Antonio Carta, Andrea Cossu, Vincenzo Lomonaco, Davide Bacciu

Learning continually from non-stationary data streams is a challenging research topic of growing popularity in the last few years.

Continual Learning

Class-Incremental Learning with Repetition

1 code implementation • 26 Jan 2023 • Hamed Hemati, Andrea Cossu, Antonio Carta, Julio Hurtado, Lorenzo Pellegrini, Davide Bacciu, Vincenzo Lomonaco, Damian Borth

We propose two stochastic stream generators that produce a wide range of CIR streams starting from a single dataset and a few interpretable control parameters.

Class Incremental Learning • Incremental Learning

A Protocol for Continual Explanation of SHAP

1 code implementation • 12 Jun 2023 • Andrea Cossu, Francesco Spinnato, Riccardo Guidotti, Davide Bacciu

Continual Learning trains models on a stream of data, with the aim of learning new information without forgetting previous knowledge.

Continual Learning

Continual Learning for Human State Monitoring

1 code implementation • 29 Jun 2022 • Federico Matteoni, Andrea Cossu, Claudio Gallicchio, Vincenzo Lomonaco, Davide Bacciu

Continual Learning (CL) on time series data represents a promising but under-studied avenue for real-world applications.

Continual Learning • Time Series +1

Continual Learning for Recurrent Neural Networks: an Empirical Evaluation

no code implementations • 12 Mar 2021 • Andrea Cossu, Antonio Carta, Vincenzo Lomonaco, Davide Bacciu

We propose two new benchmarks for CL with sequential data based on existing datasets, whose characteristics resemble real-world applications.

Continual Learning

Sustainable Artificial Intelligence through Continual Learning

no code implementations • 17 Nov 2021 • Andrea Cossu, Marta Ziosi, Vincenzo Lomonaco

The increasing attention on Artificial Intelligence (AI) regulation has led to the definition of a set of ethical principles grouped into the Sustainable AI framework.

Continual Learning

Practical Recommendations for Replay-based Continual Learning Methods

no code implementations • 19 Mar 2022 • Gabriele Merlin, Vincenzo Lomonaco, Andrea Cossu, Antonio Carta, Davide Bacciu

Continual Learning requires the model to learn from a stream of dynamic, non-stationary data without forgetting previous knowledge.

Continual Learning • Data Augmentation

Projected Latent Distillation for Data-Agnostic Consolidation in Distributed Continual Learning

1 code implementation • 28 Mar 2023 • Antonio Carta, Andrea Cossu, Vincenzo Lomonaco, Davide Bacciu, Joost Van de Weijer

We formalize this problem as a Distributed Continual Learning scenario, where SCD adapt to local tasks and a CL model consolidates the knowledge from the resulting stream of models without looking at the SCD's private data.

Continual Learning • Knowledge Distillation

MultiSTOP: Solving Functional Equations with Reinforcement Learning

no code implementations • 23 Apr 2024 • Alessandro Trenta, Davide Bacciu, Andrea Cossu, Pietro Ferrero

We develop MultiSTOP, a Reinforcement Learning framework for solving functional equations in physics.
