1 code implementation • 2 Feb 2023 • Antonio Carta, Lorenzo Pellegrini, Andrea Cossu, Hamed Hemati, Vincenzo Lomonaco
Continual learning is the problem of learning from a nonstationary stream of data, a fundamental issue for sustainable and efficient training of deep neural networks over time.
no code implementations • 26 Jan 2023 • Hamed Hemati, Andrea Cossu, Antonio Carta, Julio Hurtado, Lorenzo Pellegrini, Davide Bacciu, Vincenzo Lomonaco, Damian Borth
We focus on the family of Class-Incremental with Repetition (CIR) scenarios, where repetition is embedded in the definition of the stream.
no code implementations • 13 Dec 2022 • Lorenzo Pellegrini, Chenchen Zhu, Fanyi Xiao, Zhicheng Yan, Antonio Carta, Matthias De Lange, Vincenzo Lomonaco, Roshan Sumbaly, Pau Rodriguez, David Vazquez
Continual Learning, also known as Lifelong or Incremental Learning, has recently gained renewed interest among the Artificial Intelligence research community.
1 code implementation • 23 Jun 2022 • Mattia Sangermano, Antonio Carta, Andrea Cossu, Davide Bacciu
A popular solution in these scenarios is to use a small memory to retain old data and rehearse it over time.
1 code implementation • 19 May 2022 • Andrea Cossu, Tinne Tuytelaars, Antonio Carta, Lucia Passaro, Vincenzo Lomonaco, Davide Bacciu
We formalize and investigate the characteristics of the continual pre-training scenario in both language and vision environments, where a model is continually pre-trained on a stream of incoming data and only later fine-tuned to different downstream tasks.
no code implementations • 19 Mar 2022 • Gabriele Merlin, Vincenzo Lomonaco, Andrea Cossu, Antonio Carta, Davide Bacciu
Continual Learning requires the model to learn from a stream of dynamic, non-stationary data without forgetting previous knowledge.
1 code implementation • 28 Feb 2022 • Nicolò Lucchesi, Antonio Carta, Vincenzo Lomonaco, Davide Bacciu
Continual Reinforcement Learning (CRL) is a challenging setting where an agent learns to interact with an environment that is constantly changing over time (the stream of experiences).
no code implementations • 3 Feb 2022 • Valerio De Caro, Saira Bano, Achilles Machumilane, Alberto Gotta, Pietro Cassará, Antonio Carta, Rudy Semola, Christos Sardianos, Christos Chronis, Iraklis Varlamis, Konstantinos Tserpes, Vincenzo Lomonaco, Claudio Gallicchio, Davide Bacciu
This paper presents a proof-of-concept implementation of the AI-as-a-Service toolkit developed within the H2020 TEACHING project. The toolkit is designed to implement an autonomous driving personalization system driven by the output of an automatic driver's stress recognition algorithm, the two together realizing a Cyber-Physical System of Systems.
1 code implementation • 13 Dec 2021 • Antonio Carta, Andrea Cossu, Vincenzo Lomonaco, Davide Bacciu
Learning continually from non-stationary data streams is a challenging research topic that has grown in popularity in recent years.
no code implementations • 6 Dec 2021 • Andrea Cossu, Gabriele Graffieti, Lorenzo Pellegrini, Davide Maltoni, Davide Bacciu, Antonio Carta, Vincenzo Lomonaco
The ability of a model to learn continually can be empirically assessed in different continual learning scenarios.
no code implementations • 14 Jul 2021 • Davide Bacciu, Siranush Akarmazyan, Eric Armengaud, Manlio Bacco, George Bravos, Calogero Calandra, Emanuele Carlini, Antonio Carta, Pietro Cassara, Massimo Coppola, Charalampos Davalas, Patrizio Dazzi, Maria Carmela Degennaro, Daniele Di Sarli, Jürgen Dobaj, Claudio Gallicchio, Sylvain Girbal, Alberto Gotta, Riccardo Groppo, Vincenzo Lomonaco, Georg Macher, Daniele Mazzei, Gabriele Mencagli, Dimitrios Michail, Alessio Micheli, Roberta Peroglio, Salvatore Petroni, Rosaria Potenza, Farank Pourdanesh, Christos Sardianos, Konstantinos Tserpes, Fulvio Tagliabò, Jakob Valtl, Iraklis Varlamis, Omar Veledar
This paper discusses the perspective of the H2020 TEACHING project on the next generation of autonomous applications running in a distributed and highly heterogeneous environment comprising both virtual and physical resources spanning the edge-cloud continuum.
1 code implementation • 17 May 2021 • Andrea Cossu, Davide Bacciu, Antonio Carta, Claudio Gallicchio, Vincenzo Lomonaco
Continual Learning (CL) refers to a learning setup where data is non-stationary and the model has to learn without forgetting existing knowledge.
4 code implementations • 1 Apr 2021 • Vincenzo Lomonaco, Lorenzo Pellegrini, Andrea Cossu, Antonio Carta, Gabriele Graffieti, Tyler L. Hayes, Matthias De Lange, Marc Masana, Jary Pomponi, Gido van de Ven, Martin Mundt, Qi She, Keiland Cooper, Jeremy Forest, Eden Belouadah, Simone Calderara, German I. Parisi, Fabio Cuzzolin, Andreas Tolias, Simone Scardapane, Luca Antiga, Subutai Ahmad, Adrian Popescu, Christopher Kanan, Joost Van de Weijer, Tinne Tuytelaars, Davide Bacciu, Davide Maltoni
Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning.
2 code implementations • 29 Mar 2021 • Andrea Rosasco, Antonio Carta, Andrea Cossu, Vincenzo Lomonaco, Davide Bacciu
Replay strategies are Continual Learning techniques that mitigate catastrophic forgetting by keeping a buffer of patterns from previous experiences and interleaving them with new data during training.
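The buffer-and-interleave mechanism described above can be sketched as follows. This is a generic illustration, not the paper's specific method: it uses reservoir sampling to keep a fixed-size, uniform sample of the stream, and all names (`ReplayBuffer`, `make_batch`) are hypothetical.

```python
import random

class ReplayBuffer:
    """Fixed-size memory of past patterns, filled via reservoir sampling."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, pattern):
        # Reservoir sampling: every stream item ends up stored
        # with equal probability, regardless of arrival time.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(pattern)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = pattern

    def make_batch(self, new_batch, n_replay):
        # Interleave the incoming data with rehearsed old patterns.
        n = min(n_replay, len(self.buffer))
        return list(new_batch) + self.rng.sample(self.buffer, n)

buf = ReplayBuffer(capacity=3)
for x in range(10):                      # toy stream of 10 "patterns"
    buf.add(x)
batch = buf.make_batch([100, 101], n_replay=2)
```

In a real continual-learning loop, `batch` would be fed to the optimizer in place of the purely new mini-batch, so gradients reflect both old and new data.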
1 code implementation • 22 Mar 2021 • Antonio Carta, Andrea Cossu, Federico Errica, Davide Bacciu
In this work, we study the phenomenon of catastrophic forgetting in the graph representation learning scenario.
no code implementations • 12 Mar 2021 • Andrea Cossu, Antonio Carta, Vincenzo Lomonaco, Davide Bacciu
We propose two new benchmarks for CL with sequential data based on existing datasets, whose characteristics resemble real-world applications.
1 code implementation • 5 Nov 2020 • Antonio Carta, Alessandro Sperduti, Davide Bacciu
Training RNNs to learn long-term dependencies is difficult due to vanishing gradients.
1 code implementation • 29 Jun 2020 • Antonio Carta, Alessandro Sperduti, Davide Bacciu
The effectiveness of recurrent neural networks can be largely influenced by their ability to store into their dynamical memory information extracted from input sequences at different frequencies and timescales.
1 code implementation • 8 Apr 2020 • Andrea Cossu, Antonio Carta, Davide Bacciu
The ability to learn in dynamic, nonstationary environments without forgetting previous knowledge, also known as Continual Learning (CL), is a key enabler for scalable and trustworthy deployments of adaptive solutions.
no code implementations • 31 Jan 2020 • Antonio Carta, Alessandro Sperduti, Davide Bacciu
The experimental results on synthetic and real-world datasets show that specializing the training algorithm for the memorization component improves final performance whenever memorizing long sequences is necessary to solve the problem.
2 code implementations • 15 Jan 2020 • Andrea Valenti, Antonio Carta, Davide Bacciu
Throughout the paper, we show how Gaussian mixtures that take music metadata into account can be used as an effective prior for the autoencoder latent space, introducing the first Music Adversarial Autoencoder (MusAE).
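The idea of a metadata-conditioned Gaussian-mixture prior can be sketched as below. This is only an illustration of the concept, not MusAE itself: the labels, latent dimensionality, and component means are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 2

# One mixture component per metadata label (values are illustrative):
# the metadata selects which Gaussian the latent code is drawn from.
component_means = {
    "rock": np.array([2.0, 0.0]),
    "jazz": np.array([-2.0, 0.0]),
}

def sample_prior(label, n, std=0.5):
    """Draw n latent codes from the component tied to a metadata label."""
    return component_means[label] + std * rng.standard_normal((n, latent_dim))

z = sample_prior("rock", 1000)
```

In an adversarial autoencoder, samples like `z` would serve as the "real" examples for the discriminator, pushing the encoder's output distribution toward the metadata-dependent mixture.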
no code implementations • 25 Sep 2019 • Antonio Carta, Alessandro Sperduti, Davide Bacciu
We propose an initialization schema that sets the weights of a recurrent architecture to approximate a linear autoencoder of the input sequences, which can be found with a closed-form solution.
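The closed-form flavor of this idea can be illustrated with the static case: the optimal rank-k linear autoencoder of a data matrix comes directly from its SVD. This is only the static analogue of the sequential construction the paper uses, and the variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))   # 50 samples, 8 features
k = 3

# Closed-form optimal rank-k linear autoencoder via SVD:
# encode with the top-k right singular vectors, decode with their transpose.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
A = Vt[:k]             # encoder (k x 8); decoder is A.T
H = X @ A.T            # latent codes
X_hat = H @ A          # reconstruction
err = np.linalg.norm(X - X_hat)
```

By the Eckart–Young theorem, `err` equals the energy of the discarded singular values, so no other rank-k linear map reconstructs `X` better.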
no code implementations • 8 Nov 2018 • Davide Bacciu, Antonio Carta, Alessandro Sperduti
By building on such conceptualization, we introduce the Linear Memory Network, a recurrent model comprising a feedforward neural network, realizing the non-linear functional transformation, and a linear autoencoder for sequences, implementing the memory component.
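The split between a nonlinear functional component and a linear memory component can be sketched as a single recurrence step. The weight names and dimensions here are assumptions for illustration, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)
in_dim, h_dim, m_dim = 4, 8, 6

# Illustrative weight matrices (randomly initialized for the sketch).
Wxh = 0.1 * rng.standard_normal((h_dim, in_dim))
Wmh = 0.1 * rng.standard_normal((h_dim, m_dim))
Whm = 0.1 * rng.standard_normal((m_dim, h_dim))
Wmm = 0.1 * rng.standard_normal((m_dim, m_dim))

def lmn_step(x, m_prev):
    # Functional component: feedforward and nonlinear.
    h = np.tanh(Wxh @ x + Wmh @ m_prev)
    # Memory component: strictly linear update of the memory state.
    m = Whm @ h + Wmm @ m_prev
    return h, m

# Run the recurrence over a toy sequence.
m = np.zeros(m_dim)
for t in range(5):
    x = rng.standard_normal(in_dim)
    h, m = lmn_step(x, m)
```

Keeping the memory update linear is what allows it to be analyzed, and initialized, as a linear autoencoder for sequences, while the nonlinearity stays confined to the feedforward part.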
no code implementations • SEMEVAL 2017 • Giuseppe Attardi, Antonio Carta, Federico Errica, Andrea Madotto, Ludovica Pannitto
In this paper we present ThReeNN, a model for SemEval-2017 Task 3 on Community Question Answering.