no code implementations • 12 Mar 2024 • Davide Maltoni, Lorenzo Pellegrini
TPC (Three-Phase Consolidation) is introduced here as a simple yet effective approach to continually learning new classes (and/or new instances of known classes) while controlling forgetting of previously acquired knowledge.
no code implementations • 18 Jan 2024 • Lorenzo Vorabbi, Davide Maltoni, Guido Borghi, Stefano Santi
On-device learning remains a formidable challenge, especially when dealing with resource-constrained devices that have limited computational capabilities.
no code implementations • 29 Aug 2023 • Lorenzo Vorabbi, Davide Maltoni, Stefano Santi
Existing Continual Learning (CL) solutions only partially address the power, memory, and computation constraints that deep learning models face when deployed on low-power embedded CPUs.
no code implementations • 2 Aug 2023 • Davide Maltoni, Matteo Ferrara
A better understanding of the emergent computation and problem-solving capabilities of recent large language models is of paramount importance to further improve them and broaden their applicability.
no code implementations • 27 Jul 2023 • Lorenzo Pellegrini, Guido Borghi, Annalisa Franco, Davide Maltoni
Scenarios in which restrictions on data transfer and storage prevent composing a single dataset -- possibly drawing on different data sources -- for batch-based training make the development of robust models particularly challenging.
no code implementations • 4 May 2023 • Lorenzo Vorabbi, Davide Maltoni, Stefano Santi
Binary Neural Networks (BNNs) use 1-bit weights and activations to efficiently execute deep convolutional neural networks on edge devices.
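The efficiency of BNNs comes from replacing floating-point multiply-accumulate with bitwise operations. A minimal illustrative sketch (not the paper's implementation): the binary dot product computed via XNOR and popcount, the core primitive of BNN inference on edge devices.

```python
def binarize(xs):
    """Map real values to {+1, -1} via the sign function."""
    return [1 if x >= 0 else -1 for x in xs]

def binary_dot(a_bits, b_bits):
    """Dot product of two {+1, -1} vectors via XNOR-popcount.
    Encoding +1 as bit 1 and -1 as bit 0, the identity is:
    dot = 2 * popcount(XNOR(a, b)) - n."""
    n = len(a_bits)
    a = sum((1 << i) for i, v in enumerate(a_bits) if v == 1)
    b = sum((1 << i) for i, v in enumerate(b_bits) if v == 1)
    mask = (1 << n) - 1
    xnor = ~(a ^ b) & mask
    return 2 * bin(xnor).count("1") - n

x = binarize([0.3, -1.2, 0.7, -0.1])   # -> [1, -1, 1, -1]
w = binarize([0.5, 0.4, -0.9, -0.2])   # -> [1, 1, -1, -1]
assert binary_dot(x, w) == sum(xi * wi for xi, wi in zip(x, w))
```

On real hardware the XNOR-popcount runs over machine words, so one instruction replaces dozens of floating-point multiplies.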
1 code implementation • 1 Mar 2023 • Matteo Scucchia, Davide Maltoni
In topological SLAM, recognition takes place by comparing a signature (or feature vector) associated with the current node against the signatures of the nodes in the known map.
Loop Closure Detection • Simultaneous Localization and Mapping
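The signature comparison described above can be sketched as follows. This is an illustrative assumption, not the paper's method: loop-closure candidate selection by scoring the current node's signature against every stored map signature with cosine similarity, accepting the best match only above a threshold.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def best_match(current_sig, map_sigs, threshold=0.9):
    """Return (node_id, score) of the most similar map node,
    or None if no node clears the acceptance threshold."""
    node_id = max(map_sigs, key=lambda k: cosine(current_sig, map_sigs[k]))
    score = cosine(current_sig, map_sigs[node_id])
    return (node_id, score) if score >= threshold else None

map_sigs = {"n0": [1.0, 0.0, 0.2], "n1": [0.1, 0.9, 0.3]}
match = best_match([0.95, 0.05, 0.25], map_sigs)  # -> ("n0", score ≈ 0.997)
```

The threshold (here a hypothetical 0.9) controls the precision/recall trade-off of loop-closure detection: too low and distinct places are merged, too high and genuine revisits are missed.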
no code implementations • 9 Jan 2023 • Guido Borghi, Gabriele Graffieti, Davide Maltoni
In real-world contexts, data are sometimes available in the form of Natural Data Streams, i.e., data characterized by a streaming nature, an unbalanced distribution, drift over long time frames, and strong correlation of samples within short time ranges.
no code implementations • 6 Jan 2023 • Vincenzo Lomonaco, Lorenzo Pellegrini, Gabriele Graffieti, Davide Maltoni
In recent years we have witnessed a renewed interest in machine learning methodologies, especially for deep representation learning, that could overcome basic i.i.d.
1 code implementation • 28 Apr 2022 • Matteo Ferrara, Annalisa Franco, Davide Maltoni, Christoph Busch
In security systems, risk assessment in the sense of Common Criteria testing is a highly relevant topic; it requires quantifying the attack potential in terms of the attacker's expertise, their knowledge of the target, and their access to equipment.
no code implementations • 12 Apr 2022 • Gabriele Graffieti, Davide Maltoni, Lorenzo Pellegrini, Vincenzo Lomonaco
Learning continually is a key aspect of intelligence and a necessary ability to solve many real-life problems.
no code implementations • 6 Dec 2021 • Andrea Cossu, Gabriele Graffieti, Lorenzo Pellegrini, Davide Maltoni, Davide Bacciu, Antonio Carta, Vincenzo Lomonaco
The ability of a model to learn continually can be empirically assessed in different continual learning scenarios.
1 code implementation • 24 May 2021 • Lorenzo Pellegrini, Vincenzo Lomonaco, Gabriele Graffieti, Davide Maltoni
On-device training for personalized learning is a challenging research problem.
4 code implementations • 1 Apr 2021 • Vincenzo Lomonaco, Lorenzo Pellegrini, Andrea Cossu, Antonio Carta, Gabriele Graffieti, Tyler L. Hayes, Matthias De Lange, Marc Masana, Jary Pomponi, Gido van de Ven, Martin Mundt, Qi She, Keiland Cooper, Jeremy Forest, Eden Belouadah, Simone Calderara, German I. Parisi, Fabio Cuzzolin, Andreas Tolias, Simone Scardapane, Luca Antiga, Subutai Ahmad, Adrian Popescu, Christopher Kanan, Joost Van de Weijer, Tinne Tuytelaars, Davide Bacciu, Davide Maltoni
Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning.
1 code implementation • 14 Sep 2020 • Vincenzo Lomonaco, Lorenzo Pellegrini, Pau Rodriguez, Massimo Caccia, Qi She, Yu Chen, Quentin Jodelet, Ruiping Wang, Zheda Mai, David Vazquez, German I. Parisi, Nikhil Churamani, Marc Pickett, Issam Laradji, Davide Maltoni
In the last few years, we have witnessed a renewed and fast-growing interest in continual learning with deep neural networks with the shared objective of making current AI systems more adaptive, efficient and autonomous.
no code implementations • 11 Jun 2020 • Kiran Raja, Matteo Ferrara, Annalisa Franco, Luuk Spreeuwers, Illias Batskos, Florens de Wit, Marta Gomez-Barrero, Ulrich Scherhag, Daniel Fischer, Sushma Venkatesh, Jag Mohan Singh, Guoqiang Li, Loïc Bergeron, Sergey Isadskiy, Raghavendra Ramachandra, Christian Rathgeb, Dinusha Frings, Uwe Seidel, Fons Knopjes, Raymond Veldhuis, Davide Maltoni, Christoph Busch
Further, we present a new online evaluation platform to test algorithms on sequestered data.
no code implementations • 26 Apr 2020 • Qi She, Fan Feng, Qi Liu, Rosa H. M. Chan, Xinyue Hao, Chuanlin Lan, Qihan Yang, Vincenzo Lomonaco, German I. Parisi, Heechul Bae, Eoin Brophy, Baoquan Chen, Gabriele Graffieti, Vidit Goel, Hyonyoung Han, Sathursan Kanagarajah, Somesh Kumar, Siew-Kei Lam, Tin Lun Lam, Liang Ma, Davide Maltoni, Lorenzo Pellegrini, Duvindu Piyasena, ShiLiang Pu, Debdoot Sheet, Soonyong Song, Youngsung Son, Zhengwei Wang, Tomas E. Ward, Jianwen Wu, Meiqing Wu, Di Xie, Yangsheng Xu, Lin Yang, Qiaoyong Zhong, Liguang Zhou
This report summarizes the IROS 2019 Lifelong Robotic Vision Competition (Lifelong Object Recognition Challenge) with methods and results from the top 8 finalists (out of over 150 teams).
3 code implementations • 2 Dec 2019 • Lorenzo Pellegrini, Gabriele Graffieti, Vincenzo Lomonaco, Davide Maltoni
Continual learning techniques, in which complex models are incrementally trained on small batches of new data, can make the learning problem tractable even for CPU-only embedded devices, enabling remarkable levels of adaptiveness and autonomy.
5 code implementations • 8 Jul 2019 • Vincenzo Lomonaco, Davide Maltoni, Lorenzo Pellegrini
Ideally, continual learning should be triggered by the availability of short videos of single objects and performed online on on-board hardware with fine-grained updates.
no code implementations • 29 Jun 2019 • Timothée Lesort, Vincenzo Lomonaco, Andrei Stoian, Davide Maltoni, David Filliat, Natalia Díaz-Rodríguez
An important challenge for machine learning is not necessarily finding solutions that work in the real world, but rather finding stable algorithms that can learn in the real world.
1 code implementation • 24 May 2019 • Vincenzo Lomonaco, Karan Desai, Eugenio Culurciello, Davide Maltoni
High-dimensional always-changing environments constitute a hard challenge for current reinforcement learning techniques.
no code implementations • 25 Jan 2019 • Matteo Ferrara, Annalisa Franco, Davide Maltoni
Face morphing nowadays represents a serious security threat in the context of electronic identity documents, as well as an interesting challenge for researchers in the field of face recognition.
no code implementations • 31 Oct 2018 • Natalia Díaz-Rodríguez, Vincenzo Lomonaco, David Filliat, Davide Maltoni
Continual learning consists of algorithms that learn from a stream of data/tasks continuously and adaptively through time, enabling the incremental development of ever more complex knowledge and skills.
1 code implementation • 22 Jun 2018 • Davide Maltoni, Vincenzo Lomonaco
It was recently shown that architectural, regularization and rehearsal strategies can be used to train deep models sequentially on a number of disjoint tasks without forgetting previously acquired knowledge.
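Of the strategy families mentioned above, rehearsal is the most direct to illustrate. A hedged sketch (one common realization, not necessarily the paper's): a replay buffer filled by reservoir sampling, so that every sample from the stream is retained with equal probability and each training step can mix stored past samples into the current batch.

```python
import random

class RehearsalBuffer:
    """Fixed-capacity replay buffer filled via reservoir sampling."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.seen = 0
        self.store = []
        self.rng = random.Random(seed)

    def add(self, sample):
        """After n samples, each one is kept with probability capacity / n."""
        self.seen += 1
        if len(self.store) < self.capacity:
            self.store.append(sample)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.store[j] = sample

    def sample(self, k):
        """Draw up to k stored samples to mix with the current batch."""
        return self.rng.sample(self.store, min(k, len(self.store)))

buf = RehearsalBuffer(capacity=10)
for i in range(100):          # 100 samples stream by, only 10 are kept
    buf.add(i)
replay = buf.sample(5)        # interleave these with new data when training
```

Architectural and regularization strategies avoid storing raw data altogether, which matters when memory or privacy constraints rule out a buffer.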
1 code implementation • 9 May 2017 • Vincenzo Lomonaco, Davide Maltoni
Continuous/Lifelong learning of high-dimensional data streams is a challenging research problem.
1 code implementation • 10 Nov 2015 • Davide Maltoni, Vincenzo Lomonaco
Recent works demonstrated the usefulness of temporal coherence to regularize supervised training or to learn invariant features with deep architectures.
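One simple form such a temporal-coherence term can take (an illustrative assumption, not the paper's exact formulation) is a penalty on the squared distance between the representations of consecutive frames, added to the supervised loss so that smoothly varying inputs yield smoothly varying features.

```python
def temporal_coherence_loss(features):
    """Mean squared distance between consecutive feature vectors.
    Small values mean the representation changes slowly over time."""
    total, count = 0.0, 0
    for prev, cur in zip(features, features[1:]):
        total += sum((p - c) ** 2 for p, c in zip(prev, cur))
        count += 1
    return total / count if count else 0.0

# Features extracted from three consecutive video frames (toy values):
frames = [[0.0, 1.0], [0.1, 0.9], [0.2, 0.8]]
loss = temporal_coherence_loss(frames)  # small, since features vary smoothly
```

In training, this term would be weighted and summed with the supervised loss; the weight trades label fitting against temporal smoothness of the learned features.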