no code implementations • 5 Oct 2017 • Luiza Mici, German I. Parisi, Stefan Wermter
We show that our unsupervised model achieves competitive classification results on the benchmark dataset compared to strictly supervised approaches.
no code implementations • 22 Dec 2017 • Luiza Mici, German I. Parisi, Stefan Wermter
During visuomotor tasks, robots must compensate for temporal delays inherent in their sensorimotor processing systems.
no code implementations • 23 Jan 2018 • Pablo Barros, German I. Parisi, Di Fu, Xun Liu, Stefan Wermter
The human brain is able to learn, generalize, and predict crossmodal stimuli.
no code implementations • 19 Feb 2018 • Jonathan Tong, German I. Parisi, Stefan Wermter, Brigitte Röder
Furthermore, we propose that these unisensory and multisensory neurons play dual roles in i) encoding spatial location as separate or integrated estimates and ii) accumulating evidence for the independence or relatedness of multisensory stimuli.
no code implementations • 21 Feb 2018 • German I. Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan, Stefan Wermter
Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan.
1 code implementation • 28 May 2018 • German I. Parisi, Jun Tani, Cornelius Weber, Stefan Wermter
Both growing networks can expand in response to novel sensory experience: the episodic memory learns fine-grained spatiotemporal representations of object instances in an unsupervised fashion while the semantic memory uses task-relevant signals to regulate structural plasticity levels and develop more compact representations from episodic experience.
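The growing networks described above expand when the current input is too novel for any existing node to represent. As a loose, hypothetical sketch (not the authors' actual model; the threshold, learning rate, and activity function here are illustrative), a Growing-When-Required-style network adds a node whenever no best-matching unit responds strongly enough to the input, and otherwise adapts the winner toward it:

```python
import math
import random

class GrowingNetwork:
    """Heavily simplified Growing-When-Required-style network:
    a node is inserted whenever no existing node matches the input
    well enough, so the network expands with novel sensory input."""

    def __init__(self, dim, activity_threshold=0.8, lr=0.2):
        # start with two random nodes, as in typical growing networks
        self.nodes = [[random.random() for _ in range(dim)] for _ in range(2)]
        self.a_t = activity_threshold
        self.lr = lr

    def _dist(self, a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def present(self, x):
        # best-matching unit (BMU) and its activity for this input
        bmu = min(self.nodes, key=lambda n: self._dist(n, x))
        activity = math.exp(-self._dist(bmu, x))
        if activity < self.a_t:
            # novelty: grow a new node between the BMU and the input
            self.nodes.append([(w + xi) / 2 for w, xi in zip(bmu, x)])
        else:
            # familiar input: adapt the BMU toward it
            for i in range(len(bmu)):
                bmu[i] += self.lr * (x[i] - bmu[i])

random.seed(0)
net = GrowingNetwork(dim=2)
for _ in range(50):
    net.present([random.random(), random.random()])  # familiar region
for _ in range(10):
    net.present([5.0 + random.random(), 5.0])        # novel region: growth
```

Presenting inputs from a previously unseen region of the input space triggers node insertion rather than overwriting existing nodes, which is the basic mechanism that lets such memories expand in response to novel experience.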
no code implementations • 13 Jul 2018 • German I. Parisi, Jonathan Tong, Pablo Barros, Brigitte Röder, Stefan Wermter
In the associative layer, congruent audiovisual representations are obtained via the experience-driven development of feature-based associations.
no code implementations • 26 Jul 2018 • Francisco Cruz, German I. Parisi, Stefan Wermter
Additionally, we modulate the influence of sensory-driven feedback in the IRL task using goal-oriented knowledge in terms of contextual affordances.
no code implementations • 27 Sep 2018 • Pablo Barros, German I. Parisi, Manfred Eppe, Stefan Wermter
The model adapts concepts of expectation learning to enhance the unisensory representation based on the learned bindings.
no code implementations • 15 Oct 2018 • Di Fu, Pablo Barros, German I. Parisi, Haiyan Wu, Sven Magg, Xun Liu, Stefan Wermter
The efficient integration of multisensory observations is a key property of the brain that enables robust interaction with the environment.
no code implementations • 6 Nov 2018 • German I. Parisi, Xu Ji, Stefan Wermter
Lifelong learning capabilities are crucial for artificial autonomous agents operating on real-world data, which is typically non-stationary and temporally correlated.
no code implementations • 23 Apr 2019 • Pablo Barros, German I. Parisi, Stefan Wermter
Recent models of emotion recognition rely strongly on supervised deep learning solutions to distinguish general emotion expressions.
no code implementations • 2 Jul 2019 • German I. Parisi, Christopher Kanan
Continual learning refers to the ability of a biological or artificial system to seamlessly learn from continuous streams of information while preventing catastrophic forgetting, i.e., a condition in which new incoming information strongly interferes with previously learned representations.
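The interference behind catastrophic forgetting can be reproduced with a deliberately tiny toy model (a hypothetical illustration, not an experiment from the paper): a single linear weight trained with plain SGD on one task, then on a conflicting one, loses what it learned first because the new gradients simply overwrite the shared parameter.

```python
def train(w, data, lr=0.1, epochs=100):
    """Plain SGD on squared error for a one-weight linear model y = w * x."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(1.0, 2.0), (2.0, 4.0)]    # consistent with w = 2
task_b = [(1.0, -1.0), (2.0, -2.0)]  # consistent with w = -1

w = train(0.0, task_a)
err_a_before = mse(w, task_a)  # Task A learned: error near zero
w = train(w, task_b)           # sequential training on Task B...
err_a_after = mse(w, task_a)   # ...overwrites Task A: error grows back
```

Because both tasks share the same parameter and training is strictly sequential, error on Task A is near zero after the first phase and large after the second, which is the forgetting condition continual learning methods are designed to prevent.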
no code implementations • 4 Jan 2020 • German I. Parisi
In this chapter, I introduce a set of hierarchical models for learning and recognizing actions from depth maps and RGB images using neural network self-organization.
no code implementations • 20 Mar 2020 • German I. Parisi, Vincenzo Lomonaco
Online continual learning (OCL) refers to the ability of a system to learn over time from a continuous stream of data without having to revisit previously encountered training samples.
no code implementations • 26 Apr 2020 • Qi She, Fan Feng, Qi Liu, Rosa H. M. Chan, Xinyue Hao, Chuanlin Lan, Qihan Yang, Vincenzo Lomonaco, German I. Parisi, Heechul Bae, Eoin Brophy, Baoquan Chen, Gabriele Graffieti, Vidit Goel, Hyonyoung Han, Sathursan Kanagarajah, Somesh Kumar, Siew-Kei Lam, Tin Lun Lam, Liang Ma, Davide Maltoni, Lorenzo Pellegrini, Duvindu Piyasena, ShiLiang Pu, Debdoot Sheet, Soonyong Song, Youngsung Son, Zhengwei Wang, Tomas E. Ward, Jianwen Wu, Meiqing Wu, Di Xie, Yangsheng Xu, Lin Yang, Qiaoyong Zhong, Liguang Zhou
This report summarizes the IROS 2019 Lifelong Robotic Vision Competition (Lifelong Object Recognition Challenge) with methods and results from the top 8 finalists (out of over 150 teams).
1 code implementation • 14 Sep 2020 • Vincenzo Lomonaco, Lorenzo Pellegrini, Pau Rodriguez, Massimo Caccia, Qi She, Yu Chen, Quentin Jodelet, Ruiping Wang, Zheda Mai, David Vazquez, German I. Parisi, Nikhil Churamani, Marc Pickett, Issam Laradji, Davide Maltoni
In the last few years, we have witnessed a renewed and fast-growing interest in continual learning with deep neural networks with the shared objective of making current AI systems more adaptive, efficient and autonomous.
4 code implementations • 1 Apr 2021 • Vincenzo Lomonaco, Lorenzo Pellegrini, Andrea Cossu, Antonio Carta, Gabriele Graffieti, Tyler L. Hayes, Matthias De Lange, Marc Masana, Jary Pomponi, Gido van de Ven, Martin Mundt, Qi She, Keiland Cooper, Jeremy Forest, Eden Belouadah, Simone Calderara, German I. Parisi, Fabio Cuzzolin, Andreas Tolias, Simone Scardapane, Luca Antiga, Subutai Ahmad, Adrian Popescu, Christopher Kanan, Joost Van de Weijer, Tinne Tuytelaars, Davide Bacciu, Davide Maltoni
Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning.
no code implementations • 10 May 2023 • Nikhil Churamani, Tolga Dimlioglu, German I. Parisi, Hatice Gunes
Understanding human affective behaviour, especially in the dynamics of real-world settings, requires Facial Expression Recognition (FER) models to continuously adapt to individual differences in user expression, contextual attributions, and the environment.