Changes in goals or data are referred to as new tasks in a continual learning model.
Catastrophic forgetting notoriously impedes the sequential learning of neural networks, since data from previous tasks are unavailable.
Temporal event segmentation of a long video into coherent events requires a high-level understanding of activities' temporal features.
Explaining the behaviors of deep neural networks, usually considered black boxes, is critical, especially as they are adopted across diverse aspects of human life.
Recently, fine-tuning pre-trained cross-lingual models (e.g., multilingual BERT) on downstream cross-lingual tasks has shown promising results.
This report summarizes the IROS 2019 Lifelong Robotic Vision Competition (Lifelong Object Recognition Challenge), with methods and results from the top $8$ finalists (out of over~$150$ teams).
However, it is more reliable to preserve the knowledge the model has learned from previous tasks.
Our extensive experiments on three large-scale datasets, using two different architectures across five different continual learning methods, reveal that normalised cross-entropy and synthetic transfer lead to less forgetting in existing techniques.
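For context on the preceding claim, one common formulation of normalised cross-entropy divides the standard per-sample cross-entropy by its sum over all candidate labels; this is a hedged sketch of that formulation, and the experiments above may use a different variant:
\[
\mathcal{L}_{\mathrm{NCE}}(x, y) \;=\; \frac{-\log p(y \mid x)}{-\sum_{k=1}^{K} \log p(k \mid x)},
\]
where $K$ is the number of classes, $y$ the ground-truth label, and $p(k \mid x)$ the model's softmax probability for class $k$ given input $x$. The denominator bounds the loss and damps the influence of overconfident, potentially noisy targets.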
One of the great promises of robot learning systems is that they will be able to learn from their mistakes and continuously adapt to ever-changing environments.