
# Continual Learning

74 papers with code · Methodology


# Continual Learning Using Task Conditional Neural Networks

8 May 2020

The changes in goals or data are referred to as new tasks in a continual learning model.

# Generative Feature Replay with Orthogonal Weight Modification for Continual Learning

7 May 2020

Catastrophic forgetting notoriously impedes the sequential learning of neural networks as the data of previous tasks are unavailable.
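Catastrophic forgetting is easy to reproduce in miniature. The toy sketch below (not the paper's method, just an illustration of the problem it addresses) trains a linear model on task A, then on a conflicting task B with no access to task A's data; the task A error, once near zero, grows large after training on B:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(w_true):
    # Synthetic linear-regression task: y = X @ w_true + noise
    X = rng.normal(size=(200, 2))
    y = X @ w_true + 0.01 * rng.normal(size=200)
    return X, y

def train(w, X, y, lr=0.1, steps=200):
    # Plain gradient descent on mean squared error
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

X_a, y_a = make_task(np.array([1.0, -1.0]))   # task A
X_b, y_b = make_task(np.array([-1.0, 1.0]))   # task B, conflicting optimum

w = train(np.zeros(2), X_a, y_a)
loss_a_before = mse(w, X_a, y_a)   # small: w fits task A
w = train(w, X_b, y_b)             # sequential training, no task-A data
loss_a_after = mse(w, X_a, y_a)    # large: task A has been forgotten
```

Replay methods counter exactly this effect by regenerating (rather than storing) features of earlier tasks to mix into later training.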

# Temporal Event Segmentation using Attention-based Perceptual Prediction Model for Continual Learning

5 May 2020

Temporal event segmentation of a long video into coherent events requires a high level understanding of activities' temporal features.

# Explaining How Deep Neural Networks Forget by Deep Visualization

3 May 2020

Explaining the behaviors of deep neural networks, usually treated as black boxes, is critical, especially as they are now being adopted across diverse aspects of human life.

# Importance Driven Continual Learning for Segmentation Across Domains

30 Apr 2020

The ability of neural networks to continuously learn and adapt to new tasks while retaining prior knowledge is crucial for many applications.

# Exploring Fine-tuning Techniques for Pre-trained Cross-lingual Models via Continual Learning

29 Apr 2020

Recently, fine-tuning pre-trained cross-lingual models (e.g., multilingual BERT) on downstream cross-lingual tasks has shown promising results.

# IROS 2019 Lifelong Robotic Vision Challenge -- Lifelong Object Recognition Report

26 Apr 2020

This report summarizes the IROS 2019 Lifelong Robotic Vision Competition (Lifelong Object Recognition Challenge) with methods and results from the top 8 finalists (out of over 150 teams).

# Dropout as an Implicit Gating Mechanism For Continual Learning

24 Apr 2020

It is more reliable, however, to preserve the knowledge the network has learned from previous tasks.
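One minimal reading of dropout as a gating mechanism (a hypothetical sketch, not the paper's implementation) is to assign each task a fixed random binary mask over hidden units, so that each task activates, and therefore mostly updates, its own subset of the network:

```python
import numpy as np

rng = np.random.default_rng(1)

hidden = 8
# Hypothetical per-task gates: a fixed random dropout mask per task,
# so different tasks use (mostly) disjoint subsets of hidden units,
# limiting interference between their updates.
masks = {t: (rng.random(hidden) < 0.5).astype(float)
         for t in ("task_a", "task_b")}

def forward(x, W, task):
    # ReLU hidden layer, gated by the task's fixed mask;
    # gated-off units carry no activation and hence no gradient.
    h = np.maximum(0.0, W @ x)
    return h * masks[task]

W = rng.normal(size=(hidden, 3))
x = rng.normal(size=3)
h_a = forward(x, W, "task_a")
h_b = forward(x, W, "task_b")
```

Standard (random, resampled) dropout can be seen as a stochastic version of the same gate, which is the connection the title suggests.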

# Continual Learning of Object Instances

22 Apr 2020

Our extensive experiments on three large-scale datasets, using two different architectures for five different continual learning methods, reveal that normalised cross-entropy and synthetic transfer lead to less forgetting in existing techniques.
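The abstract does not define its loss, but one common formulation that goes by a similar name (an assumption here, not confirmed by the paper) is cross-entropy over cosine-normalised logits: both the feature and each class weight vector are L2-normalised before the softmax, so the logits depend only on direction, not magnitude:

```python
import numpy as np

def normalised_cross_entropy(feature, weights, label, scale=10.0):
    # Hypothetical sketch of a cosine-normalised classifier loss.
    # L2-normalise the feature and each class weight row, so each
    # logit is a scaled cosine similarity.
    f = feature / np.linalg.norm(feature)
    Wn = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    logits = scale * (Wn @ f)
    logits = logits - logits.max()          # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(p[label]))

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 16))   # 5 classes, 16-dim features
x = W[2] * 3.0                 # feature aligned with class 2
loss = normalised_cross_entropy(x, W, label=2)
```

Normalising away magnitude is often argued to reduce the bias toward recently trained classes, which would plausibly explain the reduced forgetting reported.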

# Efficient Adaptation for End-to-End Vision-Based Robotic Manipulation

21 Apr 2020

One of the great promises of robot learning systems is that they will be able to learn from their mistakes and continuously adapt to ever-changing environments.