Progressive Neural Networks

Learning to solve complex sequences of tasks, while both leveraging transfer and avoiding catastrophic forgetting, remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: these networks are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
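The core mechanism can be sketched in a few lines: when a new task arrives, a new "column" of layers is added, earlier columns are frozen, and the new column receives lateral connections from the frozen columns' hidden activations. The sketch below is a minimal two-column, two-layer illustration in numpy; the layer sizes, initialization scale, and class/parameter names (`ProgressiveNet`, `U12`, etc.) are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class ProgressiveNet:
    """Minimal two-column progressive network sketch (names are illustrative).

    Column 1 is trained on task 1 and then frozen; column 2 is added for
    task 2 and receives a lateral connection from column 1's hidden layer,
    so prior features transfer without ever being overwritten.
    """

    def __init__(self, in_dim, hid, out_dim, rng):
        s = 0.1  # illustrative initialization scale
        # Column 1 (task 1): frozen after task-1 training.
        self.W1a = rng.standard_normal((in_dim, hid)) * s
        self.W1b = rng.standard_normal((hid, out_dim)) * s
        # Column 2 (task 2): the only trainable parameters for task 2.
        self.W2a = rng.standard_normal((in_dim, hid)) * s
        self.W2b = rng.standard_normal((hid, out_dim)) * s
        # Lateral connection: column 1's hidden features feed column 2's output.
        self.U12 = rng.standard_normal((hid, out_dim)) * s

    def forward_task1(self, x):
        # Task 1 uses only column 1; freezing it keeps this output fixed,
        # which is what makes the architecture immune to forgetting.
        return relu(x @ self.W1a) @ self.W1b

    def forward_task2(self, x):
        h1 = relu(x @ self.W1a)   # frozen column-1 features (no gradient)
        h2 = relu(x @ self.W2a)   # new column-2 features
        # Column 2 combines its own features with lateral input from column 1.
        return h2 @ self.W2b + h1 @ self.U12
```

Because task-2 gradients would only touch `W2a`, `W2b`, and `U12`, task-1 behavior is preserved exactly; the cost is that parameters grow with each new column.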

| Task | Dataset | Model | Accuracy | Global Rank |
|---|---|---|---|---|
| Continual Learning | CUBS (Fine-grained 6 Tasks) | ProgressiveNet | 78.94 | #6 |
| Continual Learning | Flowers (Fine-grained 6 Tasks) | ProgressiveNet | 93.41 | #5 |
| Continual Learning | ImageNet (Fine-grained 6 Tasks) | ProgressiveNet | 76.16 | #1 |
| Continual Learning | Sketch (Fine-grained 6 Tasks) | ProgressiveNet | 76.35 | #4 |
| Continual Learning | Stanford Cars (Fine-grained 6 Tasks) | ProgressiveNet | 89.21 | #5 |
| Continual Learning | Wikiart (Fine-grained 6 Tasks) | ProgressiveNet | 74.94 | #4 |
