Continual Learning of Context-dependent Processing in Neural Networks

29 Sep 2018 · Guanxiong Zeng, Yang Chen, Bo Cui, Shan Yu

Deep neural networks (DNNs) are powerful tools for learning sophisticated but fixed mappings between inputs and outputs, which limits their use in more complex and dynamic situations where the mapping rules are not fixed but change with context. To lift this limitation, we developed a novel approach comprising a learning algorithm, called orthogonal weights modification (OWM), together with a context-dependent processing (CDP) module. We demonstrated that, with OWM to overcome catastrophic forgetting and the CDP module to learn how to reuse a feature representation and a classifier across contexts, a single network can acquire numerous context-dependent mapping rules in an online and continual manner, with as few as $\sim$10 samples to learn each. This should enable highly compact systems to gradually learn myriad regularities of the real world and eventually behave appropriately within it.
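For concreteness: the core of OWM is a projector P that confines each weight update to the subspace orthogonal to the inputs of previously learned tasks, so new learning does not overwrite old input-output mappings. Below is a minimal NumPy sketch of this projected update for a single fully connected layer; the layer shapes, learning rate, regularizer alpha, per-example projector update, and all names are illustrative assumptions, not the paper's exact implementation.

import numpy as np

class OWMLayer:
    """Single fully connected layer with an OWM-style projected update.

    P approximates a projector onto the orthogonal complement of the
    subspace spanned by inputs seen so far; backprop gradients are
    multiplied by P so new learning leaves responses to earlier inputs
    (nearly) unchanged. Hyperparameters and names here are assumptions.
    """

    def __init__(self, n_in, n_out, alpha=1e-3, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.1 * rng.standard_normal((n_in, n_out))
        self.P = np.eye(n_in)   # projector, updated recursively (RLS-like)
        self.alpha = alpha
        self.lr = lr

    def forward(self, x):
        # x: input vector of shape (n_in,); returns the pre-activation output
        return x @ self.W

    def train_step(self, x, grad_out):
        # grad_out: dLoss/d(output), shape (n_out,). The outer product is the
        # ordinary backprop weight gradient, which is then projected by P.
        bp_grad = np.outer(x, grad_out)
        self.W -= self.lr * (self.P @ bp_grad)
        # Recursive update of the projector with the new input direction,
        # analogous to the recursive-least-squares form used for OWM.
        Px = self.P @ x
        self.P -= np.outer(Px, Px) / (self.alpha + x @ Px)

Because P shrinks the gradient component along input directions already used, earlier tasks stay protected while remaining orthogonal capacity is spent on new ones; the CDP module sits on top of such layers, modulating a shared feature vector with an encoded context signal so one classifier can serve many contexts.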

Benchmark result:
  Task:        Continual Learning
  Dataset:     ASC (19 tasks)
  Model:       OWM
  Metric:      F1 (macro)
  Value:       0.7931
  Global rank: #5
