Lifelong learning
165 papers with code • 0 benchmarks • 0 datasets
Benchmarks
These leaderboards are used to track progress in Lifelong learning.
Libraries
Use these libraries to find Lifelong learning models and implementations.
Most implemented papers
Three scenarios for continual learning
Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, which makes continual or lifelong learning difficult.
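The paper distinguishes task-incremental, domain-incremental, and class-incremental learning by whether task identity is available at test time and what the model must output. A minimal sketch of that distinction on a Split-MNIST-style task sequence (the helper and all names below are ours, not the paper's code):

```python
# Illustrative only: contrasts the three scenarios on five two-digit tasks.
TASKS = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]

def test_query(scenario, digit, task_id):
    if scenario == "task-IL":
        # Task identity is given at test time: use head `task_id`, 2-way output.
        return {"task_id": task_id, "n_outputs": 2, "target": digit % 2}
    if scenario == "domain-IL":
        # Task identity hidden, but a single shared 2-way output suffices.
        return {"task_id": None, "n_outputs": 2, "target": digit % 2}
    if scenario == "class-IL":
        # Task identity hidden and the model must choose among all classes seen.
        return {"task_id": None, "n_outputs": 10, "target": digit}

for scenario in ("task-IL", "domain-IL", "class-IL"):
    print(scenario, test_query(scenario, digit=3, task_id=1))
```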
Differentiable plasticity: training plastic neural networks with backpropagation
How can we build agents that keep learning from experience, quickly and efficiently, after their initial training?
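The approach gives every connection a fixed weight plus a Hebbian trace whose influence is scaled by a learned per-connection coefficient, and trains all of it with backpropagation. A minimal sketch of one such layer, assuming a simple decaying Hebbian update (class and variable names are ours):

```python
import torch
import torch.nn as nn

class PlasticLayer(nn.Module):
    """Sketch of differentiable plasticity: each connection has a fixed
    weight w and a plastic trace `hebb`, mixed by a learned gain alpha."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.w = nn.Parameter(0.01 * torch.randn(n_in, n_out))
        self.alpha = nn.Parameter(0.01 * torch.randn(n_in, n_out))
        self.eta = nn.Parameter(torch.tensor(0.1))  # plasticity rate

    def forward(self, x, hebb):
        # Effective weight = fixed part + learned gain on the Hebbian trace.
        y = torch.tanh(x @ (self.w + self.alpha * hebb))
        # Decaying Hebbian update; gradients flow through eta and alpha.
        hebb = (1 - self.eta) * hebb + self.eta * torch.bmm(
            x.unsqueeze(2), y.unsqueeze(1)).mean(dim=0)
        return y, hebb

layer = PlasticLayer(4, 3)
hebb = torch.zeros(4, 3)
y, hebb = layer(torch.randn(8, 4), hebb)
print(y.shape, hebb.shape)
```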
Generative replay with feedback connections as a general strategy for continual learning
A major obstacle to developing artificial intelligence applications capable of true lifelong learning is that artificial neural networks quickly or catastrophically forget previously learned tasks when trained on a new one.
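The general strategy pairs the classifier with a generative model: when a new task arrives, pseudo-samples of earlier tasks are drawn from a frozen copy of the generator and labeled by a frozen copy of the classifier, then interleaved with the new data. A training-loop skeleton under those assumptions; the `sample` method and all names are illustrative, and the generator's own training step is omitted:

```python
import copy
import torch
import torch.nn.functional as F

def train_with_generative_replay(model, generator, new_loader, replay_ratio=1.0):
    """Sketch of generative replay for one new task."""
    old_model = copy.deepcopy(model).eval()          # labels the replayed samples
    old_generator = copy.deepcopy(generator).eval()  # produces the replayed samples
    opt = torch.optim.Adam(list(model.parameters()) + list(generator.parameters()))
    for x_new, y_new in new_loader:
        with torch.no_grad():
            x_replay = old_generator.sample(int(replay_ratio * len(x_new)))  # hypothetical API
            y_replay = old_model(x_replay).argmax(dim=1)                     # pseudo-labels
        loss = (F.cross_entropy(model(x_new), y_new)
                + F.cross_entropy(model(x_replay), y_replay))
        # The generator is trained on x_new and x_replay in the same loop
        # (omitted here for brevity).
        opt.zero_grad()
        loss.backward()
        opt.step()
```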
BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning
We also apply BatchEnsemble to lifelong learning, where, on Split-CIFAR-100, BatchEnsemble yields performance comparable to progressive neural networks while having much lower computational and memory costs.
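BatchEnsemble keeps one shared weight matrix and represents each ensemble member (or, in the lifelong setting, each task) by two rank-1 vectors, so the per-member overhead is tiny. A minimal sketch of the layer's forward pass, with names of our choosing:

```python
import torch
import torch.nn as nn

class BatchEnsembleLinear(nn.Module):
    """Sketch of a BatchEnsemble layer: member i uses W * outer(r_i, s_i),
    so only two rank-1 vectors are stored per member; in the lifelong
    setting, each new task trains only its own r and s."""
    def __init__(self, n_in, n_out, n_members):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(n_in, n_out))  # shared slow weight
        self.r = nn.Parameter(torch.ones(n_members, n_in))      # per-member fast weights
        self.s = nn.Parameter(torch.ones(n_members, n_out))

    def forward(self, x, member):
        # Equivalent to x @ (W * outer(r, s)) without forming the full matrix.
        return ((x * self.r[member]) @ self.W) * self.s[member]

layer = BatchEnsembleLinear(8, 4, n_members=3)
print(layer(torch.randn(16, 8), member=0).shape)  # torch.Size([16, 4])
```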
Learning to Continually Learn
Continual lifelong learning requires an agent or model to learn many sequentially ordered tasks, building on previous knowledge without catastrophically forgetting it.
Meta-Learning through Hebbian Plasticity in Random Networks
We find that, starting from completely random weights, the discovered Hebbian rules enable an agent to navigate a dynamic 2D pixel environment; likewise, they allow a simulated 3D quadrupedal robot to learn to walk, adapting to morphological damage not seen during training, in fewer than 100 timesteps and without any explicit reward or error signal.
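Here evolution searches over per-connection Hebbian coefficients (an "ABCD" rule) rather than over the weights themselves, which start random and are shaped only by local activity. A NumPy sketch of one such update, with illustrative names:

```python
import numpy as np

def hebbian_step(w, pre, post, A, B, C, D, eta):
    """One weight update from an evolved ABCD Hebbian rule: every
    connection has its own coefficients, found by evolution; the
    weights are never trained directly."""
    dw = eta * (A * np.outer(pre, post)   # correlation term
                + B * pre[:, None]        # presynaptic term
                + C * post[None, :]       # postsynaptic term
                + D)                      # constant drift
    return w + dw

rng = np.random.default_rng(0)
n_in, n_out = 5, 3
w = rng.standard_normal((n_in, n_out))   # random initial weights
A, B, C, D = (rng.standard_normal((n_in, n_out)) for _ in range(4))
pre = rng.standard_normal(n_in)
post = np.tanh(pre @ w)                  # layer activation
w = hebbian_step(w, pre, post, A, B, C, D, eta=0.01)
print(w.shape)
```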
Lifelong Learning with Dynamically Expandable Networks
We propose a novel deep network architecture for lifelong learning, the Dynamically Expandable Network (DEN), which can dynamically decide its network capacity as it trains on a sequence of tasks, learning a compact, overlapping knowledge-sharing structure among tasks.
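One ingredient of such methods is widening a layer when a new task cannot be fit with the existing capacity; DEN combines this with selective retraining and unit splitting/duplication, which the sketch below omits. A minimal, hypothetical expansion helper:

```python
import torch
import torch.nn as nn

def expand_linear(layer, k):
    """Widen a Linear layer by k new output units, copying the old
    weights so the previously learned function is preserved; new
    units start at zero so they are shaped only by the new task."""
    old_out, n_in = layer.weight.shape
    wider = nn.Linear(n_in, old_out + k)
    with torch.no_grad():
        wider.weight[:old_out] = layer.weight  # keep old units intact
        wider.bias[:old_out] = layer.bias
        wider.weight[old_out:].zero_()         # new units start neutral
        wider.bias[old_out:].zero_()
    return wider

layer = nn.Linear(8, 4)
layer = expand_linear(layer, k=2)  # e.g. when new-task loss stays high
print(layer.weight.shape)          # torch.Size([6, 8])
```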
Memory Aware Synapses: Learning what (not) to forget
We show state-of-the-art performance and, for the first time, the ability to adapt the importance of the parameters based on unlabeled data towards what the network needs (not) to forget, which may vary depending on test conditions.
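The importance weights are estimated from unlabeled data as the average gradient magnitude of the squared L2 norm of the network output; important parameters are then anchored with a quadratic penalty when learning the next task. A sketch under those definitions (function names are ours):

```python
import torch

def mas_importance(model, unlabeled_loader):
    """Sketch of Memory Aware Synapses importance estimation: only
    unlabeled inputs are needed, since the signal is the sensitivity
    of the output norm to each parameter, averaged over batches."""
    omega = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for x in unlabeled_loader:
        model.zero_grad()
        model(x).pow(2).sum().backward()  # d ||f(x)||^2 / d theta
        for n, p in model.named_parameters():
            if p.grad is not None:
                omega[n] += p.grad.abs()
        n_batches += 1
    return {n: w / max(n_batches, 1) for n, w in omega.items()}

def mas_penalty(model, omega, old_params, lam=1.0):
    """Quadratic penalty keeping important parameters near old values."""
    return lam * sum((omega[n] * (p - old_params[n]).pow(2)).sum()
                     for n, p in model.named_parameters())
```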
Efficient Lifelong Learning with A-GEM
In lifelong learning, the learner is presented with a sequence of tasks, incrementally building a data-driven prior which may be leveraged to speed up learning of a new task.
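A-GEM stores a small episodic memory of past tasks and, at each step, treats the average loss on a memory batch as a single constraint: if the proposed gradient would increase that loss, it is projected so the conflict disappears. The core projection, sketched with illustrative names:

```python
import torch

def agem_project(grad, grad_ref):
    """Core A-GEM update: if the current-task gradient has a negative
    dot product with the memory-batch gradient (it would increase the
    memory loss), project it onto the constraint boundary. Both inputs
    are flattened gradient vectors."""
    dot = torch.dot(grad, grad_ref)
    if dot < 0:
        grad = grad - (dot / torch.dot(grad_ref, grad_ref)) * grad_ref
    return grad

g = torch.tensor([1.0, -2.0])
g_ref = torch.tensor([0.5, 1.0])  # gradient on memory examples
print(agem_project(g, g_ref))     # dot was -1.5 < 0, so g is projected
```

After projection the updated gradient is orthogonal to the memory gradient, so to first order the memory loss no longer increases.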
Automatically Optimized Gradient Boosting Trees for Classifying Large Volume High Cardinality Data Streams Under Concept Drift
Data abundance, combined with a scarcity of machine learning experts and domain specialists, necessitates progressive automation of end-to-end machine learning workflows.