Split-MNIST
7 papers with code • 0 benchmarks • 0 datasets
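Split-MNIST is the standard continual-learning protocol that partitions MNIST into five sequential two-class tasks (0/1, 2/3, 4/5, 6/7, 8/9). Below is a minimal sketch of how the splits are typically built with torchvision; the function name and defaults are illustrative, not taken from any of the papers listed here.

```python
import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

# The canonical Split-MNIST partition: five binary tasks over digit pairs.
TASK_CLASSES = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]

def make_split_mnist(root="./data", train=True):
    """Return a list of five datasets, one per Split-MNIST task."""
    ds = datasets.MNIST(root, train=train, download=True,
                        transform=transforms.ToTensor())
    tasks = []
    for classes in TASK_CLASSES:
        # Keep only the indices whose label belongs to this task's pair.
        idx = [i for i, y in enumerate(ds.targets.tolist()) if y in classes]
        tasks.append(Subset(ds, idx))
    return tasks  # presented to the learner one task at a time
```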
Most implemented papers
Improving and Understanding Variational Continual Learning
In the continual learning setting, tasks are encountered sequentially.
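To make "tasks are encountered sequentially" concrete: the model is trained on one Split-MNIST task at a time and never revisits earlier data. The loop below is a generic illustration of that setting, with `model`, `train_sequentially`, and the hyperparameters chosen as assumptions; it is not the paper's variational method.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader

def train_sequentially(model, tasks, epochs=1, lr=1e-3):
    """Naive sequential training: each task is seen once, in order."""
    loss_fn = nn.CrossEntropyLoss()
    for task in tasks:                          # tasks arrive one after another
        opt = optim.Adam(model.parameters(), lr=lr)
        loader = DataLoader(task, batch_size=64, shuffle=True)
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                # Flatten images for a simple MLP-style classifier.
                loss_fn(model(x.view(x.size(0), -1)), y).backward()
                opt.step()
```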
SpaceNet: Make Free Space For Continual Learning
Regularization-based methods maintain a fixed model capacity; however, previous studies have shown that these methods suffer severe performance degradation when the task identity is not available during inference (e.g., the class-incremental learning scenario).
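The distinction the excerpt draws can be made concrete at prediction time: with task identity (task-incremental learning), predictions are restricted to that task's classes; in class-incremental learning, the model must choose among all classes seen so far. The helper names below are hypothetical.

```python
import torch

def predict_task_il(logits, task_classes):
    """Task-incremental: mask out classes from other tasks before argmax."""
    mask = torch.full_like(logits, float("-inf"))
    mask[:, list(task_classes)] = 0.0
    return (logits + mask).argmax(dim=1)

def predict_class_il(logits):
    """Class-incremental: no task identity, so all classes compete."""
    return logits.argmax(dim=1)
```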
Learning Invariant Representation for Continual Learning
Finally, we analyze the role of the shared invariant representation in mitigating the forgetting problem, especially when the number of replayed samples for each previous task is small.
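For context, here is a minimal replay-buffer sketch matching the excerpt's setting of keeping only a few samples per previous task; the class name, buffer size, and uniform sampling scheme are assumptions for illustration, not the paper's mechanism.

```python
import random

class TinyReplayBuffer:
    """Store a handful of examples per finished task for later replay."""

    def __init__(self, per_task=10):
        self.per_task = per_task
        self.store = []  # (x, y) pairs drawn from earlier tasks

    def add_task(self, dataset):
        # Reservoir of a few random samples from the task just completed.
        idx = random.sample(range(len(dataset)), self.per_task)
        self.store.extend(dataset[i] for i in idx)

    def sample(self, k):
        # Mix these into current-task batches to mitigate forgetting.
        return random.sample(self.store, min(k, len(self.store)))
```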
Self-Attention Meta-Learner for Continual Learning
In this paper, we propose a new method, named Self-Attention Meta-Learner (SAM), which learns prior knowledge for continual learning that permits learning a sequence of tasks while avoiding catastrophic forgetting.
Mixture-of-Variational-Experts for Continual Learning
A key weakness of machine learning algorithms is the poor ability of models to solve new problems without forgetting previously acquired knowledge.
Negotiated Representations to Prevent Forgetting in Machine Learning Applications
By evaluating our method on these challenging datasets, we aim to showcase its potential for addressing catastrophic forgetting and improving the performance of neural networks in continual learning settings.
Automating Continual Learning
General-purpose learning systems should improve themselves in an open-ended fashion in ever-changing environments.