Permuted-MNIST
13 papers with code • 0 benchmarks • 0 datasets
Benchmarks
These leaderboards are used to track progress in Permuted-MNIST
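Permuted-MNIST itself is simple to construct: each task applies one fixed random pixel permutation to every MNIST image, so the input statistics change per task while the classification problem stays the same. A minimal sketch (NumPy only; the random stand-in array replaces the real MNIST pixels, and `make_permuted_task` is an illustrative helper name):

```python
import numpy as np

def make_permuted_task(images, seed):
    """Apply one fixed random pixel permutation to every image in a task.

    images: array of shape (N, 28, 28) or (N, 784); the same permutation
    is reused for all images so the task is internally consistent.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(28 * 28)          # one permutation per task
    flat = images.reshape(len(images), -1)   # flatten to (N, 784)
    return flat[:, perm]

# Stand-in data; in practice `images` would be the real MNIST pixels.
mnist_like = np.random.rand(100, 28, 28)
tasks = [make_permuted_task(mnist_like, seed=t) for t in range(5)]
```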
Most implemented papers
Three scenarios for continual learning
Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning difficult for machine-learning systems.
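The paper distinguishes task-incremental, domain-incremental, and class-incremental learning by how much task information is available at test time. A rough sketch of the three evaluation regimes for a multi-head classifier; the domain-IL head-merging below is a simplification for illustration, not the paper's exact protocol:

```python
import torch

def evaluate_scenario(logits, scenario, task_id=None, classes_per_task=10):
    """Illustrate how the three scenarios differ at test time.

    logits: (batch, n_tasks * classes_per_task) output of a multi-head model.
    - Task-IL:   task identity is given; score only that task's head.
    - Domain-IL: task identity is hidden, but the label space is shared,
                 so predictions stay within classes_per_task classes.
    - Class-IL:  the model must pick among all classes of all tasks.
    """
    if scenario == "task":                     # task identity provided
        lo = task_id * classes_per_task
        return logits[:, lo:lo + classes_per_task].argmax(dim=1)
    if scenario == "domain":                   # shared label space
        n_tasks = logits.shape[1] // classes_per_task
        merged = logits.view(-1, n_tasks, classes_per_task).max(dim=1).values
        return merged.argmax(dim=1)
    return logits.argmax(dim=1)                # class-IL: all classes compete
```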
Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNNs
Using unitary (instead of general) matrices in artificial neural networks (ANNs) is a promising way to solve the gradient explosion/vanishing problem, as well as to enable ANNs to learn long-term correlations in the data.
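As a sketch of the idea, the recurrent matrix can be kept exactly orthogonal (the real-valued analogue of unitary) by parameterizing it as the matrix exponential of a skew-symmetric matrix, so its eigenvalues stay on the unit circle and hidden-state norms are preserved. This generic construction is not the paper's efficient EUNN parameterization, which composes Givens-rotation layers; the paper also pairs unitary matrices with a modReLU nonlinearity rather than the tanh used here:

```python
import torch
import torch.nn as nn

class OrthogonalRNNCell(nn.Module):
    """Recurrent cell whose transition matrix is exactly orthogonal."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.log_w = nn.Parameter(torch.randn(hidden_size, hidden_size) * 0.01)
        self.w_in = nn.Linear(input_size, hidden_size)

    def forward(self, x, h):
        skew = self.log_w - self.log_w.T   # skew-symmetric => exp is orthogonal
        w_hh = torch.matrix_exp(skew)      # eigenvalues on the unit circle
        return torch.tanh(h @ w_hh.T + self.w_in(x))
```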
Generative replay with feedback connections as a general strategy for continual learning
A major obstacle to developing artificial intelligence applications capable of true lifelong learning is that artificial neural networks quickly or catastrophically forget previously learned tasks when trained on a new one.
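Generative replay mitigates this by interleaving real data from the current task with samples from a generator trained on earlier tasks, labeled by the previous model (distillation). A schematic training step, assuming a generator with a hypothetical `.sample()` interface and soft-target cross-entropy (supported in PyTorch >= 1.10):

```python
import torch
import torch.nn.functional as F

def replay_train_step(model, prev_model, generator, real_x, real_y, opt,
                      replay_weight=0.5):
    """One optimization step of generative replay, schematically."""
    opt.zero_grad()
    # Loss on the current task's real data.
    loss_new = F.cross_entropy(model(real_x), real_y)
    # Replay: sample pseudo-data for old tasks and match the old model's
    # outputs (generator.sample is an assumed interface, not a fixed API).
    with torch.no_grad():
        replay_x = generator.sample(len(real_x))
        soft_targets = F.softmax(prev_model(replay_x), dim=1)
    loss_old = F.cross_entropy(model(replay_x), soft_targets)
    loss = (1 - replay_weight) * loss_new + replay_weight * loss_old
    loss.backward()
    opt.step()
    return loss.item()
```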
Low-rank passthrough neural networks
Various common deep learning architectures, such as LSTMs, GRUs, ResNets and Highway Networks, employ state passthrough connections that support training with high feed-forward depth or recurrence over many time steps.
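A sketch of the idea: factor each hidden-to-hidden weight matrix as a product of two thin matrices, cutting parameters from d^2 to 2dr, while keeping the gated passthrough path that eases gradient flow over many steps. The cell below is illustrative, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class LowRankPassthroughCell(nn.Module):
    """Highway-style state passthrough with low-rank state transforms."""
    def __init__(self, input_size, hidden_size, rank):
        super().__init__()
        self.u = nn.Linear(rank, hidden_size, bias=False)   # U: (d, r)
        self.v = nn.Linear(hidden_size, rank, bias=False)   # V: (r, d)
        self.w_in = nn.Linear(input_size, hidden_size)
        self.gate = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x, h):
        delta = torch.tanh(self.u(self.v(h)) + self.w_in(x))    # low-rank update
        g = torch.sigmoid(self.gate(torch.cat([x, h], dim=1)))  # passthrough gate
        return h + g * (delta - h)   # h passes through unchanged when g ~ 0
```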
HiPPO: Recurrent Memory with Optimal Polynomial Projections
A central problem in learning from sequential data is representing cumulative history in an incremental fashion as more data is processed.
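HiPPO maintains the coefficients of an optimal polynomial projection of the full history via a simple online recurrence. A sketch of the HiPPO-LegS variant; the (A, B) entries and the discrete update below follow the paper's description as read here, and should be checked against the reference implementation:

```python
import numpy as np

def hippo_legs_matrices(n):
    """HiPPO-LegS matrices: A[i,j] = sqrt(2i+1)*sqrt(2j+1) for i > j,
    i+1 on the diagonal, 0 above it; B[i] = sqrt(2i+1)."""
    q = np.sqrt(2 * np.arange(n) + 1)
    A = np.tril(np.outer(q, q), -1) + np.diag(np.arange(n) + 1)
    return A, q.copy()

def hippo_legs_online(f, n=32):
    """Incrementally update coefficients c of a degree-n polynomial memory
    of the sequence f: c_{k+1} = (I - A/k) c_k + (1/k) B f_k."""
    A, B = hippo_legs_matrices(n)
    c = np.zeros(n)
    I = np.eye(n)
    for k, fk in enumerate(f, start=1):
        c = (I - A / k) @ c + (B / k) * fk
    return c

coeffs = hippo_legs_online(np.sin(np.linspace(0, 10, 1000)))
```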
IGLOO: Slicing the Features Space to Represent Sequences
One notable issue is the relative difficulty of dealing with long sequences (i.e., more than 20,000 steps).
Improving and Understanding Variational Continual Learning
In the continual learning setting, tasks are encountered sequentially.
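Variational Continual Learning handles this sequence by making the posterior learned after each task the prior for the next, so only the KL term of the objective changes between tasks: ELBO = E_q[log p(D_t | w)] - KL(q_t(w) || q_{t-1}(w)). A minimal mean-field Gaussian sketch of that regularizer:

```python
import torch
import torch.distributions as dist

def vcl_kl(mu_new, logvar_new, mu_old, logvar_old):
    """KL(q_t || q_{t-1}) for factorized Gaussian weight posteriors."""
    q_new = dist.Normal(mu_new, torch.exp(0.5 * logvar_new))
    q_old = dist.Normal(mu_old, torch.exp(0.5 * logvar_old))
    return dist.kl_divergence(q_new, q_old).sum()

# Usage on task t (previous posterior held fixed):
# loss = nll_task_t + vcl_kl(mu, logvar, mu_prev.detach(), logvar_prev.detach())
```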
Short-Term Memory Optimization in Recurrent Neural Networks by Autoencoder-based Initialization
Training RNNs to learn long-term dependencies is difficult due to vanishing gradients.
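The paper's remedy is to pretrain the recurrent weights with an unsupervised reconstruction objective before tackling the supervised task. The sketch below uses a per-timestep reconstruction loss and stand-in data; it is illustrative rather than the paper's exact pretraining protocol:

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=1, hidden_size=64, batch_first=True)
decoder = nn.Linear(64, 1)
opt = torch.optim.Adam(list(rnn.parameters()) + list(decoder.parameters()))

x = torch.randn(32, 100, 1)   # stand-in sequences: (batch, time, features)
for _ in range(100):          # unsupervised pretraining phase
    opt.zero_grad()
    h, _ = rnn(x)
    loss = nn.functional.mse_loss(decoder(h), x)  # reconstruct the input
    loss.backward()
    opt.step()

# The supervised model starts from the pretrained recurrent weights.
task_rnn = nn.RNN(input_size=1, hidden_size=64, batch_first=True)
task_rnn.load_state_dict(rnn.state_dict())
```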
CKConv: Continuous Kernel Convolution For Sequential Data
Convolutional networks are unable to handle sequences of unknown size and their memory horizon must be defined a priori.
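CKConv instead generates the kernel with a small network evaluated at relative positions, so the kernel can be sampled at any length and the memory horizon need not be fixed in advance. A sketch with a plain ReLU MLP standing in for the paper's SIREN-style kernel network:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContinuousKernelConv1d(nn.Module):
    """Convolution whose kernel is a continuous function of position."""
    def __init__(self, in_ch, out_ch, hidden=32):
        super().__init__()
        self.in_ch, self.out_ch = in_ch, out_ch
        self.kernel_net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, out_ch * in_ch),
        )

    def forward(self, x):                    # x: (batch, in_ch, length)
        length = x.shape[-1]
        pos = torch.linspace(-1.0, 1.0, length).unsqueeze(-1)
        k = self.kernel_net(pos)             # (length, out_ch * in_ch)
        k = k.T.reshape(self.out_ch, self.in_ch, length)
        # Pad and truncate so each output depends only on past inputs.
        return F.conv1d(x, k, padding=length - 1)[..., :length]
```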
Shared and Private VAEs with Generative Replay for Continual Learning
We propose a hybrid continual learning model, better suited to real-world scenarios, that combines a task-invariant shared variational autoencoder with T task-specific variational autoencoders.
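Schematically, the architecture pairs one shared VAE with a list of per-task VAEs whose decoders can later serve generative replay. The sketch below uses a toy stand-in VAE and illustrative module names, not the paper's exact networks:

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Minimal stand-in VAE: encoder to (mu, logvar), reparameterized sample."""
    def __init__(self, dim=784, z=16):
        super().__init__()
        self.enc = nn.Linear(dim, 2 * z)
        self.dec = nn.Linear(z, dim)

    def encode(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

class SharedPrivateVAEs(nn.Module):
    """One task-invariant shared VAE plus T task-specific VAEs."""
    def __init__(self, n_tasks):
        super().__init__()
        self.shared = TinyVAE()                                  # task-invariant
        self.private = nn.ModuleList(TinyVAE() for _ in range(n_tasks))

    def forward(self, x, task_id):
        # Shared features for all tasks, private features for this task.
        return self.shared.encode(x), self.private[task_id].encode(x)
```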