Permuted-MNIST

13 papers with code • 0 benchmarks • 0 datasets

Permuted MNIST applies a fixed random permutation to the pixel order of MNIST images. It is used as a continual-learning benchmark, where each task uses a different permutation and the model must learn new tasks without forgetting earlier ones, and as a long-sequence benchmark for recurrent models, where the permuted pixels are fed to the network one at a time.
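
The continual-learning papers below build a sequence of tasks from these permutations. A minimal sketch of that construction, assuming PyTorch and torchvision are available and using a hypothetical helper name make_permuted_mnist_tasks:

```python
# Sketch only: each continual-learning task applies one fixed random pixel
# permutation to every MNIST image; the labels are unchanged.
import torch
from torchvision import datasets, transforms

def make_permuted_mnist_tasks(num_tasks=10, seed=0, root="./data"):
    g = torch.Generator().manual_seed(seed)
    tasks = []
    for _ in range(num_tasks):
        perm = torch.randperm(28 * 28, generator=g)  # this task's permutation
        permute = transforms.Lambda(
            lambda x, p=perm: x.view(-1)[p].view(1, 28, 28))
        tasks.append(datasets.MNIST(
            root, train=True, download=True,
            transform=transforms.Compose([transforms.ToTensor(), permute])))
    return tasks
```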

Most implemented papers

Three scenarios for continual learning

GMvandeVen/continual-learning 15 Apr 2019

Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning difficult for machine learning systems.

Generative replay with feedback connections as a general strategy for continual learning

GMvandeVen/continual-learning 27 Sep 2018

A major obstacle to developing artificial intelligence applications capable of true lifelong learning is that artificial neural networks quickly or catastrophically forget previously learned tasks when trained on a new one.

Low-rank passthrough neural networks

Avmb/lowrank-gru WS 2018

Various common deep learning architectures, such as LSTMs, GRUs, ResNets and Highway Networks, employ state passthrough connections that support training with high feed-forward depth or recurrence over many time steps.

Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNNs

jingli9111/EUNN-tensorflow ICML 2017

Using unitary (instead of general) matrices in artificial neural networks (ANNs) is a promising way to solve the gradient explosion/vanishing problem, as well as to enable ANNs to learn long-term correlations in the data.
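
As a quick illustration of why unitarity helps (a toy sketch with a random orthogonal matrix, not the tunable EUNN parameterization from the paper): a norm-preserving matrix keeps the hidden state, and hence the backpropagated signal, at a constant scale across many recurrent steps, whereas a generic matrix shrinks or inflates it exponentially.

```python
# Toy illustration (not the EUNN parameterization): an orthogonal matrix
# preserves vector norms, so 1000 recurrent steps neither explode nor vanish.
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((64, 64)))       # random orthogonal matrix
W = 0.9 * rng.standard_normal((64, 64)) / np.sqrt(64)    # generic matrix, spectral radius ~0.9

h_q = h_w = rng.standard_normal(64)
for _ in range(1000):
    h_q, h_w = Q @ h_q, W @ h_w

print(np.linalg.norm(h_q))  # stays at its initial scale
print(np.linalg.norm(h_w))  # collapses toward zero
```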

HiPPO: Recurrent Memory with Optimal Polynomial Projections

HazyResearch/hippo-code NeurIPS 2020

A central problem in learning from sequential data is representing cumulative history in an incremental fashion as more data is processed.

IGLOO: Slicing the Features Space to Represent Sequences

hello1910/eeg 9 Jul 2018

One notable issue is the relative difficulty of dealing with long sequences (i.e. more than 20,000 steps).

Improving and Understanding Variational Continual Learning

nvcuong/variational-continual-learning 6 May 2019

In the continual learning setting, tasks are encountered sequentially.

Short-Term Memory Optimization in Recurrent Neural Networks by Autoencoder-based Initialization

AntonioCarta/rnn_autoencoding_neurips2020 5 Nov 2020

Training RNNs to learn long-term dependencies is difficult due to vanishing gradients.

CKConv: Continuous Kernel Convolution For Sequential Data

dwromero/ckconv ICLR 2022

Convolutional networks are unable to handle sequences of unknown size and their memory horizon must be defined a priori.
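
The idea behind continuous kernels can be sketched as follows (a simplified illustration, not the paper's exact CKConv layer; the class name ContinuousKernelConv1d is hypothetical): the kernel values are generated by a small network from relative positions, so the kernel can be sampled at whatever length the input happens to have instead of being fixed in advance.

```python
# Simplified sketch (not the exact CKConv layer): the kernel is a small MLP
# over relative positions, so it can be sampled to match any input length.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContinuousKernelConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, hidden=32):
        super().__init__()
        self.in_ch, self.out_ch = in_ch, out_ch
        # MLP mapping a relative position t in [-1, 1] to an (out_ch x in_ch) slice.
        self.kernel_net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, in_ch * out_ch))

    def forward(self, x):                          # x: (batch, in_ch, length)
        length = x.shape[-1]
        t = torch.linspace(-1, 1, length).unsqueeze(-1)        # (length, 1)
        k = self.kernel_net(t).view(length, self.out_ch, self.in_ch)
        k = k.permute(1, 2, 0)                                 # (out_ch, in_ch, length)
        # Causal convolution whose kernel always spans the full sequence.
        return F.conv1d(x, k, padding=length - 1)[..., :length]
```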

Shared and Private VAEs with Generative Replay for Continual Learning

DVAEsCL/DVAEsCL 17 May 2021

We propose a hybrid continual learning model, better suited to real-world scenarios, that combines a task-invariant shared variational autoencoder with T task-specific variational autoencoders.