Learning to Remember More with Less Memorization

ICLR 2019 · Hung Le, Truyen Tran, Svetha Venkatesh

Memory-augmented neural networks consisting of a neural controller and an external memory have shown potential in long-term sequential learning. Current RAM-like memory models access memory at every timestep and thus do not effectively leverage the short-term memory held in the controller. We hypothesize that this writing scheme is suboptimal in memory utilization and introduces redundant computation. To validate our hypothesis, we derive a theoretical bound on the amount of information stored in a RAM-like system and formulate an optimization problem that maximizes this bound. The proposed solution, dubbed Uniform Writing, is proved optimal under the assumption that all timesteps contribute equally. To relax this assumption, we modify the original solution, resulting in a method termed Cached Uniform Writing. This method aims to balance maximizing memorization against forgetting via an overwriting mechanism. Through an extensive set of experiments, we empirically demonstrate the advantages of our solutions over other recurrent architectures, achieving state-of-the-art results on various sequential modeling tasks.
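The core idea can be illustrated with a minimal sketch. The schedule formula, the `memory.write` interface, and the cache summary below are simplifying assumptions for illustration, not the authors' exact implementation: instead of committing the controller state to external memory at every timestep, the writer commits only at a few roughly evenly spaced steps, buffering intermediate states in a small cache whose overwriting provides local forgetting.

```python
def uniform_writing_schedule(T, N):
    """Return N roughly evenly spaced write positions over a length-T sequence.

    Illustrative only: the paper derives the exact write interval from its
    theoretical bound on stored information.
    """
    return sorted({round((i + 1) * T / (N + 1)) - 1 for i in range(N)})


class CachedUniformWriter:
    """Sketch of Cached Uniform Writing (assumed interface, not the paper's API).

    Controller states are buffered in a bounded cache; a summary of the cache
    is written to the external RAM-like memory only at the scheduled steps,
    and the cache overwrites its oldest entries when full.
    """

    def __init__(self, memory, cache_size):
        self.memory = memory          # external memory object exposing .write()
        self.cache = []
        self.cache_size = cache_size

    def step(self, t, hidden_state, write_steps):
        self.cache.append(hidden_state)
        if len(self.cache) > self.cache_size:
            self.cache.pop(0)         # forgetting via overwriting the oldest state
        if t in write_steps:
            self.memory.write(self.summarize(self.cache))  # one write per interval
            self.cache.clear()

    def summarize(self, cache):
        # placeholder summary: average the cached states
        # (the paper instead attends over the cached timesteps)
        return sum(cache) / len(cache)
```

For example, `uniform_writing_schedule(T=100, N=4)` yields four write positions spread across the sequence, so the external memory is updated 4 times instead of 100.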

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Text Classification | AG News | DNC+CUW | Error | 6.10 | #6 |
| Sequential Image Classification | Sequential MNIST | DNC+CUW | Unpermuted Accuracy | 99.1% | #14 |
| Sequential Image Classification | Sequential MNIST | DNC+CUW | Permuted Accuracy | 96.3% | #19 |
| Text Classification | Yahoo! Answers | DNC+CUW | Accuracy | 74.30 | #5 |
| Sentiment Analysis | Yelp Binary classification | DNC+CUW | Error | 3.60 | #13 |
| Sentiment Analysis | Yelp Fine-grained classification | DNC+CUW | Error | 34.40 | #12 |
