
Understanding Recurrent Neural Architectures by Analyzing and Synthesizing Long Distance Dependencies in Benchmark Sequential Datasets

In order to build efficient deep recurrent neural architectures, it is essential to analyze the complexity of the long distance dependencies (LDDs) of the dataset being modeled. In this paper, we present a detailed analysis of the dependency decay curve exhibited by various datasets. Datasets sampled from a similar process (e.g., natural language, sequential MNIST, Strictly k-Piecewise languages, etc.) display variations in the properties of the dependency decay curve. Our analysis reveals the factors resulting in these variations, such as (i) the number of unique symbols in a dataset, (ii) the size of the dataset, (iii) the number of interacting symbols within a given LDD, and (iv) the distance between the interacting symbols. We test these factors by generating synthesized datasets of Strictly k-Piecewise languages. Another advantage of these synthesized datasets is that they enable targeted testing of deep recurrent neural architectures in terms of their ability to model LDDs with different characteristics. We also demonstrate that analyzing dependency decay curves can inform the selection of optimal hyper-parameters for SOTA deep recurrent neural architectures. This analysis can directly contribute to the development of more accurate and efficient sequential models.
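The abstract does not spell out how the dependency decay curve is computed; a common way to characterize it is to estimate the mutual information between symbol pairs separated by a distance d, for increasing d. The sketch below illustrates that approach on a toy symbolic sequence. The estimator, function names, and toy data are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch: estimate a dependency decay curve as the mutual information
# I(X_t; X_{t+d}) over increasing distances d, from empirical pair frequencies.
# This is an assumed formulation for illustration, not the paper's method.
from collections import Counter
import math
import random


def mutual_information_at_distance(sequence, d):
    """Estimate I(X_t; X_{t+d}) from empirical pair and marginal frequencies."""
    pairs = list(zip(sequence, sequence[d:]))
    if not pairs:
        return 0.0
    pair_counts = Counter(pairs)
    left_counts = Counter(a for a, _ in pairs)
    right_counts = Counter(b for _, b in pairs)
    n = len(pairs)
    mi = 0.0
    for (a, b), c in pair_counts.items():
        p_ab = c / n
        p_a = left_counts[a] / n
        p_b = right_counts[b] / n
        mi += p_ab * math.log2(p_ab / (p_a * p_b))
    return mi


def dependency_decay_curve(sequence, max_distance):
    """Return mutual information estimates for d = 1 .. max_distance."""
    return [mutual_information_at_distance(sequence, d)
            for d in range(1, max_distance + 1)]


if __name__ == "__main__":
    # Toy sequence over 4 symbols with an induced dependency at distance 20:
    # whenever an "a" occurs at position t, it is copied to position t + 20.
    random.seed(0)
    seq = [random.choice("abcd") for _ in range(5000)]
    for t in range(len(seq) - 20):
        if seq[t] == "a":
            seq[t + 20] = "a"
    for d, mi in enumerate(dependency_decay_curve(seq, 30), start=1):
        print(f"d={d:2d}  MI~{mi:.4f}")
```

On this toy data the curve stays near zero except for a peak around d = 20, illustrating how the shape of the decay curve exposes where a dataset's long distance dependencies lie.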
