Alleviating Sequence Information Loss with Data Overlapping and Prime Batch Sizes

In sequence modeling tasks, token order matters, but this information can be partially lost when the sequence is discretized into data points. In this paper, we study the imbalance between token pairs that are included together in data points and those that are not. We call this token order imbalance (TOI) and link the partial loss of sequence information to diminished performance of the system as a whole, in both text and speech processing tasks. We then provide a mechanism, Alleviated TOI, that leverages the full token order information by iteratively overlapping the token composition of data points. For recurrent networks, we use prime numbers for the batch size to avoid redundancies when building batches from overlapped data points. The proposed method achieves state-of-the-art performance on both text and speech related tasks.
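The abstract does not spell out implementation details, but the two ingredients it names, overlapping data points and a prime batch size, can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' code: the names `make_overlapped_windows`, `bptt`, `stride`, and `largest_prime_at_most` are assumptions introduced here for clarity.

```python
# Sketch: build overlapped fixed-length windows from a token stream and
# round the batch size down to a prime, so batches built from overlapped
# windows do not repeat the same alignment pattern.
# All names and parameters here are illustrative assumptions.

def is_prime(n: int) -> bool:
    """Simple primality check used to pick a prime batch size."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def largest_prime_at_most(n: int) -> int:
    """Largest prime <= n, used to round a desired batch size down to a prime."""
    while n > 2 and not is_prime(n):
        n -= 1
    return n

def make_overlapped_windows(tokens, bptt: int, stride: int):
    """Slice a token stream into windows of length `bptt` that overlap by
    (bptt - stride), so token pairs that would fall on a window boundary
    still co-occur inside some window."""
    return [tokens[i:i + bptt]
            for i in range(0, len(tokens) - bptt + 1, stride)]

if __name__ == "__main__":
    tokens = list(range(100))                      # stand-in for a token-id stream
    windows = make_overlapped_windows(tokens, bptt=10, stride=5)
    batch_size = largest_prime_at_most(32)         # e.g. 31 instead of 32
    print(len(windows), "overlapped windows, batch size", batch_size)
```

With `stride < bptt`, every consecutive token pair appears in the interior of at least one window, which is the intuition behind alleviating the token order imbalance described above.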

PDF Abstract (CoNLL 2019)

Results from the Paper


Task                 Dataset        Model                 Metric                  Value   Global Rank
Language Modelling   WikiText-103   AWD-LSTM-MoS + ATOI   Validation perplexity   31.92   #31
Language Modelling   WikiText-103   AWD-LSTM-MoS + ATOI   Test perplexity         32.85   #74
Language Modelling   WikiText-2     AWD-LSTM + ATOI       Validation perplexity   67.47   #22
Language Modelling   WikiText-2     AWD-LSTM + ATOI       Test perplexity         64.73   #30
Language Modelling   WikiText-2     AWD-LSTM + ATOI       Number of params        33M     #23
