Sparsifying Transformer Models with Trainable Representation Pooling

ACL ARR November 2021 · Anonymous

We propose a novel method to sparsify attention in the Transformer model by learning to select the most informative token representations during training, thus focusing on the task-specific parts of the input. The quadratic time and memory complexity is reduced to sublinear by means of a robust trainable top-$k$ operator. Our experiments on a challenging long-document summarization task show that even our simple baseline performs comparably to the current SOTA, and that with trainable pooling we retain its top quality while being $1.8\times$ faster during training, $4.5\times$ faster during inference, and up to $13\times$ more computationally efficient in the decoder.
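To make the idea concrete, below is a minimal sketch of score-based top-$k$ pooling of token representations, written in PyTorch. It is not the paper's exact operator: the names (`TopKPooling`, `scorer`) and the hard `topk` selection with a sigmoid-gated gradient path are illustrative assumptions, whereas the paper trains a differentiable (soft) top-$k$; the sketch only shows where such a pooling layer sits between encoder output and decoder input.

```python
import torch
import torch.nn as nn

class TopKPooling(nn.Module):
    """Illustrative sketch (not the paper's operator): score each token with a
    learned linear scorer and keep the k highest-scoring representations."""

    def __init__(self, hidden_dim: int, k: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)  # learned per-token saliency score
        self.k = k

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim)
        scores = self.scorer(hidden_states).squeeze(-1)        # (batch, seq_len)
        topk_scores, topk_idx = scores.topk(self.k, dim=-1)    # hard top-k selection
        topk_idx, order = topk_idx.sort(dim=-1)                # keep original token order
        topk_scores = topk_scores.gather(-1, order)
        gathered = hidden_states.gather(
            1, topk_idx.unsqueeze(-1).expand(-1, -1, hidden_states.size(-1))
        )                                                      # (batch, k, hidden_dim)
        # Gating by the sigmoid of the scores keeps the scorer in the gradient
        # path even though the index selection itself is non-differentiable.
        return gathered * torch.sigmoid(topk_scores).unsqueeze(-1)


# Usage: pool a 4096-token encoding down to 512 representations before decoding,
# so decoder cross-attention scales with k rather than the full input length.
pool = TopKPooling(hidden_dim=768, k=512)
x = torch.randn(2, 4096, 768)
print(pool(x).shape)  # torch.Size([2, 512, 768])
```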

Task                    Dataset                       Model                 Metric    Value   Global Rank
Document Summarization  Arxiv HEP-TH citation graph   DeepPyramidion        ROUGE-1   47.15   # 1
Text Summarization      Arxiv HEP-TH citation graph   Blockwise (baseline)  ROUGE-1   46.85   # 12
Text Summarization      Arxiv HEP-TH citation graph   Blockwise (baseline)  ROUGE-2   19.39   # 11
Text Summarization      Arxiv HEP-TH citation graph   DeepPyramidion        ROUGE-1   47.15   # 11
Text Summarization      Arxiv HEP-TH citation graph   DeepPyramidion        ROUGE-2   19.99   # 10
Document Summarization  arXiv Summarization Dataset   DeepPyramidion        ROUGE-2   19.99   # 1
