Compare, Compress and Propagate: Enhancing Neural Architectures with Alignment Factorization for Natural Language Inference

EMNLP 2018 · Yi Tay, Luu Anh Tuan, Siu Cheung Hui

This paper presents a new deep learning architecture for Natural Language Inference (NLI). Firstly, we introduce a new architecture in which alignment pairs are compared, compressed, and then propagated to upper layers for enhanced representation learning. Secondly, we adopt factorization layers for efficient and expressive compression of alignment vectors into scalar features, which are then used to augment the base word representations. Our approach is designed to be conceptually simple, compact, and yet powerful. We conduct experiments on three popular benchmarks, SNLI, MultiNLI and SciTail, achieving competitive performance on all of them. A lightweight parameterization of our model also enjoys a $\approx 3$ times reduction in parameter size compared to existing state-of-the-art models, e.g., ESIM and DIIN, while maintaining competitive performance. Additionally, visual analysis shows that our propagated features are highly interpretable.
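
The compare-compress-propagate idea can be illustrated with a short, self-contained sketch. The snippet below is not the authors' code: it assumes a standard factorization-machine (FM) formulation for the compression step, uses PyTorch, and the class name `FactorizationCompression` and the `num_factors` setting are illustrative. Each alignment pair yields comparison vectors (here concatenation, subtraction, and element-wise product), each comparison vector is compressed to a single scalar by an FM layer, and the resulting scalars are concatenated onto the base word representation before being propagated to the upper layers.

```python
import torch
import torch.nn as nn


class FactorizationCompression(nn.Module):
    """Hypothetical sketch: a factorization-machine layer that compresses a
    comparison vector into one scalar feature per token."""

    def __init__(self, input_dim: int, num_factors: int = 10):
        super().__init__()
        self.linear = nn.Linear(input_dim, 1)                    # first-order term
        self.factors = nn.Parameter(0.01 * torch.randn(input_dim, num_factors))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_dim) comparison vectors.
        first_order = self.linear(x).squeeze(-1)                 # (batch, seq_len)
        # FM pairwise-interaction term: 0.5 * sum_k [(xV)_k^2 - (x^2 V^2)_k]
        xv = x @ self.factors                                    # (batch, seq_len, k)
        x2v2 = (x ** 2) @ (self.factors ** 2)
        second_order = 0.5 * (xv ** 2 - x2v2).sum(-1)            # (batch, seq_len)
        return first_order + second_order                        # scalar per token


if __name__ == "__main__":
    batch, seq_len, dim = 2, 5, 300
    a = torch.randn(batch, seq_len, dim)          # base word representations
    a_align = torch.randn(batch, seq_len, dim)    # soft-aligned counterparts
    # Three comparison operations, each compressed to one scalar per word.
    concat_fm = FactorizationCompression(2 * dim)
    sub_fm = FactorizationCompression(dim)
    mul_fm = FactorizationCompression(dim)
    feats = torch.stack([
        concat_fm(torch.cat([a, a_align], dim=-1)),
        sub_fm(a - a_align),
        mul_fm(a * a_align),
    ], dim=-1)                                    # (batch, seq_len, 3)
    augmented = torch.cat([a, feats], dim=-1)     # propagate to upper layers
    print(augmented.shape)                        # torch.Size([2, 5, 303])
```

Because each comparison vector collapses to a scalar, the augmented representation grows only by a handful of dimensions, which is consistent with the lightweight parameterization claimed above.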

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Natural Language Inference | SciTail | CAFE | Accuracy | 83.3 | #7 |
| Natural Language Inference | SNLI | 300D CAFE Ensemble | % Test Accuracy | 89.3 | #17 |
| Natural Language Inference | SNLI | 300D CAFE Ensemble | % Train Accuracy | 92.5 | #30 |
| Natural Language Inference | SNLI | 300D CAFE Ensemble | Parameters | 17.5m | #4 |
| Natural Language Inference | SNLI | 300D CAFE (no cross-sentence attention) | % Test Accuracy | 85.9 | #65 |
| Natural Language Inference | SNLI | 300D CAFE (no cross-sentence attention) | % Train Accuracy | 87.3 | #57 |
| Natural Language Inference | SNLI | 300D CAFE (no cross-sentence attention) | Parameters | 3.7m | #4 |
| Natural Language Inference | SNLI | 300D CAFE | % Test Accuracy | 88.5 | #32 |
| Natural Language Inference | SNLI | 300D CAFE | % Train Accuracy | 89.8 | #47 |
| Natural Language Inference | SNLI | 300D CAFE | Parameters | 4.7m | #4 |