Deep Equilibrium Models

We present a new approach to modeling sequential data: the deep equilibrium model (DEQ). Motivated by the observation that the hidden layers of many existing deep sequence models converge towards some fixed point, we propose the DEQ approach that directly finds these equilibrium points via root-finding. Such a method is equivalent to running an infinite-depth (weight-tied) feedforward network, but has the notable advantage that we can analytically backpropagate through the equilibrium point using implicit differentiation. Using this approach, training and prediction in these networks require only constant memory, regardless of the effective "depth" of the network. We demonstrate how DEQs can be applied to two state-of-the-art deep sequence models: self-attention transformers and trellis networks. On large-scale language modeling tasks, such as the WikiText-103 benchmark, we show that DEQs 1) often improve performance over these state-of-the-art models (for similar parameter counts); 2) have similar computational requirements to existing models; and 3) vastly reduce memory consumption (often the bottleneck for training large sequence models), with a reduction of up to 88% in our experiments. The code is available at https://github.com/locuslab/deq.
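For intuition, the following is a minimal PyTorch sketch of the core idea, not the official implementation in the repository above: the names `DEQFixedPoint` and `ResidualCell` are illustrative, and the solver is plain fixed-point iteration where the paper uses Broyden's method. It shows the two ingredients the abstract describes: the forward pass finds the equilibrium z* = f(z*, x) by root-finding without storing intermediate iterates, and the backward pass uses implicit differentiation through z* via a hook that solves the adjoint fixed-point equation.

```python
import torch
import torch.nn as nn


class DEQFixedPoint(nn.Module):
    """Sketch of a deep equilibrium layer: solve z* = f(z*, x) by fixed-point
    iteration, then backpropagate through z* via implicit differentiation."""

    def __init__(self, f, max_iter=50, tol=1e-4):
        super().__init__()
        self.f = f              # weight-tied cell shared across all "layers"
        self.max_iter = max_iter
        self.tol = tol

    def _solve(self, g, z0):
        # Naive fixed-point iteration z <- g(z); the paper uses Broyden's method.
        z = z0
        for _ in range(self.max_iter):
            z_new = g(z)
            if (z_new - z).norm() < self.tol * (z.norm() + 1e-8):
                return z_new
            z = z_new
        return z

    def forward(self, x):
        # Forward pass: find the equilibrium without storing any iterations,
        # so memory stays constant regardless of the effective depth.
        with torch.no_grad():
            z_star = self._solve(lambda z: self.f(z, x), torch.zeros_like(x))

        # Re-attach z* to the autograd graph with a single function evaluation.
        z_star = self.f(z_star, x)

        if z_star.requires_grad:
            # Implicit differentiation: dL/d(.) = u^T df/d(.), where u solves
            # the linear fixed-point equation u = (df/dz*)^T u + dL/dz*.
            z0 = z_star.clone().detach().requires_grad_()
            f0 = self.f(z0, x)

            def backward_hook(grad):
                return self._solve(
                    lambda u: torch.autograd.grad(f0, z0, u, retain_graph=True)[0] + grad,
                    grad,
                )

            z_star.register_hook(backward_hook)
        return z_star


class ResidualCell(nn.Module):
    """Toy weight-tied cell f(z, x); in the paper this role is played by a
    transformer or trellis-network block."""

    def __init__(self, d):
        super().__init__()
        self.lin = nn.Linear(d, d)
        self.norm = nn.LayerNorm(d)

    def forward(self, z, x):
        return self.norm(torch.tanh(self.lin(z)) + x)


# Usage: one implicit "infinitely deep" layer in place of a stack of layers.
deq = DEQFixedPoint(ResidualCell(64))
x = torch.randn(8, 64)
loss = deq(x).pow(2).mean()
loss.backward()   # gradients flow through the implicit backward hook
```

Because the backward hook only needs one extra solve of the same dimensionality as the hidden state, the memory cost does not grow with the number of solver iterations, which is the source of the memory savings reported in the abstract.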

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Language Modelling | Penn Treebank (Word Level) | DEQ-TrellisNet | Test perplexity | 57.1 | #29 |
| Language Modelling | Penn Treebank (Word Level) | DEQ-TrellisNet | Params | 24M | #7 |
| Language Modelling | WikiText-103 | DEQ-Transformer (medium, adaptive embed) | Test perplexity | 23.2 | #50 |
| Language Modelling | WikiText-103 | DEQ-Transformer (medium, adaptive embed) | Number of params | 110M | #41 |
| Language Modelling | WikiText-103 | DEQ-Transformer (small) | Test perplexity | 32.4 | #73 |
| Language Modelling | WikiText-103 | DEQ-Transformer (small) | Number of params | 138M | #35 |
| Language Modelling | WikiText-103 | DEQ-TrellisNet | Test perplexity | 29.0 | #66 |
| Language Modelling | WikiText-103 | DEQ-TrellisNet | Number of params | 180M | #28 |
