Primer

Primer is a Transformer-based architecture that improves on the original Transformer with two modifications found through neural architecture search: squared ReLU activations in the feed-forward block, and depthwise convolutions added after each of the multi-head attention projections (query, key, and value), yielding a new module called Multi-DConv-Head Attention (MDHA).

Source: Primer: Searching for Efficient Transformers for Language Modeling
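Below is a minimal PyTorch sketch of the two modifications. It assumes a causal (left-padded) depthwise convolution of kernel width 3 along the sequence axis, consistent with the autoregressive language-modeling setting the paper targets; class and argument names are illustrative, not taken from the paper's released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SquaredReLU(nn.Module):
    """Squared ReLU activation used in Primer's feed-forward block: max(x, 0)^2."""
    def forward(self, x):
        return F.relu(x) ** 2

class MultiDConvHeadAttention(nn.Module):
    """Sketch of Multi-DConv-Head Attention (MDHA): standard multi-head
    attention with a 3x1 depthwise convolution applied along the sequence
    axis after each of the Q, K, V projections."""
    def __init__(self, d_model, n_heads, kernel_size=3):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.kernel_size = kernel_size
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)
        # groups == channels makes the convolution depthwise: one filter per
        # channel, so each head's channels are convolved independently.
        self.q_conv = nn.Conv1d(d_model, d_model, kernel_size, groups=d_model)
        self.k_conv = nn.Conv1d(d_model, d_model, kernel_size, groups=d_model)
        self.v_conv = nn.Conv1d(d_model, d_model, kernel_size, groups=d_model)

    def _dconv(self, x, conv):
        # (batch, seq, d_model) -> conv over the seq axis with causal padding,
        # so position t only mixes in positions <= t.
        x = x.transpose(1, 2)                    # (batch, d_model, seq)
        x = F.pad(x, (self.kernel_size - 1, 0))  # left-pad only
        return conv(x).transpose(1, 2)           # (batch, seq, d_model)

    def forward(self, x, mask=None):
        b, t, d = x.shape
        q = self._dconv(self.q_proj(x), self.q_conv)
        k = self._dconv(self.k_proj(x), self.k_conv)
        v = self._dconv(self.v_proj(x), self.v_conv)

        def split(z):  # (batch, seq, d_model) -> (batch, heads, seq, d_head)
            return z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        if mask is not None:
            scores = scores.masked_fill(mask, float('-inf'))
        out = (scores.softmax(dim=-1) @ v).transpose(1, 2).reshape(b, t, d)
        return self.out_proj(out)

# Example usage with illustrative sizes:
# attn = MultiDConvHeadAttention(d_model=512, n_heads=8)
# y = attn(torch.randn(2, 16, 512))  # (batch, seq, d_model)
```

Because the convolution is depthwise over all of d_model, applying it before or after splitting into heads is equivalent; doing it before keeps the sketch short.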
