Setting the Record Straight on Transformer Oversmoothing

9 Jan 2024 · Gbètondji J-S Dovonon, Michael M. Bronstein, Matt J. Kusner

Transformer-based models have recently become wildly successful across a diverse set of domains. At the same time, recent work has argued that Transformers are inherently low-pass filters that gradually oversmooth their inputs. This is worrisome because oversmoothing limits generalization, especially as model depth increases. A natural question is: how can Transformers achieve these successes given this shortcoming? In this work we show that Transformers are not, in fact, inherently low-pass filters. Instead, whether a Transformer oversmooths or not depends on the eigenspectrum of its update equations. Further, depending on the task, smoothing does not harm generalization as model depth increases. Our analysis extends prior work on oversmoothing and on the closely related phenomenon of rank collapse. Based on this analysis, we derive a simple way to parameterize the weights of the Transformer update equations that allows control over their filtering behavior. For image classification tasks we show that smoothing, rather than sharpening, can improve generalization, whereas for text generation tasks Transformers forced to either smooth or sharpen generalize worse. We hope that this work gives ML researchers and practitioners additional insight and leverage when developing future Transformer models.
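The eigenspectrum argument can be made concrete with a small numerical check. The sketch below is an illustrative heuristic under simplifying assumptions, not the paper's exact criterion: it treats a single attention layer as the linear update X → A X W_V with A row-stochastic, takes the all-ones eigendirection of A as the low-frequency ("smooth") component, and compares the rough growth factor of the remaining high-frequency directions, |λ₂(A)| times the spectral gain of W_V, against 1. The function name filter_behaviour and the thresholding rule are our own choices for illustration.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def filter_behaviour(X, W_Q, W_K, W_V):
    """Heuristically classify one simplified self-attention update
    X -> A @ X @ W_V as smoothing or sharpening via eigenvalue magnitudes.

    A is row-stochastic, so its largest eigenvalue is 1 and the matching
    right eigenvector is the all-ones 'smooth' direction. The remaining,
    high-frequency directions are scaled by roughly
    |lambda_2(A)| * spectral_gain(W_V) per layer: below 1 they shrink
    (smoothing), above 1 they can grow (sharpening). Illustrative only;
    this is not the paper's exact condition.
    """
    A = softmax((X @ W_Q) @ (X @ W_K).T / np.sqrt(W_Q.shape[1]), axis=-1)
    eig_A = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
    lam2 = eig_A[1]                                  # 2nd-largest |eigenvalue| of A
    gain_V = np.max(np.abs(np.linalg.eigvals(W_V)))  # spectral gain of W_V
    hf_gain = lam2 * gain_V                          # rough high-frequency growth factor
    return ("sharpens (high frequencies can grow)" if hf_gain > 1.0
            else "smooths (high frequencies shrink)")

rng = np.random.default_rng(0)
n, d = 16, 8
X = rng.normal(size=(n, d))
W_Q, W_K = rng.normal(size=(d, d)), rng.normal(size=(d, d))
print(filter_behaviour(X, W_Q, W_K, 0.3 * np.eye(d)))  # small value gain: smooths
print(filter_behaviour(X, W_Q, W_K, 5.0 * np.eye(d)))  # large value gain: may sharpen
```

Under these assumptions, layers whose product stays below 1 progressively damp high frequencies; rescaling W_V is one crude knob for pushing the update toward sharpening, in the spirit of the weight parameterization the abstract describes.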
