EMNLP 2021 • Xiongyi Zhang, Jan-Willem van de Meent, Byron C. Wallace
Representations from large pretrained models such as BERT encode a range of features into monolithic vectors, affording strong predictive accuracy across a multitude of downstream tasks.