Search Results for author: Kevin Ro Wang

Found 1 paper, 1 paper with code

Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space

1 code implementation • 28 Mar 2022 • Mor Geva, Avi Caciularu, Kevin Ro Wang, Yoav Goldberg

Transformer-based language models (LMs) are at the core of modern NLP, but their internal prediction construction process is opaque and largely not understood.
