A Structured Self-attentive Sentence Embedding

9 Mar 2017 · Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, Yoshua Bengio

This paper proposes a new model for extracting an interpretable sentence embedding by introducing self-attention. Instead of using a vector, we use a 2-D matrix to represent the embedding, with each row of the matrix attending to a different part of the sentence...
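The core mechanism can be sketched as follows: given the hidden states H of a bidirectional LSTM (one row per token), an attention matrix A = softmax(W_s2 tanh(W_s1 Hᵀ)) with r rows is computed, and the sentence embedding is the matrix M = AH. The dimensions and the random parameters below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy dimensions (illustrative): n tokens, BiLSTM hidden size u per
# direction, attention hidden size d_a, r attention hops (matrix rows).
n, u = 6, 8
d_a, r = 10, 3

rng = np.random.default_rng(0)
H = rng.standard_normal((n, 2 * u))      # BiLSTM hidden states, one row per token

# Trainable parameters of the self-attention MLP (randomly initialised here)
W_s1 = rng.standard_normal((d_a, 2 * u))
W_s2 = rng.standard_normal((r, d_a))

# A = softmax(W_s2 tanh(W_s1 H^T)): r attention distributions over the n tokens
A = softmax(W_s2 @ np.tanh(W_s1 @ H.T), axis=-1)   # shape (r, n)

# The sentence embedding is the r x 2u matrix M = A H
M = A @ H

assert A.shape == (r, n)
assert np.allclose(A.sum(axis=-1), 1.0)  # each hop is a distribution over tokens
assert M.shape == (r, 2 * u)
```

Each of the r rows of A is a separate softmax over the tokens, so each row of M can focus on a different part of the sentence, which is what makes the embedding inspectable.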


