Extracting Syntactic Trees from Transformer Encoder Self-Attentions

WS 2018  ·  David Mareček, Rudolf Rosa

This is a work in progress on extracting sentence tree structures from the encoder's self-attention weights when translating into another language using the Transformer neural network architecture. We visualize the structures and discuss their characteristics with respect to existing syntactic theories and annotations.
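The abstract does not specify the extraction procedure, and no code is published. As a hedged illustration only, one plausible approach is to symmetrize the attention weights between token pairs and take a maximum spanning tree over tokens; the function and the toy attention matrix below are hypothetical, not the authors' method:

```python
def attention_to_tree(scores):
    """Extract an undirected maximum spanning tree (Prim's algorithm)
    over token positions, scoring the edge (i, j) by the symmetrized
    attention weight scores[i][j] + scores[j][i]."""
    n = len(scores)
    in_tree = {0}          # start the tree from token 0
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j in in_tree:
                    continue
                w = scores[i][j] + scores[j][i]
                if best is None or w > best[0]:
                    best = (w, i, j)
        _, i, j = best
        in_tree.add(j)     # attach the strongest outside token
        edges.append((i, j))
    return edges

# Toy 4-token attention matrix (rows: queries, columns: keys),
# e.g. one head's weights averaged over layers -- purely illustrative.
attn = [
    [0.1, 0.6, 0.2, 0.1],
    [0.5, 0.1, 0.3, 0.1],
    [0.1, 0.4, 0.1, 0.4],
    [0.1, 0.1, 0.7, 0.1],
]
tree = attention_to_tree(attn)  # chain 0-1, 1-2, 2-3 for this matrix
```

In practice one would aggregate weights across heads and layers before extraction; the paper discusses how the resulting structures compare with existing syntactic annotations.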


