Extracting Syntactic Trees from Transformer Encoder Self-Attentions
This is a work in progress on extracting sentence tree structures from the encoder's self-attention weights when translating into another language with the Transformer neural network architecture. We visualize the extracted structures and discuss their characteristics with respect to existing syntactic theories and annotations.
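The abstract does not specify the extraction algorithm itself. As a hedged illustration of the general idea, the sketch below derives a tree over tokens from a single head's self-attention matrix by symmetrizing the weights and taking a maximum spanning tree; the helper name `attention_to_tree`, the symmetrization step, and the MST choice are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch: one plausible way to turn one self-attention head's
# weight matrix into an undirected tree over the sentence tokens.
# Not the paper's confirmed method.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def attention_to_tree(attn: np.ndarray):
    """attn: (n, n) self-attention weights for one head over n tokens.
    Returns a list of undirected edges forming a spanning tree."""
    # Symmetrize, since raw attention weights are directional.
    scores = (attn + attn.T) / 2.0
    # Self-attention of a token to itself cannot be a tree edge.
    np.fill_diagonal(scores, 0.0)
    # scipy computes a *minimum* spanning tree, so negate to maximize weight.
    mst = minimum_spanning_tree(-scores)
    rows, cols = mst.nonzero()
    return list(zip(rows.tolist(), cols.tolist()))

# Toy example: 4 tokens with made-up attention weights.
attn = np.array([
    [0.1, 0.6, 0.2, 0.1],
    [0.5, 0.1, 0.3, 0.1],
    [0.1, 0.4, 0.1, 0.4],
    [0.2, 0.1, 0.5, 0.2],
])
print(attention_to_tree(attn))  # e.g. [(0, 1), (1, 2), (2, 3)]
```

An undirected spanning tree is only one possible target structure; a directed variant (e.g., a maximum spanning arborescence) or a constituency-style binarization would be alternative readings of the same attention weights.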