Neutron: An Implementation of the Transformer Translation Model and its Variants
The Transformer translation model is easier to parallelize and achieves better performance than recurrent seq2seq models, which has made it popular in both industry and the research community. In this work we present Neutron, an implementation of the Transformer model and several of its variants from recent research. It is highly optimized, easy to modify, and delivers comparable performance with interesting features while remaining readable.
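To make the parallelization claim concrete, the sketch below (PyTorch, an illustration only and not code from Neutron; all function and tensor names are hypothetical) contrasts self-attention, whose scores for every position come from one batched matrix product, with a recurrent cell that must be stepped through the sequence one position at a time.

```python
import torch

def attention_scores_parallel(q, k):
    # q, k: (batch, seq_len, d_model) -- scores for every position are
    # computed in a single batched matrix product.
    return torch.softmax(q @ k.transpose(-2, -1) / k.size(-1) ** 0.5, dim=-1)

def recurrent_states_sequential(cell, x):
    # x: (batch, seq_len, d_model) -- an LSTM cell must be advanced one
    # timestep at a time because each state depends on the previous one.
    h = c = x.new_zeros(x.size(0), cell.hidden_size)
    states = []
    for t in range(x.size(1)):
        h, c = cell(x[:, t], (h, c))
        states.append(h)
    return torch.stack(states, dim=1)

if __name__ == "__main__":
    x = torch.randn(2, 5, 16)
    print(attention_scores_parallel(x, x).shape)                              # (2, 5, 5)
    print(recurrent_states_sequential(torch.nn.LSTMCell(16, 16), x).shape)    # (2, 5, 16)
```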
Methods
• Absolute Position Encodings
• Adam
• BPE
• Dense Connections
• Dropout
• Label Smoothing
• Layer Normalization
• Linear Layer
• LSTM
• Multi-Head Attention
• Position-Wise Feed-Forward Layer
• ReLU
• Residual Connection
• Scaled Dot-Product Attention
• Seq2Seq
• Sigmoid Activation
• Softmax
• Tanh Activation
• Transformer
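Several of the listed methods fit together in a standard Transformer encoder layer. The sketch below (PyTorch, a minimal illustration rather than Neutron's actual modules; the hyperparameters d_model=512, 8 heads, d_ff=2048, and dropout 0.1 are the commonly cited Transformer-base values, assumed here only for illustration) wires multi-head attention built on scaled dot-product attention, dropout, residual connections, layer normalization, and a position-wise feed-forward layer with ReLU into one layer.

```python
import torch
from torch import nn

class EncoderLayerSketch(nn.Module):
    """Minimal sketch of one Transformer encoder layer combining several of
    the methods listed above. Not Neutron's implementation."""

    def __init__(self, d_model=512, n_heads=8, d_ff=2048, p_drop=0.1):
        super().__init__()
        # Multi-head (scaled dot-product) self-attention.
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=p_drop,
                                          batch_first=True)
        # Position-wise feed-forward layer: two linear layers with ReLU.
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.drop = nn.Dropout(p_drop)

    def forward(self, x, pad_mask=None):
        # Self-attention sublayer with residual connection and layer norm.
        attn_out, _ = self.attn(x, x, x, key_padding_mask=pad_mask)
        x = self.norm1(x + self.drop(attn_out))
        # Feed-forward sublayer, again with residual connection and norm.
        return self.norm2(x + self.drop(self.ffn(x)))

if __name__ == "__main__":
    layer = EncoderLayerSketch()
    print(layer(torch.randn(2, 10, 512)).shape)  # torch.Size([2, 10, 512])
```

The remaining listed methods act outside such a layer: absolute position encodings and BPE on the input side, label smoothing on the loss, Adam as the optimizer, and LSTM/Seq2Seq with sigmoid and tanh activations belonging to the recurrent baselines the Transformer is compared against.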