HUBERT Untangles BERT to Improve Transfer across NLP Tasks
We introduce HUBERT, which combines the structured-representational power of Tensor-Product Representations (TPRs) with BERT, a pre-trained bidirectional Transformer language model. We show that there is shared structure between different NLP datasets that HUBERT, but not BERT, is able to learn and leverage. We validate the effectiveness of our model on the GLUE benchmark and the HANS dataset. Our experimental results show that untangling data-specific semantics from general language structure is key to better transfer among NLP tasks.
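To make the TPR-plus-BERT idea concrete, the sketch below shows one plausible way a TPR binding layer could sit on top of BERT token encodings: each token vector is softly mapped to a filler (symbol) and a role drawn from learned dictionaries, and the two are bound with an outer product. All dimensions, dictionary sizes, and projection matrices here are hypothetical, and a random tensor stands in for BERT hidden states; this is an illustrative sketch, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Stand-in for BERT output: one hidden vector per token (random placeholder).
seq_len, d_model = 8, 768
H = rng.standard_normal((seq_len, d_model))

# Hypothetical learned dictionaries of fillers (symbols) and roles, plus
# projections that score each token against each dictionary entry.
n_fillers, d_filler = 10, 32
n_roles, d_role = 6, 16
F = rng.standard_normal((n_fillers, d_filler))    # filler embeddings
R = rng.standard_normal((n_roles, d_role))        # role embeddings
W_f = rng.standard_normal((d_model, n_fillers))   # token -> filler scores
W_r = rng.standard_normal((d_model, n_roles))     # token -> role scores

# Soft selection: each token attends over the filler and role dictionaries.
fillers = softmax(H @ W_f) @ F   # (seq_len, d_filler)
roles = softmax(H @ W_r) @ R     # (seq_len, d_role)

# TPR binding: outer product of the selected filler and role for each token.
tpr = np.einsum('tf,tr->tfr', fillers, roles)     # (seq_len, d_filler, d_role)

# Flatten per-token TPRs so a downstream task head can consume them.
tpr_flat = tpr.reshape(seq_len, d_filler * d_role)
print(tpr_flat.shape)  # (8, 512)
```

The intuition suggested by the abstract is that such a factored representation separates general language structure (roles) from data-specific content (fillers), and it is this separation that supports transfer across NLP tasks.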
Methods
Absolute Position Encodings
Adam
Attention Dropout
BERT
BPE
Dense Connections
Dropout
GELU
Label Smoothing
Layer Normalization
Linear Layer
Linear Warmup With Linear Decay
Multi-Head Attention
Position-Wise Feed-Forward Layer
ReLU
Residual Connection
Scaled Dot-Product Attention (see the sketch after this list)
Softmax
Transformer
Weight Decay
WordPiece
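Most entries above are standard Transformer building blocks. As one example, here is a minimal sketch of Scaled Dot-Product Attention, i.e. softmax(QK^T / sqrt(d_k)) V; the tensor shapes and random inputs are illustrative assumptions, not values from the paper.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for a single attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)         # (q_len, k_len)
    scores = scores - scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V                                      # (q_len, d_v)

# Illustrative shapes: 4 positions, 64-dimensional head.
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 64))
K = rng.standard_normal((4, 64))
V = rng.standard_normal((4, 64))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 64)
```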