Multi-Task Deep Neural Networks for Natural Language Understanding

In this paper, we present a Multi-Task Deep Neural Network (MT-DNN) for learning representations across multiple natural language understanding (NLU) tasks. MT-DNN not only leverages large amounts of cross-task data, but also benefits from a regularization effect that leads to more general representations in order to adapt to new tasks and domains. MT-DNN extends the model proposed in Liu et al. (2015) by incorporating a pre-trained bidirectional transformer language model, known as BERT (Devlin et al., 2018). MT-DNN obtains new state-of-the-art results on ten NLU tasks, including SNLI, SciTail, and eight out of nine GLUE tasks, pushing the GLUE benchmark to 82.7% (2.2% absolute improvement). We also demonstrate using the SNLI and SciTail datasets that the representations learned by MT-DNN allow domain adaptation with substantially fewer in-domain labels than the pre-trained BERT representations. The code and pre-trained models are publicly available at https://github.com/namisan/mt-dnn.

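For readers who want a concrete picture of the architecture the abstract describes, the sketch below is a minimal, hypothetical rendering of the core MT-DNN idea: a shared pre-trained BERT encoder topped by small task-specific output heads, trained by sampling a task for each mini-batch. It is not the authors' released implementation (see the repository linked above); the class and function names are invented for illustration, and the Hugging Face `transformers` BertModel is assumed as a stand-in for the pre-trained encoder. Specialized heads used in the paper (e.g., pairwise ranking for relevance tasks) are omitted.

```python
# Minimal sketch of the MT-DNN idea: a shared pre-trained BERT encoder with
# task-specific classification heads, trained by sampling one task per mini-batch.
# Hypothetical names; not the authors' released implementation.
import random
import torch
import torch.nn as nn
from transformers import BertModel

class MTDNNSketch(nn.Module):
    def __init__(self, task_num_labels, bert_name="bert-base-uncased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(bert_name)  # shared across all tasks
        hidden = self.encoder.config.hidden_size
        # One lightweight head per task (e.g. MNLI, QQP, SST-2, CoLA).
        self.heads = nn.ModuleDict({
            task: nn.Linear(hidden, n_labels)
            for task, n_labels in task_num_labels.items()
        })

    def forward(self, task, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] representation
        return self.heads[task](cls)

def train_step(model, optimizer, batches_by_task, loss_fn=nn.CrossEntropyLoss()):
    # Multi-task step: pick a task at random, then update the shared encoder
    # and that task's head on one of its mini-batches.
    task = random.choice(list(batches_by_task))
    input_ids, attention_mask, labels = batches_by_task[task]
    logits = model(task, input_ids, attention_mask)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return task, loss.item()
```

Sampling a task per mini-batch mixes the training signals of all tasks through the shared encoder, which is the source of the cross-task regularization effect the abstract refers to.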
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Natural Language Inference | MultiNLI | MT-DNN | Matched | 86.7 | # 22 |
| Natural Language Inference | MultiNLI | MT-DNN | Mismatched | 86.0 | # 16 |
| Paraphrase Identification | Quora Question Pairs | MT-DNN | Accuracy | 89.6 | # 8 |
| Paraphrase Identification | Quora Question Pairs | MT-DNN | F1 | 72.4 | # 10 |
| Natural Language Inference | SciTail | MT-DNN | Accuracy | 94.1 | # 2 |
| Natural Language Inference | SNLI | MT-DNN | % Test Accuracy | 91.6 | # 7 |
| Natural Language Inference | SNLI | MT-DNN | % Train Accuracy | 97.2 | # 4 |
| Natural Language Inference | SNLI | MT-DNN | Parameters | 330m | # 4 |
| Natural Language Inference | SNLI | Ntumpha | % Test Accuracy | 90.5 | # 9 |
| Natural Language Inference | SNLI | Ntumpha | % Train Accuracy | 99.1 | # 2 |
| Natural Language Inference | SNLI | Ntumpha | Parameters | 220 | # 3 |
| Sentiment Analysis | SST-2 Binary classification | MT-DNN | Accuracy | 95.6 | # 22 |

Results from Other Papers


| Task | Dataset | Model | Metric Name | Metric Value | Rank |
|------|---------|-------|-------------|--------------|------|
| Linguistic Acceptability | CoLA | MT-DNN | Accuracy | 68.4% | # 14 |