TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection

We propose TANDA, an effective technique for fine-tuning pre-trained Transformer models for natural language tasks. Specifically, we first transfer a pre-trained model into a model for a general task by fine-tuning it on a large, high-quality dataset. We then perform a second fine-tuning step to adapt the transferred model to the target domain. We demonstrate the benefits of our approach for answer sentence selection, a well-known inference task in Question Answering. We built a large-scale dataset, ASNQ, to enable the transfer step, exploiting the Natural Questions dataset. Our approach establishes the state of the art on two well-known benchmarks, WikiQA and TREC-QA, achieving MAP scores of 92% and 94.3%, respectively, largely outperforming the previous highest scores of 83.4% and 87.5% obtained in very recent work. We empirically show that TANDA generates more stable and robust models, reducing the effort required to select optimal hyper-parameters. Additionally, we show that the transfer step of TANDA makes the adaptation step more robust to noise, enabling more effective use of noisy datasets for fine-tuning. Finally, we confirm the positive impact of TANDA in an industrial setting, using domain-specific datasets subject to different types of noise.
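As a concrete illustration of the two-step recipe, here is a minimal sketch assuming the Hugging Face Transformers Trainer API; the loaders load_asnq and load_target are hypothetical placeholders for ASNQ and the target-domain data (e.g., WikiQA), each yielding question/candidate/label examples.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "roberta-base"  # the paper also reports RoBERTa-large results

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def encode(batch):
    # Question and candidate sentence form a standard sentence pair;
    # label 1 means the candidate sentence answers the question.
    return tokenizer(batch["question"], batch["candidate"],
                     truncation=True, padding="max_length", max_length=128)

# Step 1 -- transfer: fine-tune on the large, general ASNQ dataset.
asnq = load_asnq().map(encode, batched=True)          # hypothetical loader
transfer_args = TrainingArguments(output_dir="tanda-transfer",
                                  num_train_epochs=1,
                                  per_device_train_batch_size=32)
Trainer(model=model, args=transfer_args, train_dataset=asnq).train()

# Step 2 -- adapt: a second fine-tuning pass on the target domain
# (e.g., WikiQA or TREC-QA), starting from the step-1 weights.
target = load_target().map(encode, batched=True)      # hypothetical loader
adapt_args = TrainingArguments(output_dir="tanda-adapt",
                               num_train_epochs=3,
                               per_device_train_batch_size=32)
Trainer(model=model, args=adapt_args, train_dataset=target).train()
```

At inference time, the fine-tuned model scores each candidate sentence for a question, and candidates are ranked by that score.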


Datasets


Introduced in the Paper:

ASNQ

Used in the Paper:

GLUE, QNLI, Natural Questions, WikiQA, TrecQA

Results from the Paper


Ranked #2 on Question Answering on TrecQA (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank | Uses Extra Training Data |
|------|---------|-------|--------|-------|-------------|--------------------------|
| Question Answering | TrecQA | TANDA-RoBERTa (ASNQ, TREC-QA) | MAP | 0.943 | #2 | Yes |
| Question Answering | TrecQA | TANDA-RoBERTa (ASNQ, TREC-QA) | MRR | 0.974 | #3 | Yes |
| Question Answering | WikiQA | TANDA-RoBERTa (ASNQ, WikiQA) | MAP | 0.920 | #3 | Yes |
| Question Answering | WikiQA | TANDA-RoBERTa (ASNQ, WikiQA) | MRR | 0.933 | #2 | Yes |
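The MAP and MRR values above are rank-based metrics computed per question over the ranked candidate sentences and then averaged. A minimal sketch of their computation, using hypothetical model scores and gold labels:

```python
def average_precision(relevance):
    """AP for one ranked list of 0/1 relevance labels."""
    hits, score = 0, 0.0
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            score += hits / rank
    return score / max(hits, 1)

def reciprocal_rank(relevance):
    """1/rank of the first correct answer, 0 if none."""
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def map_mrr(per_question):
    """per_question: list of (scores, labels) pairs, one pair per question."""
    ranked = [
        [lab for _, lab in sorted(zip(scores, labels), reverse=True)]
        for scores, labels in per_question
    ]
    n = len(ranked)
    return (sum(average_precision(r) for r in ranked) / n,
            sum(reciprocal_rank(r) for r in ranked) / n)

# Example: two questions, each with model scores and 0/1 gold labels.
print(map_mrr([([0.9, 0.2, 0.7], [1, 0, 0]),
               ([0.1, 0.8, 0.3], [0, 0, 1])]))  # -> (0.75, 0.75)
```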
