DIBERT: Dependency Injected Bidirectional Encoder Representations from Transformers

IEEE SSCI 2021  ·  Abdul Wahab, Rafet Sifa

Prior research in Natural Language Processing (NLP) has shown that incorporating the syntactic structure of a sentence, via its dependency parse tree, while training a representation learning model improves performance on downstream tasks. However, most of these approaches use the dependency parse tree to learn task-specific word representations rather than generic representations. In this paper, we propose a new model named DIBERT, which stands for Dependency Injected Bidirectional Encoder Representations from Transformers. DIBERT is a variation of BERT that, in addition to Masked Language Modeling (MLM) and Next Sentence Prediction (NSP), incorporates a third objective called Parent Prediction (PP). PP injects the syntactic structure of the dependency tree during pre-training, so that DIBERT produces syntax-aware generic representations. We use the WikiText-103 benchmark dataset to pre-train both the original BERT (BERT-Base) and the proposed DIBERT model. After fine-tuning, we observe that DIBERT outperforms BERT-Base on various NLP downstream tasks, including Semantic Similarity, Natural Language Inference and Sentiment Analysis, suggesting that incorporating dependency information when learning textual representations can improve the quality of the learned representations.
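To make the Parent Prediction objective more concrete, below is a minimal PyTorch sketch of one way a parent-prediction head could sit on top of the encoder's hidden states. The class and variable names (ParentPredictionHead, parent_ids), the biaffine-style scoring, and the unweighted sum of the three losses are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a Parent Prediction (PP) head on top of BERT hidden states.
# All names and design choices here are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ParentPredictionHead(nn.Module):
    """Scores, for every token, which other token is its dependency-tree parent."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # Separate projections for the "child" and candidate "parent" roles.
        self.child_proj = nn.Linear(hidden_size, hidden_size)
        self.parent_proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size) from the BERT encoder.
        child = self.child_proj(hidden_states)    # (B, L, H)
        parent = self.parent_proj(hidden_states)  # (B, L, H)
        # logits[b, i, j] = score that token j is the parent of token i.
        return torch.einsum("bih,bjh->bij", child, parent)


def parent_prediction_loss(logits: torch.Tensor, parent_ids: torch.Tensor) -> torch.Tensor:
    # parent_ids: (batch, seq_len) index of each token's parent in the sequence;
    # positions without a parent label (root, padding, special tokens) set to -100.
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        parent_ids.reshape(-1),
        ignore_index=-100,
    )


# During pre-training, the PP loss would be combined with the standard BERT objectives,
# e.g. loss = mlm_loss + nsp_loss + pp_loss (the weighting is an assumption).
```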
