DIBERT: Dependency Injected Bidirectional Encoder Representations from Transformers

Prior research in the area of Natural Language Processing (NLP) has shown that including the syntactic structure of a sentence via a dependency parse tree while training a representation learning model improves performance on downstream tasks. However, most of these modeling approaches use the dependency parse tree of sentences to learn task-specific word representations rather than generic representations. In this paper, we propose a new model named DIBERT, which stands for Dependency Injected Bidirectional Encoder Representations from Transformers. DIBERT is a variation of BERT that, apart from Masked Language Modeling (MLM) and Next Sentence Prediction (NSP), also incorporates an additional third objective called Parent Prediction (PP). PP injects the syntactic structure of a dependency tree while pre-training DIBERT, which yields syntax-aware generic representations. We use the WikiText-103 benchmark dataset to pre-train both the original BERT (BERT-Base) and the proposed DIBERT models. After fine-tuning, we observe that DIBERT performs better than BERT-Base on various NLP downstream tasks, including Semantic Similarity, Natural Language Inference and Sentiment Analysis, hinting at the fact that incorporating dependency information when learning textual representations can improve the quality of the learned representations.
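To make the three-objective pre-training setup concrete, the sketch below adds a Parent Prediction head next to the usual MLM and NSP heads on top of a BERT encoder and sums the three cross-entropy losses. This is an illustrative reconstruction, not the authors' implementation: the head architectures, the equal loss weighting, and the exact form of PP (here modeled as pointing each token to the position of its dependency head) are assumptions; the class name `DibertSketch` and all tensor names are hypothetical.

```python
# Hedged sketch: BERT-style encoder with MLM + NSP + Parent Prediction (PP) losses.
import torch
import torch.nn as nn
from transformers import BertConfig, BertModel

class DibertSketch(nn.Module):
    def __init__(self, config: BertConfig):
        super().__init__()
        self.bert = BertModel(config)
        hidden = config.hidden_size
        self.mlm_head = nn.Linear(hidden, config.vocab_size)  # masked language modeling
        self.nsp_head = nn.Linear(hidden, 2)                   # next sentence prediction
        # PP (assumed form): bilinear scorer over token pairs, where token i
        # scores every token j as its candidate dependency parent.
        self.pp_bilinear = nn.Linear(hidden, hidden, bias=False)

    def forward(self, input_ids, attention_mask, mlm_labels, nsp_labels, parent_ids):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        seq = out.last_hidden_state              # (batch, seq_len, hidden)
        cls = seq[:, 0]                          # [CLS] representation

        mlm_logits = self.mlm_head(seq)          # (batch, seq_len, vocab)
        nsp_logits = self.nsp_head(cls)          # (batch, 2)
        pp_logits = torch.matmul(self.pp_bilinear(seq), seq.transpose(1, 2))  # (batch, seq_len, seq_len)

        ce = nn.CrossEntropyLoss(ignore_index=-100)  # -100 marks positions excluded from a loss
        loss = (
            ce(mlm_logits.view(-1, mlm_logits.size(-1)), mlm_labels.view(-1))
            + ce(nsp_logits, nsp_labels)
            + ce(pp_logits.view(-1, pp_logits.size(-1)), parent_ids.view(-1))
        )
        return loss

# Toy usage with random inputs and a deliberately small config.
config = BertConfig(vocab_size=30522, hidden_size=128, num_hidden_layers=2,
                    num_attention_heads=2, intermediate_size=256)
model = DibertSketch(config)
B, T = 2, 16
input_ids = torch.randint(0, config.vocab_size, (B, T))
attention_mask = torch.ones(B, T, dtype=torch.long)
mlm_labels = torch.full((B, T), -100)
mlm_labels[:, 3] = input_ids[:, 3]               # pretend position 3 was masked
nsp_labels = torch.randint(0, 2, (B,))
parent_ids = torch.randint(0, T, (B, T))         # gold head position per token (from a parser)
loss = model(input_ids, attention_mask, mlm_labels, nsp_labels, parent_ids)
loss.backward()
```

In this sketch the dependency parents would come from an external parser applied to the pre-training corpus; how DIBERT obtains and aligns parse trees with WordPiece tokens is described in the paper itself.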

  • Published in:
    2021 IEEE Symposium Series on Computational Intelligence (SSCI)
  • Type:
    Inproceedings
  • Authors:
    A. Wahab, R. Sifa
  • Year:
    2021

Citation information

A. Wahab, R. Sifa: DIBERT: Dependency Injected Bidirectional Encoder Representations from Transformers, 2021 IEEE Symposium Series on Computational Intelligence (SSCI), 2021, https://doi.org/10.1109/SSCI50451.2021.9659898