Detecting Contradictions in German Text: A Comparative Study

This study presents a comparative analysis of Natural Language Inference (NLI), specifically contradiction detection, in German text. To that end, four state-of-the-art model paradigms are compared with respect to their performance on a machine-translated version of the well-known Stanford Natural Language Inference dataset (SNLI), as well as on the German test split of the Cross-Lingual NLI corpus (XNLI). One main focus is assessing whether the models are robust to the choice of data and could also be applied in a real-world scenario. XLM-RoBERTa significantly outperforms the other models, most likely due to its extensive pre-training and multi-head attention layers. Still, the models do not generalize very well to the XNLI data, indicating that the training corpus is too limited in topics and contradiction types. We plan to address this issue in our future work.
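For readers who want to try the task setup, the sketch below scores a German premise-hypothesis pair with a publicly available XNLI-fine-tuned XLM-RoBERTa checkpoint from the Hugging Face Hub. This is not the implementation or checkpoint used in the paper; the model name and its label set are assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): contradiction detection
# on a German sentence pair with an XNLI-fine-tuned XLM-RoBERTa checkpoint.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed publicly available checkpoint fine-tuned on XNLI; swap in your own model.
MODEL_NAME = "joeddav/xlm-roberta-large-xnli"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

premise = "Ein Mann spielt Gitarre auf der Straße."  # "A man plays guitar on the street."
hypothesis = "Niemand macht Musik."                  # "Nobody is making music."

# NLI models take the premise and hypothesis as a single paired input.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Read the label names from the model config instead of hardcoding the order.
probs = logits.softmax(dim=-1).squeeze()
for idx, p in enumerate(probs):
    print(f"{model.config.id2label[idx]}: {p:.3f}")
```

In this example the "contradiction" class should receive the highest probability, since the hypothesis denies what the premise states.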

  • Published in:
    2021 IEEE Symposium Series on Computational Intelligence (SSCI)
  • Type:
    Inproceedings
  • Authors:
    L. Pucknat, M. Pielka, R. Sifa
  • Year:
    2021

Citation information

L. Pucknat, M. Pielka, R. Sifa: Detecting Contradictions in German Text: A Comparative Study, 2021 IEEE Symposium Series on Computational Intelligence (SSCI), 2021, https://doi.org/10.1109/SSCI50451.2021.9659881