This study presents a comparison of approaches to Natural Language Inference (NLI), specifically contradiction detection, in German text. To that end, four state-of-the-art model paradigms are compared with respect to their performance on a machine-translated version of the well-known Stanford Natural Language Inference dataset (SNLI), as well as on the German test split of the Cross-Lingual NLI corpus (XNLI). One main focus is assessing whether the models are robust to the choice of data and could also be applied in a real-world scenario. XLM-RoBERTa significantly outperforms the other models, most likely owing to its extensive pre-training and multi-head attention layers. Still, the models do not generalize well to the XNLI data, indicating that the training corpus covers too narrow a range of topics and contradiction types. We plan to address this issue in future work.