Utilizing representation learning for robust text classification under dataset shift

Within One-vs-Rest (OVR) classification, a classifier differentiates a single class of interest (COI) from the rest, i.e., any other class. By extending the scope of the rest class to corruptions (dataset shift), aspects of outlier detection gain relevance. In this work, we show that adversarially trained autoencoders (ATA), representative of autoencoder-based outlier detection methods, yield substantial robustness improvements over traditional neural network methods such as multi-layer perceptrons (MLPs) and common ensemble methods, while maintaining competitive classification performance. In contrast, our results also reveal that deep learning methods optimized solely for classification tend to fail completely when exposed to dataset shift.
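The core idea behind autoencoder-based outlier detection, as alluded to in the abstract, can be illustrated with a minimal sketch: train an autoencoder only on class-of-interest samples and flag inputs with high reconstruction error as belonging to the rest class. The toy data, the bottleneck size, and the use of scikit-learn's `MLPRegressor` as a stand-in autoencoder are all illustrative assumptions; the paper's ATA method additionally uses adversarial training, which is not shown here.

```python
# Hedged sketch of reconstruction-error outlier scoring for OVR classification.
# NOT the paper's ATA model: toy Gaussian data and an MLPRegressor stand in
# for a real text encoder and adversarial training.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
coi = rng.normal(0.0, 0.5, size=(200, 8))     # class-of-interest samples
shifted = rng.normal(3.0, 0.5, size=(50, 8))  # corrupted / shifted samples

# Autoencoder surrogate: regress inputs onto themselves through a bottleneck.
ae = MLPRegressor(hidden_layer_sizes=(3,), max_iter=2000, random_state=0)
ae.fit(coi, coi)

def recon_error(x):
    # Per-sample mean squared reconstruction error.
    return np.mean((ae.predict(x) - x) ** 2, axis=1)

# Threshold from in-distribution errors; larger error -> rest class / outlier.
tau = np.quantile(recon_error(coi), 0.95)
flagged = np.mean(recon_error(shifted) > tau)
print(flagged)
```

Because the autoencoder only ever learned to reconstruct COI-like inputs, shifted samples reconstruct poorly and most exceed the threshold, which is the property that makes such methods robust under dataset shift.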

  • Published in:
    CEUR Workshop at Lernen. Wissen. Daten. Analysen. (LWDA)
  • Type:
    Inproceedings
  • Authors:
    M. Lübbering, M. Gebauer, R. Ramamurthy, M. Pielka, C. Bauckhage, R. Sifa
  • Year:
    2021

Citation information

M. Lübbering, M. Gebauer, R. Ramamurthy, M. Pielka, C. Bauckhage, R. Sifa: Utilizing representation learning for robust text classification under dataset shift, CEUR Workshop at Lernen. Wissen. Daten. Analysen. (LWDA), 2021.