In One-vs-Rest (OVR) classification, a classifier differentiates a single class of interest (COI) from the rest, i.e. any other class. By extending the scope of the rest class to corruptions (dataset shift), aspects of outlier detection gain relevance. In this work, we show that adversarially trained autoencoders (ATAs), representative of autoencoder-based outlier detection methods, yield substantial robustness improvements over traditional neural network methods such as multi-layer perceptrons (MLPs) and common ensemble methods, while maintaining competitive classification performance. In contrast, our results also reveal that deep learning methods optimized solely for classification tend to fail completely when exposed to dataset shift.
Utilizing representation learning for robust text classification under dataset shift
Type: Inproceedings
Author: M. Lübbering, M. Gebauer, R. Ramamurthy, M. Pielka, C. Bauckhage, R. Sifa
Journal: CEUR Workshop at LWDA
Booktitle: CEUR Workshop at Lernen. Wissen. Daten. Analysen. (LWDA)
Year: 2021
Citation information
M. Lübbering, M. Gebauer, R. Ramamurthy, M. Pielka, C. Bauckhage, R. Sifa:
Utilizing representation learning for robust text classification under dataset shift.
CEUR Workshop at Lernen. Wissen. Daten. Analysen. (LWDA), 2021.
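The one-vs-rest setup described in the abstract can be sketched with a toy centroid-based scorer in plain Python: each class gets a binary scorer that separates its class of interest (COI) from the pooled "rest" documents. This is an illustrative stand-in for the OVR framing only, not the paper's adversarially trained autoencoder; the bag-of-words features and centroid scoring are assumptions for the sketch.

```python
from collections import Counter

def featurize(text):
    """Bag-of-words term counts for a document."""
    return Counter(text.lower().split())

def centroid(docs):
    """Average term-count vector of a set of documents."""
    total = Counter()
    for d in docs:
        total.update(featurize(d))
    return {w: c / len(docs) for w, c in total.items()}

def dot(vec, cen):
    return sum(n * cen.get(w, 0.0) for w, n in vec.items())

def train_ovr(labeled_docs):
    """One binary scorer per class: the class's own centroid (the COI)
    paired against the centroid of all remaining documents (the rest)."""
    by_class = {}
    for text, label in labeled_docs:
        by_class.setdefault(label, []).append(text)
    model = {}
    for label, docs in by_class.items():
        rest = [d for lb, ds in by_class.items() if lb != label for d in ds]
        model[label] = (centroid(docs), centroid(rest))
    return model

def predict(model, text):
    """Pick the class whose COI-vs-rest margin is largest."""
    vec = featurize(text)
    return max(model, key=lambda lb: dot(vec, model[lb][0]) - dot(vec, model[lb][1]))

# Toy two-class corpus (hypothetical example data).
docs = [("cheap pills buy now", "spam"),
        ("buy cheap watches now", "spam"),
        ("meeting agenda attached", "ham"),
        ("project meeting tomorrow", "ham")]
model = train_ovr(docs)
print(predict(model, "buy cheap pills"))  # spam
```

In the robustness setting the paper studies, the "rest" side of each binary split would additionally absorb corrupted inputs, which is where outlier-detection methods such as ATAs come into play.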