DortmundAI at LeQua 2022: Regularized SLD
The LeQua 2022 competition was conducted to evaluate different quantification methods on text data. In the following, we present the solution of our team “DortmundAI”, which ranked first in the multi-class quantification task T1B. This solution is based on a modification of the well-known Saerens-Latinne-Decaestecker (SLD) method. Here, the SLD method, which is based on expectation maximization, is extended by a regularization technique. Additional experiments with the test data, which we carried out after the competition closed, reveal that our excellent ranking stems primarily from an extensive hyperparameter tuning of the classifier.
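The SLD method named in the abstract re-estimates class priors on unlabeled data via expectation maximization over a classifier's posteriors. The following is a minimal sketch of plain SLD for illustration only; the paper's regularization extension is not reproduced here, and all function and parameter names are our own.

```python
import numpy as np

def sld(posteriors, train_priors, n_iter=100, tol=1e-6):
    """Plain Saerens-Latinne-Decaestecker EM (sketch, without the
    paper's regularization): re-estimate class priors from classifier
    posteriors on an unlabeled test set.

    posteriors:   (n_samples, n_classes) array of P(y|x) on the test set
    train_priors: (n_classes,) class prevalences in the training set
    """
    p = train_priors.copy()
    for _ in range(n_iter):
        # E-step: rescale each posterior by the ratio of the current
        # prior estimate to the training prior, then renormalize rows.
        adjusted = posteriors * (p / train_priors)
        adjusted /= adjusted.sum(axis=1, keepdims=True)
        # M-step: the new prior estimate is the mean adjusted posterior.
        p_new = adjusted.mean(axis=0)
        if np.abs(p_new - p).max() < tol:
            return p_new
        p = p_new
    return p
```

Each iteration shifts the estimated prevalences toward values consistent with the observed posteriors; on quantification tasks such as LeQua T1B, the converged estimate serves as the predicted class distribution of the test set.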
- Published in: Conference and Labs of the Evaluation Forum
- Type: Inproceedings
- Authors: Senz, Martin; Bunse, Mirko
- Year: 2022
Citation information
Senz, Martin; Bunse, Mirko: DortmundAI at LeQua 2022: Regularized SLD. Conference and Labs of the Evaluation Forum, 2022. https://www.semanticscholar.org/paper/DortmundAI-at-LeQua-2022:-Regularized-SLD-Senz-Bunse/035d4e39071c46872732e2c9322baeaf7466ebf3
@Inproceedings{Senz.Bunse.2022a,
author={Senz, Martin and Bunse, Mirko},
title={DortmundAI at LeQua 2022: Regularized SLD},
booktitle={Conference and Labs of the Evaluation Forum},
url={https://www.semanticscholar.org/paper/DortmundAI-at-LeQua-2022:-Regularized-SLD-Senz-Bunse/035d4e39071c46872732e2c9322baeaf7466ebf3},
year={2022},
abstract={The LeQua 2022 competition was conducted with the purpose of evaluating different quantification methods on text data. In the following, we present the solution of our team “DortmundAI”, which ranked first in the multi-class quantification task T1B. This solution is based on a modification of the well-known Saerens-Latinne-Decaestecker (SLD) method. Here, the SLD method, which is based on...}}