Controlled Randomness Improves the Performance of Transformer Models
During the pre-training step of natural language models, the main objective is to learn a general representation of the pre-training dataset, usually requiring large amounts of textual data to capture the complexity and diversity of natural language. In contrast, the data available for a specific downstream task is in most cases dwarfed by this pre-training dataset, especially in domains where data is scarce. We introduce controlled randomness, i.e. noise, into the training process to improve the fine-tuning of language models, and explore the effect of targeted noise added to the parameters of these models. We find that adding such noise can improve performance on our two downstream tasks: joint named entity recognition and relation extraction, and text summarization.
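The abstract describes injecting controlled noise into model parameters during fine-tuning. As a rough illustration only, and not the paper's actual procedure, the minimal PyTorch sketch below assumes a simple scheme of perturbing trainable parameters with Gaussian noise before each update step; the noise magnitude, schedule, and which parameters are targeted are placeholders.

```python
import torch
from torch import nn


def add_parameter_noise(model: nn.Module, noise_std: float = 1e-5) -> None:
    """Perturb each trainable parameter in-place with Gaussian noise.

    Illustrative stand-in for "controlled randomness"; the paper's exact
    noise design is not reproduced here.
    """
    with torch.no_grad():
        for param in model.parameters():
            if param.requires_grad:
                param.add_(torch.randn_like(param) * noise_std)


# Hypothetical usage inside a fine-tuning loop:
# for batch in dataloader:
#     add_parameter_noise(model, noise_std=1e-5)  # inject noise before the update
#     loss = model(**batch).loss
#     loss.backward()
#     optimizer.step()
#     optimizer.zero_grad()
```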
- Published in: 2023 International Conference on Machine Learning and Applications (ICMLA)
- Type: Inproceedings
- Authors: Deußer, Tobias; Zhao, Cong; Krämer, Wolfgang; Leonhard, David; Bauckhage, Christian; Sifa, Rafet
- Year: 2023
- Source: https://ieeexplore.ieee.org/document/10460040
Citation information
Deußer, Tobias; Zhao, Cong; Krämer, Wolfgang; Leonhard, David; Bauckhage, Christian; Sifa, Rafet: Controlled Randomness Improves the Performance of Transformer Models. In: 2023 International Conference on Machine Learning and Applications (ICMLA), 2023. https://ieeexplore.ieee.org/document/10460040
@inproceedings{Deusser.etal.2023d,
  author    = {Deußer, Tobias and Zhao, Cong and Krämer, Wolfgang and Leonhard, David and Bauckhage, Christian and Sifa, Rafet},
  title     = {Controlled Randomness Improves the Performance of Transformer Models},
  booktitle = {2023 International Conference on Machine Learning and Applications (ICMLA)},
  url       = {https://ieeexplore.ieee.org/document/10460040},
  year      = {2023},
  abstract  = {During the pre-training step of natural language models, the main objective is to learn a general representation of the pre-training dataset, usually requiring large amounts of textual data to capture the complexity and diversity of natural language. Contrasting this, in most cases, the size of the data available to solve the specific downstream task is often dwarfed by the aforementioned...}
}