Language Model Transformers as Evaluators for Open-Domain Dialogues

Author: R. Nedelchev, J. Lehmann, R. Usbeck
Journal: Proceedings of the 28th International Conference on Computational Linguistics
Year: 2020

Citation information

R. Nedelchev, J. Lehmann, R. Usbeck,
Proceedings of the 28th International Conference on Computational Linguistics,
2020,
6797–6808,
International Committee on Computational Linguistics,
http://dx.doi.org/10.18653/v1/2020.coling-main.599

Computer-based systems for communicating with humans have been a cornerstone of AI research since the 1950s. So far, the most effective way to assess the quality of the dialogues these systems produce is resource-intensive manual labor rather than automated means. In this work, we investigate whether language models (LMs) based on transformer neural networks can indicate the quality of a conversation. In a general sense, language models are methods that learn to predict one or more words given an existing context. Due to their unsupervised nature, they are candidates for efficient, automatic indication of dialogue quality. We demonstrate that the language models' output correlates positively with the scores of human evaluators. We also provide some insights into their behavior and inner workings in a conversational context.
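As a toy illustration of the underlying idea (a sketch, not the paper's exact method): a language model assigns a probability to each token given its context, and those per-token probabilities can be aggregated into a perplexity score, where lower perplexity suggests a more predictable, fluent utterance. The probability values below are hypothetical, chosen only to show the mechanics.

```python
import math

def perplexity(token_probs):
    """Perplexity of a token sequence from its per-token probabilities.

    Lower perplexity means the LM found the text more predictable,
    which this line of work uses as a proxy for dialogue quality.
    """
    avg_neg_log_prob = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log_prob)

# Hypothetical per-token probabilities an LM might assign to two replies:
fluent = [0.30, 0.25, 0.40, 0.35]      # coherent reply: higher probabilities
gibberish = [0.02, 0.01, 0.05, 0.03]   # incoherent reply: lower probabilities

assert perplexity(fluent) < perplexity(gibberish)
```

In practice one would obtain the per-token probabilities from a pretrained transformer LM run over the dialogue turn; the comparison of scores against human judgments is what the paper's correlation analysis measures.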