Pitfalls of Conversational LLMs on News Debiasing

This paper addresses debiasing in news editing and evaluates the effectiveness of conversational Large Language Models (LLMs) on this task. We designed an evaluation checklist tailored to news editors’ perspectives, obtained generated texts from three popular conversational models on a subset of a publicly available media bias dataset, and evaluated the texts against the checklist. Furthermore, we examined the models as evaluators of the quality of debiased outputs. Our findings indicate that none of the LLMs is perfect at debiasing. Notably, some models, including ChatGPT, introduced unnecessary changes that may affect the author’s style and create misinformation. Lastly, we show that the models do not evaluate the quality of debiased outputs as proficiently as domain experts.
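To give a concrete sense of the kind of debiasing query evaluated in the study, the sketch below prompts a conversational model to rewrite a loaded news sentence. This is a minimal illustration only: the paper does not publish its exact prompts or setup, so the model name, prompt wording, and example sentence here are assumptions, not the authors' method.

```python
# Illustrative sketch only; the prompt, model choice, and example sentence
# are hypothetical and not taken from the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A hypothetical sentence with loaded language, similar in spirit to
# examples found in media bias datasets.
biased_sentence = (
    "The radical senator rammed her reckless spending bill through Congress."
)

response = client.chat.completions.create(
    model="gpt-4",  # one of several conversational models one might test
    messages=[
        {
            "role": "system",
            "content": (
                "You are a news editor. Rewrite the user's sentence to remove "
                "biased or loaded language while preserving the facts and the "
                "author's style. Do not add or remove information."
            ),
        },
        {"role": "user", "content": biased_sentence},
    ],
)

# Print the model's debiased rewrite for manual checklist-based review.
print(response.choices[0].message.content)
```

The paper's central finding is that such outputs still need expert review: models may over-edit, altering style or introducing inaccuracies, which is exactly what a checklist-based evaluation is meant to catch.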

  • Published in:
    The First Workshop on Language-driven Deliberation Technology
  • Type:
    Inproceedings
  • Authors:
    Schlicht, Ipek Baris; Altiok, Defne; Taouk, Maryanne; Flek, Lucie
  • Year:
    2024

Citation information

Schlicht, Ipek Baris; Altiok, Defne; Taouk, Maryanne; Flek, Lucie: Pitfalls of Conversational LLMs on News Debiasing. In: The First Workshop on Language-driven Deliberation Technology, European Language Resources Association (ELRA), 2024.

Associated Lamarr Researchers

Prof. Dr. Lucie Flek

Area Chair NLP