The Self-Perception and Political Biases of ChatGPT

This contribution analyzes the self-perception and political biases of OpenAI’s Large Language Model ChatGPT. In light of the first small-scale reports and studies claiming that ChatGPT is politically biased towards progressive and libertarian points of view, this contribution aims to provide further clarity on the subject. Although political bias and affiliation are hard to define and lack an agreed-upon measure for quantification, this contribution examines the issue by having ChatGPT respond to commonly used measures of political bias. In addition, measures for personality traits that have previously been linked to political affiliations were examined. More specifically, ChatGPT was asked to answer the questions posed by the political compass test as well as similar questionnaires specific to the respective politics of the G7 member states. These eight tests were repeated ten times each and indicate that ChatGPT holds a bias towards progressive views. The political compass test revealed a bias towards progressive and libertarian views, supporting the claims of prior research. The political questionnaires for the G7 member states indicated a bias towards progressive views but no significant bias between authoritarian and libertarian views, contradicting the findings of prior reports. In addition, ChatGPT’s Big Five personality traits were tested using the OCEAN test, and its personality type was queried using the Myers-Briggs Type Indicator (MBTI) test. Finally, the maliciousness of ChatGPT was evaluated using the Dark Factor test. These three tests were also repeated ten times each, revealing that ChatGPT perceives itself as highly open and agreeable, has the Myers-Briggs personality type ENFJ, and is among the test-takers with the least pronounced dark traits.

  • Published in:
    Human Behavior and Emerging Technologies
  • Type:
    Article
  • Authors:
    Rutinowski, Jérôme; Franke, Sven; Endendyk, Jan; Dormuth, Ina; Roidl, Moritz; Pauly, Markus
  • Year:
    2024

Citation information

Rutinowski, Jérôme; Franke, Sven; Endendyk, Jan; Dormuth, Ina; Roidl, Moritz; Pauly, Markus: The Self-Perception and Political Biases of ChatGPT. Human Behavior and Emerging Technologies, 2024. https://www.hindawi.com/journals/hbet/2024/7115633/