Hate speech includes inhumane and discriminatory statements that are directed against certain groups of people. These attacks are often aimed at characteristics such as skin color, origin, sexuality, gender, age, disability, or religion. The groups most affected are often women, people with a migration background, and the LGBTQ+ community. The study “Lauter Hass – leiser Rückzug” by the Kompetenznetzwerk gegen Hass im Netz shows that people who share several of these characteristics are more frequently affected by hate speech.
Hate on the Net: More than Just Offensive Comments
The term “online hate” goes beyond hate speech and encompasses a wide range of harmful digital behaviors, such as racist memes, unsolicited explicit images, or doxing (publishing personal information). Other forms of online hate include the distribution of (nude) photos without consent, stalking, harassment, threats of violence, and deepfakes. Artificial Intelligence plays a role in both generating and combating these phenomena: AI-based deepfake technology makes it possible to create fake videos and images that are used for manipulation and defamation.
One example of how manipulated media are used against political figures emerged in 2019, when a video circulated showing Nancy Pelosi, the Speaker of the US House of Representatives, appearing to be intoxicated and speaking incoherently. The footage had been slowed down and edited to distort Pelosi’s voice and facial expressions and portray her negatively. The spread of such videos can undermine public trust in political actors and manipulate political processes.
YouTube eventually removed the video, while Facebook downgraded it, reducing its visibility. On Twitter (now X), however, the video remained accessible, as the platform at the time had no policy for removing manipulated content. This example highlights how dangerous manipulated media, and AI-generated deepfakes in particular, can be in the context of hate speech and disinformation.
Freedom of Expression vs. Hate Speech: Where Do We Draw the Line?
In Germany, Article 5 of the German Basic Law guarantees freedom of expression. However, this right ends where the dignity of another person is violated or their free development is restricted. Some critics fear that combating hate speech might restrict freedom of expression. Artificial Intelligence could help distinguish between legitimate expression of opinion and hate speech. AI algorithms can analyze large amounts of content on social media platforms and flag potentially harmful statements. This could strike a balance between preserving freedom of expression and curbing harmful content.
AI to Detect and Combat Hate Speech
Artificial Intelligence plays a central role in combating hate speech on online platforms. AI systems based on Natural Language Processing (NLP) can automatically recognize offensive or inhumane content and take appropriate action. A particular challenge is recognizing and correctly classifying the subtle nuances between freedom of expression and hate speech.
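To make this concrete, here is a minimal sketch of such an NLP-based detection step in Python. It assumes the publicly available “unitary/toxic-bert” model from the Hugging Face hub; the model choice and the flagging threshold are illustrative assumptions, not any particular platform’s production setup.

```python
# Minimal sketch: scoring comments with a pretrained toxicity model.
# "unitary/toxic-bert" is a publicly available model used here purely
# for illustration; any comparable text classifier would work.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

comments = [
    "Thanks for sharing, this was really helpful!",
    "People like you should not be allowed to speak.",
]

for comment in comments:
    result = classifier(comment)[0]
    # The pipeline returns a label and a confidence score; a platform
    # would flag comments whose score exceeds a tuned threshold.
    flagged = result["score"] > 0.8
    print(f"{comment!r} -> {result['label']} ({result['score']:.2f}), flagged={flagged}")
```

In practice, platforms combine such automated scores with human review, precisely because classifiers still struggle with irony, reclaimed slurs, and context.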
One concrete example of using NLP algorithms to detect hate speech is the “Perspective” tool, developed by Alphabet’s subsidiary Jigsaw in collaboration with Google. “Perspective” uses machine learning and NLP to analyze the “toxicity” of comments in online discussions. It assesses the degree of offensiveness or harmfulness in a text and gives platforms the opportunity to moderate potentially harmful comments.
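Perspective is exposed to developers as a web API. The following sketch shows roughly how a platform could score a single comment; the request and response fields follow the publicly documented commentanalyzer endpoint, while the API key and the comment text are placeholders.

```python
# Illustrative request to the Perspective API; you need your own API key.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "You are an idiot and nobody wants you here."},
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}},
}

response = requests.post(URL, json=payload, timeout=10)
response.raise_for_status()

# The summary score is a value between 0 and 1; each platform decides
# for itself which threshold should trigger moderation.
score = response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"Toxicity score: {score:.2f}")
```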
Bots and AI-Supported Disinformation
AI-controlled bots play a major role in spreading online hate. They can automatically disseminate large volumes of hate messages and influence debates. At the same time, advanced algorithms are being developed to identify and halt these automated accounts. A prominent example is the use of bots on social media during the 2016 US elections, where automated accounts on platforms such as Twitter (now X) and Facebook spread disinformation and hate messages in a targeted manner, pushing fake news, polarizing content, and hate comments at scale. Researchers have found that many of these bots aimed to influence public opinion and deepen social divisions.
To combat such hate bots, companies like Facebook have begun developing AI-powered algorithms to detect bot activity. These algorithms analyze suspicious behavior patterns, such as post frequency, interaction patterns, and account origins.
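A strongly simplified sketch of such behavior-based detection could look like the following; the three features and the tiny hand-crafted training set are purely illustrative assumptions and stand in for the much richer signals real platforms use.

```python
# Strongly simplified behavioral bot scoring; features and training
# data are hypothetical stand-ins for real platform signals.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per account: [posts_per_hour, reply_ratio, account_age_days]
X_train = np.array([
    [0.5, 0.60, 900],   # typical human accounts
    [1.2, 0.40, 350],
    [45.0, 0.02, 3],    # typical bot accounts: high frequency, very young
    [60.0, 0.01, 1],
])
y_train = np.array([0, 0, 1, 1])  # 0 = human, 1 = bot

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score a new account: 80 posts per hour, almost no replies, 2 days old.
suspect = np.array([[80.0, 0.01, 2]])
print("bot probability:", model.predict_proba(suspect)[0][1])
```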
Effects of Hate Speech on Individuals and Society
Hate speech has profound effects on both individuals and society. Individually, consequences can range from stress and social withdrawal to mental health issues. On a societal level, digital violence shifts public discourse. Particularly problematic is the role of bots, which use automated AI algorithms to spread hate en masse, creating a distorted image of public opinion.
The Silencing Effect and the Threat to Democracy
The “silencing effect” describes how victims of hate speech and bystanders increasingly withdraw from public discourse. Artificial Intelligence can help by proactively identifying and removing hate speech before it has a broader societal impact. This is especially important as far-right groups often exploit this dynamic to undermine political and social diversity of opinion.
Studies by the Southern Poverty Law Center (SPLC) show that far-right groups seek to destabilize democratic institutions through targeted online campaigns and disinformation. These groups use social media to mobilize supporters, spread conspiracy theories, and attack minorities and marginalized communities. Similarly, reports from the Center for Strategic and International Studies (CSIS) show that far-right groups around the world use social media as a platform to spread extremist rhetoric and attack political and religious minorities. These groups often act subtly to avoid detection while inciting violence against minorities and other political opponents.
From Words to Actions: Radicalization Through the Internet
Online hate can lead to real-world violence, as seen in the murder of Walter Lübcke and the attacks in Halle and Hanau. The internet plays a central role in such cases, serving as a platform for radicalization.
On June 2, 2019, Kassel District President Walter Lübcke was murdered with a shot to the head on the terrace of his home. The murder was preceded by far-right hate campaigns after the CDU politician had publicly stood up for democratic values and the acceptance of refugees and spoken out against right-wing extremism. The hatred escalated to the point where his private address was published on a far-right extremist blog, urging someone to “take care of it.” The perpetrator was a far-right extremist.
On October 9, 2019, a right-wing extremist in Halle attempted to break into a synagogue while heavily armed. After failing to gain entry, he killed two people. On February 19, 2020, a racist in Hanau shot nine people, seemingly targeting individuals with a migration background. All the perpetrators had radicalized themselves online.
Artificial Intelligence can play an important role in preventing such developments by recognizing dangerous patterns in online discussions and communication streams at an early stage.
A successful example of the preventive use of AI is fraud detection in the financial sector. Banks and financial institutions have been using AI for years to detect suspicious transactions and prevent fraud or money laundering. These technologies have proven highly effective, as they can analyze large amounts of data in real time and take immediate action. In particular, machine learning models used for credit card fraud detection have achieved high accuracy and provide greater security in the financial sector.
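To make the analogy concrete, the following sketch flags suspicious transactions with an unsupervised anomaly detector, one common pattern in this domain; the simulated data and the two features are assumptions for illustration only.

```python
# Anomaly-based fraud detection sketch; the simulated transactions
# and the two features are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated normal card transactions: [amount_eur, hour_of_day]
normal = np.column_stack([
    rng.normal(60, 25, 500).clip(1, None),  # everyday purchase amounts
    rng.integers(8, 22, 500),               # daytime purchases
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Two incoming transactions: one routine, one unusually large at 3 a.m.
incoming = np.array([[45.0, 14], [4800.0, 3]])
print(model.predict(incoming))  # 1 = looks normal, -1 = flagged as anomalous
```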
These successes in the financial sector demonstrate that AI can also play a crucial role in fighting the spread of hate and radicalization online. By identifying threatening behavior patterns early, AI systems can help make digital spaces safer and prevent the spread of hate messages.
How Technology and Initiatives Can Help: Support in the Fight Against Hate Speech
Hate speech does not have to be simply accepted or endured by those affected or bystanders. Various strategies exist for dealing with or responding to hate speech. Each of these counterstrategies has its advantages and disadvantages, and individuals should decide for themselves which approach is most helpful. In addition to options like ignoring, deleting, or reporting hate comments, countering them objectively, or blocking haters, an important strategy is utilizing support services.
A prominent example is HateAid, a German non-profit organization that campaigns against online hate and digital violence. In addition to legal support, HateAid offers education on digital violence and provides tools that help those affected defend themselves against hate comments. Especially in the digital space, where hateful content often remains permanently accessible, access to such support services is crucial.
Technology can also make an important contribution to limiting hate speech. Artificial Intelligence is already being used successfully in social networks to detect harmful content and take preventive measures. Comparable AI-based detection systems have long proven themselves in other areas, such as fraud detection in the financial sector, where they achieve high success rates. These technologies not only help to identify potentially harmful content, but also offer the opportunity to take preventive action before words turn into actions.
The X-Fem project, run by our Lamarr partner organization Fraunhofer IAIS, also helps raise awareness. The project specifically targets female vocational students and aims to strengthen their digital skills. The soon-to-be-released e-learning course will cover topics such as fake news, hate speech, and responsible use of Artificial Intelligence. This helps raise awareness of these issues and provides participants with the tools they need to actively combat digital violence.
Sign up for the X-Fem newsletter now and learn more about the upcoming e-learning on fake news, hate speech, and dealing with AI.