29.05.2021

Best way to deal with online hate is ‘be polite’, says new research

Credit: Fotolia

Warsaw researchers have developed an ‘epidemic model of hate speech’ to help combat online hate and aggression.

In addition to helping to explain how people become ‘infected’ with online hate, the artificial intelligence model also shows how best to respond to those who spread it.

Professor Michał Bilewicz, from the university’s Center for Research on Prejudice at the Faculty of Psychology, said: “The model shows that contact with hate speech changes people in three ways. Firstly, their emotions change. Instead of empathy towards strangers, contempt begins to dominate. We lose the ability to empathize. 

“Secondly, behaviour changes: over time, we start using this type of speech ourselves. We lose the sense that it contains any aggression. Thirdly, our beliefs about social norms change. Since we see so much hate in our environment, we start to treat this way of addressing others as the norm.

“As a result, hate speech begins to dominate our interactions and more people ‘become infected’ with it.”
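The article does not describe the model's internal structure, but the epidemic framing can be illustrated with a toy contagion simulation. The sketch below is purely hypothetical: it assumes SIR-style ‘susceptible/infected’ states, random contacts and invented parameters, and is not the Warsaw team's actual model.

```python
# Toy contagion simulation illustrating the epidemic framing described above.
# Everything here (SIR-style states, random contacts, parameter values) is a
# hypothetical assumption for illustration, not the researchers' actual model.
import random

random.seed(0)

N = 200            # number of users in the simulated community (hypothetical)
CONTACTS = 8       # posts each user reads per step (hypothetical)
EXPOSURE_P = 0.05  # chance that one hateful post "infects" a reader (hypothetical)
STEPS = 30

# States: "S" = susceptible, "H" = already uses hate speech ("infected")
states = ["H" if i < 5 else "S" for i in range(N)]  # seed with 5 haters

for step in range(STEPS):
    new_states = states[:]
    for i, state in enumerate(states):
        if state != "S":
            continue
        # Each susceptible user reads a random sample of other users' posts.
        contacts = random.sample(range(N), CONTACTS)
        hateful = sum(1 for c in contacts if states[c] == "H")
        # Repeated exposure erodes empathy and shifts perceived norms, so the
        # chance of "infection" grows with the number of hateful posts read.
        if random.random() < 1 - (1 - EXPOSURE_P) ** hateful:
            new_states[i] = "H"
    states = new_states
    print(f"step {step:2d}: {states.count('H'):3d} users using hate speech")
```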

He continued: “We began cooperating with non-governmental organizations that work with refugees, such as the Chlebem i Solą initiative and the Ocalenie Foundation. Together we created online workshops and campaigns that try to restore people's ability to feel empathy, which should halt the hate speech epidemic.”

Bilewicz and his team also began working with Samurai Labs, a tech start-up specialising in mechanisms that reduce aggression and problematic behaviour in computer games, where players communicating with one another often hurt each other with words. The researchers decided to use Samurai Labs' expertise to develop automated technologies for early response to online aggression.

Bilewicz said: “Together, we developed a psychological model for influencing haters. We tested it on Reddit, a discussion and content-rating website where people talk about topics that interest them. A new user appeared in a subreddit known for sexist comments and hate speech against women.

“That user was, in fact, a bot, an account based on artificial intelligence mechanisms. As soon as the bot ‘spotted’ a hater, it would communicate its disapproval of the hateful statement in a very polite and empathetic manner.”

According to Bilewicz, the bot used several possible influencing methods. Some referred to social norms, pointing to the proper way of communicating on social media. Others appealed to empathy, trying to make authors aware of what the victims of hate speech might feel.
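As a rough illustration of that intervention logic, the hypothetical sketch below flags a hateful comment and picks either a norm-based or an empathy-based polite reply. The keyword check stands in for the AI-based detection mentioned in the article, and the reply texts are invented; neither reflects Samurai Labs' actual system or the wording used in the study.

```python
# Hypothetical sketch: flag a hateful comment, then reply politely with either
# a norm-based or an empathy-based message. Placeholder detection and replies
# only; not Samurai Labs' system or the study's actual messages.
import random
from typing import Optional

HATEFUL_MARKERS = {"slur1", "slur2"}  # placeholder terms, not a real classifier

NORM_REPLIES = [
    "Most people in this community try to discuss things respectfully; could you rephrase that?",
]
EMPATHY_REPLIES = [
    "Please remember a real person will read this; words like these can genuinely hurt.",
]

def is_hateful(comment: str) -> bool:
    """Toy detector: flags comments containing any placeholder marker."""
    return bool(set(comment.lower().split()) & HATEFUL_MARKERS)

def intervention_reply(comment: str) -> Optional[str]:
    """Return a polite expression of disapproval if the comment is hateful."""
    if not is_hateful(comment):
        return None
    # Choose between the two influencing strategies mentioned in the article:
    # pointing to social norms or appealing to empathy.
    return random.choice(NORM_REPLIES + EMPATHY_REPLIES)

print(intervention_reply("what a slur1 take"))  # example usage with a placeholder term
```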

The researchers checked haters' activity a month before contact with the bot and a month after their contact with it. They also compared haters' accounts to similar accounts that had no interaction with the bot. They noticed that regardless of whether the bot communicated social norms or appealed to empathy, its influence effectively reduced online aggression.
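One way to read that evaluation design is as a before/after comparison against a matched control group, i.e. a simple difference-in-differences. The sketch below illustrates the arithmetic; the numbers are synthetic placeholders, not the study's data.

```python
# Sketch of the before/after comparison with a control group, read here as a
# simple difference-in-differences. The example numbers are synthetic
# placeholders, not the study's results.
from statistics import mean

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Change in the treated group minus change in the control group."""
    treated_change = mean(treated_after) - mean(treated_before)
    control_change = mean(control_after) - mean(control_before)
    return treated_change - control_change

# Synthetic example: hateful comments per account in the month before/after.
effect = diff_in_diff(
    treated_before=[12, 9, 15, 7],   # accounts the bot replied to
    treated_after=[6, 5, 9, 4],
    control_before=[11, 10, 14, 8],  # similar accounts with no bot contact
    control_after=[10, 11, 13, 9],
)
print(f"estimated change attributable to the bot: {effect:+.2f} comments/month")
```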

The experiment shows that people who use hate speech rarely come across someone who expresses disapproval of them in a polite and empathetic way. 

Bilewicz said: “When we encounter a hater, we usually ignore or confront them, using aggressive language ourselves. It turns out, however, that a calm expression of disapproval can make the author of hateful comments reflect. The research results suggest that when encountering online aggression, it is best to intervene politely.”

The publication describing this research appeared in the journal Aggressive Behavior.

PAP - Science in Poland

ekr/ zan/ kap/

tr. RL
