Online hate speech explodes after controversial incidents, even against unrelated groups | Technology

What happens on the street spills onto social networks, and a new study reveals a complex relationship between real-life events and online incitement to hate. By analyzing 1,150 extremist communities across six social networks, the researchers identified, for example, a spike in hate posts during the Black Lives Matter protests that followed the killing of African American George Floyd by police. Most notably, it was not only racist posts that soared, but also those targeting gender identity and sexual orientation, topics unrelated to protests that were framed primarily around race. The results show that racism is followed by seemingly unrelated forms of abuse, although it is not clear why.

Yonatan Lupu, a professor at George Washington University and co-author of the study published today in the journal PLOS ONE, explains to EL PAÍS that after an event such as a demonstration or an election, there is a reaction that starts on unmoderated platforms and "quickly starts to appear" on the most popular social networks. "What happens on 4chan [an internet forum] doesn't stay on 4chan. It is very clear that it spreads to Facebook, Twitter and beyond. It's a really serious problem, because the content reaches a much wider audience and potentially helps extremists radicalize people who weren't already extremist," he asserts.

After the US elections in November 2020, there were several waves of extremist posts on social media: homophobic slurs against some politicians increased, and Vice President Kamala Harris was the target of sexist slurs. But after Floyd's death, on May 25, 2020, there was a spike: the rate of racist hate speech rose 250% in early June and, by the end of the year, was still twice as high as it had been before the event. Most surprisingly, according to the authors, other types of hate speech also increased significantly, particularly those related to gender identity and sexual orientation (75%), race and nationalism (60%), and gender (50%).


The role of the media

The media plays a particularly important role because it shapes the perception of these events and thus sets a kind of tone for extremist communities. "Part of the reason we saw such a massive increase in hate speech during the Black Lives Matter protests is because they received a lot of media attention. In any case, moderating that speech is the platforms' responsibility, but since some of them are unmoderated, they have no interest in doing so," Lupu explains.

A team of researchers from George Washington University and Google combined manual and automated methods to analyze 59 million posts published between June 2019 and December 2020 in communities and groups across six social networks: Facebook, Instagram and VKontakte, considered more moderated, albeit to different degrees, and Gab, Telegram and 4chan, with little moderation. The researchers examined nearly 20,000 posts by hand and coded them into seven categories of hate (homophobic, racist, religious, ethnic, sexist, xenophobic and anti-Semitic), based on US case law on hate crimes and hate speech, along with content that supports or promotes fascist or ultra-nationalist ideologies. From this manual analysis, they trained an algorithm to distinguish whether a given piece of content was hateful or not.
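The pipeline the study describes, hand-labeling a sample of posts and then training a classifier to scale the labels up to millions of posts, can be illustrated with a minimal sketch. This is not the model the researchers actually used; it is a toy Naive Bayes text classifier, and the training examples and labels below are invented for illustration only:

```python
import math
from collections import Counter


def tokenize(text):
    """Very crude tokenizer: lowercase and split on whitespace."""
    return text.lower().split()


class NaiveBayes:
    """Minimal multinomial Naive Bayes with Laplace smoothing."""

    def fit(self, docs, labels):
        self.classes = set(labels)
        self.priors = {c: labels.count(c) / len(labels) for c in self.classes}
        self.word_counts = {c: Counter() for c in self.classes}
        for doc, label in zip(docs, labels):
            self.word_counts[label].update(tokenize(doc))
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        self.totals = {c: sum(self.word_counts[c].values()) for c in self.classes}

    def predict(self, doc):
        scores = {}
        for c in self.classes:
            score = math.log(self.priors[c])
            for word in tokenize(doc):
                # Add-one (Laplace) smoothing over the shared vocabulary,
                # so unseen words do not zero out the probability.
                score += math.log(
                    (self.word_counts[c][word] + 1)
                    / (self.totals[c] + len(self.vocab))
                )
            scores[c] = score
        return max(scores, key=scores.get)


# Hypothetical hand-labeled sample (stand-in for the ~20,000 coded posts).
docs = [
    "we hate that group",
    "those people are vermin",
    "great game last night",
    "lovely weather today",
]
labels = ["hate", "hate", "other", "other"]

clf = NaiveBayes()
clf.fit(docs, labels)
```

Once trained on the labeled sample, the classifier can be applied to the full corpus, e.g. `clf.predict("hate those people")` returns `"hate"`. A production system would use a far richer model and feature set, but the division of labor is the same: humans define the categories on a sample, the algorithm extends them at scale.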

Moderation on the most popular platforms is a double-edged sword. It is necessary and ought to exist, but if offenders are expelled through stricter content moderation, they have other places to go: they will find spaces where they can say anything without consequence, which can intensify extremism and harassment.

Opacity

The study did not track individual users, so there is no "direct evidence" of how this migration from more moderated social networks, such as Facebook, to less restrictive ones, such as Telegram, might occur. However, previous research by Lupu's team revealed some strategies for attracting an audience while avoiding moderation. "While extremists are still on traditional platforms, they create a kind of mirror community on Telegram or Gab and post: 'If they kick us out of here, we'll move there,'" says the political science professor.


Silvia Majó-Vázquez, a postdoctoral researcher at the Reuters Institute for the Study of Journalism at the University of Oxford, agrees that the scientific community lacks concrete data on how unmoderated platforms shape public opinion, and that more attention should be paid to them. "We know that users are moving from open to closed platforms to mobilize human capital and coordinate protest and counter-protest activities. But we don't know what is going on there. There are some clues, but we can't quantify it because we don't have the data or access to it," the expert explains.

Although moderated platforms are required to remove certain content within set time limits, Majó-Vázquez acknowledges that identifying hate speech through automatic mechanisms is difficult. "Toxic language contains many elements of irony and sarcasm that go beyond literal language. This makes it difficult for platforms to detect and, therefore, to act on. That does not mean they have no homework to do."

Should governments be more involved in regulating content? It is not an easy question to answer, says Yonatan Lupu. While hate speech is deeply troubling, heavy government involvement in censoring content could be just as dangerous. "Unfortunately, I'm not sure what the right balance is," the professor concludes.
