8 February 2022

Giovanni Quer, project manager at the Kantor Center, reflects on the spread of antisemitism via the internet

Source:

https://kantorcenter.tau.ac.il

Author:

Giovanni Quer

Online Hatred

The Need for New Strategies

Antisemitism propagates on the internet because the unrestrained flow of information provides easy access to sites, discussion groups, or even individual posts that spread hatred.

In a world where the boundaries between the digital space and real life are blurred, this form of “meta-hatred” has direct consequences in daily life.

The perpetrator of the December 2019 attack in Monsey, New York, who, wielding a machete, stormed the home of a rabbi hosting a Hanukkah party, had been exposed to antisemitic materials through online sources. His mental health condition has rendered him unfit to stand trial, but the fact remains that the unrestrained circulation of information shaped his views on Jews.

Several months earlier, in October 2019, the perpetrator of the Halle attack in Germany, who planned to target the local synagogue, had likewise immersed himself in online extremism, despite showing no apparent signs of radicalization beforehand.

The perpetrators came from different backgrounds and had different motivations. Yet, in both cases, they decided to attack Jews because of antisemitic convictions formed or reinforced online.

Where extremists once held secret meetings and clandestinely printed and distributed materials, social media platforms now maximize the distribution of hateful content and the ability of groups to recruit new adherents.

The result is that any internet user, including a passive one, may be radicalized and inspired to commit a hostile act.

Moreover, the publicity hate groups generate and receive through numerous media channels makes it easier for them to hijack the narrative of any event.

The reactions to the January 2022 attack on the Colleyville synagogue in Texas are an example of how different hate groups use events to reinforce their narratives.

A recent study published by the ADL (Anti-Defamation League), “Extremists Respond to Colleyville Hostage Crisis with Antisemitism, Islamophobia,” demonstrated that hate groups exploited the attack to promote narratives denying the Holocaust or casting the attack as a Jewish plot.

Another example is the conspiracy theories that became a major component of hate narratives during the COVID-19 pandemic, when classic antisemitic tropes spread online alongside other forms of intolerance and xenophobia.

Social media companies have developed guidelines that define what content is unacceptable on their platforms, in addition to what national legislation characterizes and prohibits as hate speech.

Yet, their guidelines remain largely unapplied, while subcultures of hatred evolve rapidly in a context of loose and uneven compliance. Corporate enforcement, evidently, is not enough. The pervasiveness of social media requires state-guided measures to ensure safety, along with quasi-judicial, transparent procedures for penalizing violations.

To better address the phenomenon of online antisemitism, we also need to understand the actual scope of hate groups and how they operate, emerge, and connect – and the contexts and subtexts of their speech. Research on online hate speech is only in its earliest stages and is already faced with considerable data collection and analysis challenges.

Beyond the cost of advanced digital collection and analysis tools capable of tracking a phenomenon across platforms, researchers must first agree on what constitutes hate speech.

A keyword search is not always enough since the meaning of a word can change according to the context in which it is used; additionally, hate groups use code words for identifying targets. For example, “Hollywood” often means “Jews.”
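
To make this limitation concrete, here is a minimal Python sketch of naive keyword matching; the coded-term list and the sample posts are hypothetical examples invented for illustration, not drawn from any real monitoring system.

```python
# A minimal sketch (illustrative only) of why naive keyword matching falls
# short. The coded-term list and sample posts below are hypothetical.

CODED_TERMS = {"hollywood"}  # code word that, in hate speech, can stand in for "Jews"

posts = [
    "That new Hollywood blockbuster was great.",    # benign use of the word
    "Hollywood controls the banks and the media.",  # antisemitic trope in coded form
]

def naive_keyword_flag(text: str) -> bool:
    """Flag a post if it contains any coded term, ignoring context."""
    words = text.lower().split()
    return any(term in words for term in CODED_TERMS)

for post in posts:
    print(naive_keyword_flag(post), "-", post)

# Both posts are flagged True: the filter cannot tell the film review
# apart from the conspiratorial trope, because it ignores context.
```

Both posts are flagged identically, even though only the second traffics in an antisemitic trope; distinguishing them requires the kind of contextual analysis that the interdisciplinary approaches described below aim to develop.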

Deeper cooperation between researchers and practitioners of the humanities and computer sciences can help develop suitable methodologies that focus on diverse cultural and historical frameworks.

A societal response is also indispensable in the fight against online hate speech, and it is encouraging to see grassroots activism developing in this field. For example, the organization Fight Online Antisemitism (FOA) gathers volunteers from around the world to monitor and report hate speech and incitement.

FOA’s work shows that online antisemitism not only needs to be tracked and removed, but that this must be done quickly. From its work across different platforms, FOA project manager Maya Hadar has observed that the danger posed by hateful content persists even after a specific post is removed or an account is closed.

In an interview with UNESCO in March 2021, Prof. Jonathan Bright of the Oxford Internet Institute explained: “One really big problem with online harms is that we are largely operating in a reactive way – we remove things only after they have become really widespread and potentially already done some damage.”

The process of radicalization is elusive, and any user “out there,” even one showing no apparent signs, can be inspired to commit the next violent act.