From The New York Times:
[NOTE: I learned something valuable from this piece: “fear speech” is distinct from (but closely related to) “hate speech,” and its emergence in research further complicates such tangled issues as how to preserve free speech in the toxic media culture we seem stuck in. I didn’t find simple or easy solutions here. But being enabled to think more clearly about what we’re facing — to me that’s progress. Check it out.]
By Julia Angwin — May 6, 2023
Ms. Angwin is a contributing Opinion writer and an investigative journalist.
[F]ear is weaponized even more than hate by leaders who seek to spark violence. Hate is often part of the equation, of course, but fear is almost always the key ingredient when people feel they must lash out to defend themselves.
Understanding the distinction between fear-inducing and hateful speech is crucial as we collectively grapple with how to govern global internet platforms.
Most tech platforms do not shut down false fear-inciting claims such as “Antifa is coming to invade your town” and “Your political enemies are pedophiles coming for your children.” But by allowing lies like these to spread, the platforms are allowing the most perilous types of speech to permeate our society.
Susan Benesch, the executive director of the Dangerous Speech Project, said that genocidal leaders often use fear of a looming threat to prod groups into pre-emptive violence. Those who commit the violence do not need to hate the people they are attacking. They just need to be afraid of the consequences of not attacking.
For instance, before the Rwandan genocide in 1994, Hutu politicians told the Hutus that they were about to be exterminated by Tutsis. During the Holocaust, Nazi propagandists declared that Jews were planning to annihilate the German people. Before the Bosnian genocide, Serbs were warned to protect themselves from a fundamentalist Muslim threat that was planning a genocide against them.
“I was stunned at how similar this rhetoric is from case to case,” Ms. Benesch told me in an interview for The Markup. “It’s as if there’s some horrible school that they all attend.” The key feature of dangerous speech, she argued, is that it persuades “people to perceive other members of a group as a terrible threat. That makes violence seem acceptable, necessary or even virtuous.”
Fear speech is much less studied than hate speech. In 2021 a team of researchers at the Indian Institute of Technology and M.I.T. published the first large-scale quantitative study of fear speech.
In an analysis of two million WhatsApp messages in public Indian chat groups, they found that fear speech is remarkably difficult for automated systems to detect because it doesn’t always contain the derogatory words that can characterize hate speech. “We observed that many of them are based on factually inaccurate information meant to mislead the reader,” wrote the paper’s authors . . . . Human judgment is often needed to differentiate between real fears and false fears, but the tech platforms often don’t spend the time or develop the local knowledge needed to research all the fears being expressed.
Three of the authors and one new collaborator, Narla Komal Kalyan, followed up this year with a comparison of fear and hate speech on the right-wing social network Gab.
They found that the “nontoxic and argumentative nature” of fear speech prompts more engagement than hate speech.
So how do we vaccinate ourselves against fear-based speech on social media that may incite violence? The first steps are to identify it and to recognize it as the cynical tactic that it is. . . .
Fighting fear will not be easy. But it is possibly the most important work we can do to prevent online outrage from begetting real-life violence.