Facebook Failed to Detect Death Threats Against Election Workers Ahead of US Midterm Elections

‘We’re going to kill you all’: Facebook failed to detect death threats against election workers ahead of the midterms, while YouTube and TikTok suspended the offending accounts, investigation reveals

  • A new investigation shows that Facebook approved 75% of ads containing death threats against US election workers that were submitted ahead of the midterm elections.
  • Researchers at Global Witness and NYU submitted ads using the language of real death threats; TikTok and YouTube blocked them, but Facebook did not.
  • “It is incredibly disturbing that Facebook has approved ads that threaten election officials with violence, lynching and murder,” said Rosie Sharp of Global Witness.
  • A Facebook spokesperson said the company remains committed to improving its detection systems.

Facebook failed to detect the vast majority of advertisements that explicitly called for violence against, or the killing of, US election workers in the run-up to the midterm elections, a new investigation shows.

The investigation tested Facebook, YouTube and TikTok on their ability to flag ads containing ten real-life examples of death threats against election workers, including statements that people would be killed, hanged or executed and that children would be molested.

TikTok and YouTube suspended the accounts that were set up to run the ads. But Meta-owned Facebook approved nine of the ten death threats submitted in English and six of the ten submitted in Spanish: 15 of the 20 ads, or 75% of the total.

“It is incredibly disturbing that Facebook has approved ads threatening election workers with violence, lynching and murder, amid growing real-life threats against these workers,” said Rosie Sharp, a Global Witness investigator who worked on the research with the Cybersecurity for Democracy (C4D) team at New York University’s Tandon School of Engineering.

The ads were submitted the day before or on the day of the midterm elections.

The death threats were all “terrifyingly clear in their language,” the researchers said, and all of them violate the advertising policies of Meta, TikTok and Google.

In fact, the researchers never ran the ads on Mark Zuckerberg’s social network; each ad was deleted immediately after Facebook approved it, because the team did not want to spread violent content.

Damon McCoy, co-director of C4D, said in a statement: “Facebook’s failure to block ads calling for violence against poll workers puts the safety of those workers at risk. It’s worrying that Facebook is allowing advertisers caught making threats of violence to continue buying ads. Facebook needs to improve its detection methods and ban ads that promote violence.”

The researchers made several recommendations to Meta in their report:

  • Urgently increase the content moderation capabilities and integrity systems deployed to mitigate election risks.
  • Regularly assess, mitigate and publicly report on the risks its services pose to human rights and other societal-level harms in every country in which it operates.
  • Include full details of all ads (including intended target audience, actual audience, ad spend and ad buyer) in its ad library.
  • Publish its pre-election risk assessment for the US.
  • Allow verified independent third-party audits so the company can be held accountable for what it says it does.

“This type of activity threatens the safety of our elections. However, what Facebook says it does to keep its platform safe bears little resemblance to what it actually does. Facebook’s failure to detect hate speech and election disinformation, despite its public commitments, is a global problem, as Global Witness has shown this year through investigations in Brazil, Ethiopia, Kenya, Myanmar and Norway,” Sharp said.

Facebook has long faced criticism for not doing enough to prevent the spread of disinformation and hate speech online, during elections as well as at other times of the year.

When Global Witness reached out to Facebook for comment, a spokesperson said: “This is a small selection of ads that doesn’t match what people see on our platforms. Content that incites violence against poll workers or anyone else has no place on our apps, and recent reports have clearly shown that Meta’s ability to effectively address these issues is superior to that of other platforms. We remain committed to further improving our systems.”
