Mark Zuckerberg Allowed Death Threats To Election Workers On His Platform

Social media platform Facebook claims that it doesn’t permit content that includes threats of violence. But when a team of researchers submitted fake Facebook ads with overt threats to “lynch,” “murder,” and “execute” election workers ahead of the midterm elections, the social media giant’s moderation system approved many of them.

According to a team of researchers from Global Witness and the New York University Tandon School of Engineering’s Cybersecurity for Democracy, Facebook failed to block 75 percent of the ads.

The Global Witness team conducted an investigation to see if Facebook’s automated moderation system would reject 10 test ads that included “real-life examples of death threats issued against election workers,” including statements that they would be killed, lynched, or executed and their children molested.

In its report, Global Witness explained that the researchers submitted the test ads in both English and Spanish. Rather than reject them outright, Facebook’s automated moderation system approved nine of the ten English-language ads and six of the ten submitted in Spanish.

After Facebook approved the ads for publishing, the team removed the ads before they could appear on the platform “in order to avoid spreading hateful and violent speech,” the report said.

A spokesperson for the social media giant’s parent company Meta told Global Witness that the small sample of ads was “not representative of what people see on our platforms” and any content that incites violence against anyone, including election workers, “has no place” on Meta’s apps.

The spokesperson claimed that the company’s ability to “effectively” deal with content inciting violence “exceeds that of other platforms.”

However, when the researchers ran the same tests on TikTok and YouTube, both platforms rejected all ten test ads.