Most user reports of antisemitism are ignored. We must hold platforms accountable
It’s rare to be able to put a number on corporate tolerance of hate speech. But in a recent experiment, we were able to do so — and the results are chilling.
Over six weeks this summer, we flagged and reported hundreds of egregious antisemitic posts to social media platforms using their own user tools. Five in six of our reports were ignored.
Our organization, the Center for Countering Digital Hate, is best known for our work disrupting anti-vaxxers on social media. So how did we come to flag hundreds of posts containing clear, vicious anti-Jewish hatred to social media platforms earlier this summer?
Racism is rapidly proliferating on social media, and is often promoted by craven political opportunists around the world — and from all sides of the political spectrum. Most recently, again and again, we saw anti-vaccine conspiracists and antisemites converging in digital spaces. Their rhetoric started to fuse, forming hybrid conspiracy theories in which “the Jews” were falsely blamed for COVID-related problems.
Age-old antisemitic canards run through anti-vaxx propaganda, as they do through many fringe racist and anti-science movements. The false notions that Jews caused COVID, that Jews stand to gain by microchips in the vaccine or that Jews are running the government are all widely seen on social media platforms that claim to oppose hate speech.
Social media is where this convergence of toxic ideas most often happens and spreads, despite the community standards that every platform cites as proof of its purported intolerance of hate speech and racism.
We decided to see how well social media platforms were living up to their promises by seeking out and reporting any posts or groups featuring egregious anti-Jewish hatred on YouTube, TikTok, Facebook, Twitter and Instagram.
From mid-May to the end of June, we reported over 700 instances to the platforms that together had been viewed over 7 million times. We reported these blatant contraventions of the platforms’ community standards using not our organizational pull, but the tools that the platforms asked users to utilize. Typical cases included videos falsely claiming that Jews “declared war on Germany,” antisemitic caricatures blaming the Jews for 9/11 or COVID, and myriad images of Jews as puppetmasters.
We reported these posts while interacting with the platforms as “regular users” so that we could understand what it would be like for a normal person to encounter heinous racism on social media and try to take action.
In 84% of the instances, our reports to the platforms were ignored. In one case, Facebook even put a warning note on a Holocaust denial post — a grey box saying “False information” — rather than removing something that, according to their own rules, should never have been allowed in the first place.
Among the platforms, Facebook and Twitter were the worst offenders, ignoring almost 90% of our reports — including those flagging large Facebook groups with 38,000 members. Even though YouTube and TikTok did slightly better, taking action in response to about 20% of reports, it was certainly nothing to celebrate.
It is clear that social media platforms have become easy places to spread racism and propaganda against Jews. The platforms are currently not capable of enforcing their own rules, nor do they act on user reports, even of heinous racism.
There are a few simple, uncontroversial steps that platforms can take to combat hate speech effectively.
First, platforms’ own analytics show that racist content has reached millions of viewers — they need to immediately remove all antisemitic groups and all posts using the antisemitic hashtags that promote this content. Second, they need to employ significant numbers of trained moderators who can remove hate proactively and respond effectively to user flags.
On a legislative level, it is time that laws are enacted to levy fines against those who host hate. Platforms currently profit from the increased engagement that comes from spreading hate and misinformation. Financial incentives will align them with the good of society, not with those who seek its destruction.
Mark Zuckerberg has likened Facebook to a utility company. If a water utility were ignoring poison flowing through its pipes to millions of users — as happened in Flint, Michigan — it would expect to be severely punished and stripped of its license to operate.
This is a serious, long-standing problem that the corporate owners of social media platforms have long denied, deflecting responsibility and delaying action.
Their blatant disregard of their own users should not be without consequences. The cost of their inaction is a continuing rise in hate crimes, in anti-Jewish vitriol online and in attacks on American Jews in the street.
Imran Ahmed is the Chief Executive Officer of the Center for Countering Digital Hate, counterhate.com. To contact the author, email [email protected].