
Have social media giants been censoring posts about Israel and Gaza?

Almost as fast as social media posts about Israel and Gaza began multiplying, so too did complaints of censorship from both sides. Posts were flagged as hate speech and taken down, and influencers insisted they had been shadow-banned — a practice in which a user's posts remain up but the algorithm stops showing them to other users — pointing to lower-than-usual views on posts about Israel, East Jerusalem or Gaza. Allegations of censorship have come from both sides of the political spectrum, but the issue appears to be more systemic and better documented among those posting pro-Palestinian content.

This is not just a conspiracy theory.

Last week, Instagram posted an apology on Twitter explaining that a glitch had caused Instagram stories not to post and archived stories to disappear, prompting reports of silencing from those advocating around events in both East Jerusalem and Colombia, where anti-government protests have led to bloody clashes with the police.

“This is a widespread global technical issue not related to any particular topic,” Instagram’s communications team said in a statement on Twitter. Another, longer statement posted the next day specifically named East Jerusalem in its apology, reasserting that Instagram had no intention of suppressing voices reporting from there.

Meanwhile, Instagram was also automatically hiding or removing posts tagged with al-Aqsa, in both English and Arabic. The tag refers to the Aqsa mosque compound, Islam’s third-holiest site, in the Old City of Jerusalem, which is known to Jews as the Temple Mount, and was the site of intense conflict between the Israeli police and Muslim worshippers at the onset of the current escalation.

Instagram had flagged “alaqsa” as associated with “violence or a terrorist association.” The tag was being used during the end of Ramadan as violence erupted between Israeli police and Palestinians at the holy site, and many trying to draw attention to the violence found their posts blocked right as Israeli police stormed the grounds with rubber bullets and stun grenades, injuring 220 Palestinians.


The tagging issue has since been resolved after employees flagged it internally, and Facebook has apologized; an internal post obtained by BuzzFeed News said the posts were flagged because al-Aqsa "is also the name of an organization sanctioned by the United States government." Both issues disproportionately affected Palestinian users, blocking tens of thousands of posts.
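The failure mode described here — a place name colliding with the name of a sanctioned organization — is a classic over-blocking problem with keyword blocklists. As a rough illustration only (this is not Facebook's actual system; the blocklist entry and the normalization rule are assumptions), a naive match against normalized entity names will flag the mosque's hashtag along with the sanctioned group:

```python
# Hypothetical blocklist entry, normalized the same way hashtags are.
SANCTIONED_ENTITY_KEYWORDS = {"alaqsa"}

def normalize(tag):
    """Lowercase and strip non-alphanumeric characters, so 'Al-Aqsa',
    'alaqsa' and 'AlAqsa' all collapse to the same key: 'alaqsa'."""
    return "".join(ch for ch in tag.lower() if ch.isalnum())

def is_flagged(hashtag):
    """Flag any hashtag whose normalized form matches a blocklist entry."""
    return normalize(hashtag) in SANCTIONED_ENTITY_KEYWORDS

print(is_flagged("AlAqsa"))       # True -- a false positive for the holy site
print(is_flagged("Al-Aqsa"))      # True
print(is_flagged("TempleMount"))  # False
```

The sketch shows why the error scaled so widely: once the normalized keyword is on the list, every variant spelling of the place name is blocked, with no way for the matcher to distinguish the mosque from the organization.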

Israeli voices have also complained of censorship, though to date there are no reports of a systemic rule with as broad an impact as the al-Aqsa issue. The writer Hen Mazzig had an infographic defending Israel removed, though it was later reinstated; Mazzig’s post had been in response to a viral anti-Israel infographic that was not censored.

Another account, @the.israel.files, has also posted complaints about censorship and removed posts, and several lifestyle influencers who posted pro-Israel content saw a drop-off in views, leading them to suspect they had been shadow-banned for posting pro-Israel content.

This week, the Israel Defense Forces complained that one of its tweets warning of a rocket alert had failed to post. But as its own screenshot suggested, the tweet was likely blocked because Twitter does not allow an account to publish identical posts within a short period of time.
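That kind of duplicate-post restriction can be sketched as a per-account cache of recently posted text. The window length and exact matching rule below are assumptions for illustration, not Twitter's documented behavior:

```python
import time

class DuplicateFilter:
    """Reject a post if the same account submitted identical text recently."""

    def __init__(self, window_seconds=3600):
        self.window = window_seconds
        self.recent = {}  # (account, text) -> timestamp of last accepted post

    def allow(self, account, text, now=None):
        now = time.time() if now is None else now
        key = (account, text)
        last = self.recent.get(key)
        if last is not None and now - last < self.window:
            return False  # identical post inside the window: blocked
        self.recent[key] = now
        return True

f = DuplicateFilter(window_seconds=3600)
print(f.allow("idf", "Rocket alert: sirens in the south", now=0))     # True
print(f.allow("idf", "Rocket alert: sirens in the south", now=60))    # False (duplicate)
print(f.allow("idf", "Rocket alert: sirens in the south", now=7200))  # True (window passed)
```

Note that the rule is content-blind: it blocks a repeated rocket alert exactly as it would block repeated spam, which is why an automated warning account can trip it.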

Users across other social media platforms, including Twitter and TikTok, voiced similar complaints. Rest of World, a global nonprofit news outlet, reported that Venmo was flagging and delaying payments listed to “Palestinian emergency relief fund,” but payments listed with similar pro-Palestinian phrases such as “Free Palestine” or “Palestinian Fund” were processed without delay.

A Venmo spokesperson said the issues were “OFAC related,” referring to regulations from the U.S. Department of the Treasury’s Office of Foreign Assets Control, which has a list of groups and organizations under U.S. sanctions, including any groups suspected of being controlled by Hamas.

Posts rarely fall into simple categories

Moderation during rapidly unfolding events is a nightmare for tech platforms, which find themselves the arbiters of complex questions about what counts as misinformation, hate speech or incitement during a situation in which the truth is often unclear and events are quickly changing.

News outlets across the world reported that Israeli troops had invaded Gaza late Thursday, for example, based on an inaccurate statement an IDF spokesman made to international journalists. Some analysts believe the mistake, which took more than two hours for the IDF to correct, was intentional, part of a ploy to lure Hamas fighters into underground tunnels that Israel was targeting with airstrikes and artillery.

Posts showing violence are restricted unless they are deemed educational or to raise awareness about a world event. Yet Palestinians report that their posts have been taken down for being too violent, and they have also complained that Western standards are applied to regions and language norms where they do not make sense.


Also tricky is the question of what counts as hate speech.

Whether anti-Zionism is equivalent to antisemitism has been hotly debated throughout the Jewish community. Those who believe anti-Zionism is inherently antisemitic have demanded that anti-Zionist posts be removed as hate speech, while those who believe criticism of the state of Israel is not inherently antisemitic accuse platforms of discrimination when such posts are removed. The fact that anti-Zionism is sometimes, though not always, paired with overt antisemitism (such as using the terms Zionists and Jews interchangeably) does not help clarify the situation.

Facebook’s definition of hate speech states: “We define hate speech as a direct attack against people on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease. We define attacks as violent or dehumanising speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation.”

The definition also includes “some protections for characteristics such as occupation, when they’re referenced along with a protected characteristic” such as ethnicity or religion.


While the definition goes into detailed examples, it is nearly impossible to identify and list every potential form of hate.

In a global event in which many protected characteristics, such as ethnicity, national origin and religion, are all the subject of discourse, it is difficult to fairly moderate conversation from users, many of whom are deeply upset and prone to vitriolic statements.

Also forbidden are statements voicing a desire to segregate or exclude a group, which crop up in discussions of the situation in Israel and the West Bank, where opinions on where borders should be drawn often imply limiting the movement of Palestinians or Israeli Jews.

In all of these cases, the line between controversial opinion and misinformation or hate speech is hard to determine. In a situation as loaded as that in Israel, Gaza and Jerusalem, many feel the other side’s opinion is objectively misleading or hateful, flagging posts and accounts they disagree with — an issue the Forward’s own comments section struggles with.

How platforms adapt and enforce

Most platforms, including Facebook, Instagram and TikTok, increasingly use technology and algorithms to moderate hate speech and incitement.

Facebook, which shares a moderation team with Instagram, updated its technology to help identify "new forms of inflammatory speech," according to a report from May 2020. The company told the Forward that improving and expanding the use of algorithms in the moderation process helps ensure that reviewers spend more of their time on truly borderline cases.
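The division of labor the company describes — algorithms handling clear-cut cases so humans see only borderline ones — is commonly implemented as confidence-threshold routing on a classifier's score. A minimal sketch, with thresholds and labels that are assumptions rather than Facebook's actual values:

```python
def route(score, remove_threshold=0.95, review_threshold=0.6):
    """Route a post based on a classifier's hate-speech confidence score.

    High-confidence violations are removed automatically, mid-range
    scores are queued for a human reviewer, and low scores are left up.
    """
    if score >= remove_threshold:
        return "auto_remove"
    if score >= review_threshold:
        return "human_review"
    return "leave_up"

print(route(0.99))  # auto_remove
print(route(0.75))  # human_review
print(route(0.10))  # leave_up
```

Under this design, tuning the thresholds trades over-blocking against reviewer workload: lowering the auto-remove threshold catches more violations automatically but removes more legitimate posts, which is one way broad takedowns of contested content can happen without any human decision.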

Facebook also partners with local experts and organizations to help contextualize issues, and told the Forward that the company has over 35,000 people working on safety and security, including 15,000 content reviewers.

The company said that the team consists of native language speakers who “understand local cultural context and nuances”; they also said their policies are “extremely prescriptive” to help ensure objectivity.


When breaking news events change a situation, content is often “escalated” to a Risk and Response team that is better qualified to make tough calls, according to a report from Vice. In this case, Facebook said it has established a “Special Operations Center,” staffed by experts from across the company, including native Arabic and Hebrew speakers.

Yet any reliance on algorithms to flag and remove posts means human nuance can get lost. Even human moderators reviewing individual posts often need to be deeply embedded in a particular community’s language and discourse to have a hope of effectively understanding the weight of different terms or accusations.

These questions are relevant for any outlet platforming or taking part in public discussion of world events, including news organizations. But while such outlets have journalists focused on the details of a breaking story, social media content moderators often work for third-party firms, sitting in call centers and trying to follow bullet-point guidelines issued to them. Experts may write the guidelines, but it is an army of individuals adjudicating individual cases. They see revenge porn and animal cruelty alongside posts about Israel or Gaza, and they often "lack cultural or political context" to apply during the 30 seconds they spend on each post, according to an investigation by The Verge.

Given the volume of posts on social media each day, it is hard to imagine a better system for moderation. But it’s just as clear that this one is flawed.
