
Twitter Gives Online Hate Speech Its Biggest Platform — Why?


In recent months, anti-Semitic attacks and trolling have risen to seemingly unprecedented levels on social media sites.

The problem has gotten so bad that the Anti-Defamation League created a task force to combat online hate speech. (Its first report will be released this Wednesday.)

Of course, the biggest problem is the people posting all those hateful comments.

But there is also the company providing the platform that trolls love the most: Twitter.

To be fair, fighting online trolls is a Sisyphean task. But experts say Twitter is the worst offender among the big tech companies when it comes to policing hate speech. Facebook and Google, they say, are doing a much better job.

For example, when New York Times editor Jonathan Weisman quit Twitter this summer, it was not because of the weeklong onslaught of anti-Semitic comments tweeted at him. It was because Twitter refused to do anything about it.

After Weisman publicly announced that he was leaving the social media site despite his 35,000 followers, Twitter started to remove some of the hateful tweets. But it let others with similar content remain.

“I am awaiting some sign from Twitter that it cares whether its platform is becoming a cesspit of hate. Until then, sayonara,” Weisman wrote in June.


Andre Oboler, a scholar who studies social media, says Twitter’s lack of action reflects its corporate culture.

“Twitter’s problem is that they haven’t yet acknowledged the harm involved in hate speech, and as a result they don’t strongly believe in seeing hate speech removed,” Oboler told the Forward.

Oboler is the author of the report “Measuring the Hate – The State of Antisemitism in Social Media,” which the Australia-based Online Hate Prevention Institute published in February.

The study tracked over 2,000 anti-Semitic posts on Twitter, Facebook and YouTube over a period of 10 months. During that time, only 20 percent were removed by the social media sites.

Facebook had the best response rate, removing about 75 percent of posts promoting violence against Jews. In comparison, Twitter removed only a quarter of anti-Semitic tweets.

In dealing with hate speech, Twitter is at the level Facebook was at six years ago, Oboler said.

(Even worse than Twitter was YouTube, which removed only about a tenth of anti-Semitic content, the study found. The video platform is not typically used to harass specific individuals, though.)

A graphic in the report shows how many of the reported anti-Semitic postings remained online.

“What Twitter lacks is real corporate social responsibility,” Oboler told the Forward.

“Twitter has neither the will nor appropriate systems to handle the problem that is running rampant on their platform,” he added.

His sentiment is mirrored by the British Parliament, whose report on “Antisemitism in the UK,” released this week, rips into Twitter. “In the context of global revenue of $2.2 billion, it is deplorable that Twitter continues to act as an inert host for vast swathes of antisemitic hate speech and abuse,” the report states.

The report goes on to say that the company has the necessary resources to stop the problem but isn’t using them. It demands that Twitter proactively “identify hateful and abusive users” and assign more staff to its security teams.

“It is disgraceful that any individual should have to tolerate such appalling levels of antisemitic abuse in order to use Twitter — a social media platform now regarded as a requirement for any public figure,” the authors of the report write.

When asked for comment, a Twitter spokesperson told the Forward that “hateful conduct has no place on Twitter and we will continue to tackle this issue head on.”

“People must feel safe in order to speak freely and there is a clear distinction between freedom of expression and conduct that incites violence and hate,” the spokesperson added.

In its ongoing fight against hate speech, the company tries to “leverage the platform’s incredible capabilities to empower positive voices, to challenge prejudice and to tackle the deeper root causes of intolerance.”

The company spokesperson also pointed to Twitter’s rules that forbid harassment and hateful conduct “on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease.” The abusive behavior policy also doesn’t allow “accounts whose primary purpose is inciting harm towards others on the basis of these categories.”

Despite these good intentions, Twitter is not doing a good job of enforcing its own rules.

There are many examples of this, but let me illustrate it with a tweet I received a couple of months ago, when I first started working for the Forward.


The screen name and picture used by this Twitter troll are those of Kurt Meyer, a Nazi and Waffen-SS member who was sentenced to death for war crimes after the end of World War II.

The three parentheses around my name are called an echo symbol, a recent “alt-right” trend used to identify Jewish people on Twitter. The Anti-Defamation League describes it as similar to the Jewish star and has added it to its online database of hate symbols.

The echo symbol is hard for Twitter to monitor because most search engines ignore punctuation marks, so the company has to rely on individuals reporting it.
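To see why, consider this minimal sketch in Python. It is hypothetical illustration code, not Twitter’s actual search pipeline: a naive keyword search strips punctuation before matching, so the echo disappears entirely, while a punctuation-aware pattern catches it.

```python
import re

# A naive keyword search discards punctuation before matching,
# so the echo wrapped around a name vanishes from the results.
def naive_tokens(text):
    return re.findall(r"\w+", text)

# A punctuation-aware pattern can flag the echo directly: three or
# more opening parens, a name, three or more closing parens.
ECHO = re.compile(r"\({3,}\s*([^()]+?)\s*\){3,}")

tweet = "Nice article, (((Lilly Maier)))"
print(naive_tokens(tweet))  # ['Nice', 'article', 'Lilly', 'Maier'] - echo lost
print(ECHO.findall(tweet))  # ['Lilly Maier'] - echo detected
```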

When I reported the tweet to Twitter, though, I got back an email saying that the company had “determined the account is not in violation of the Twitter Rules on abusive behavior.”

I wrote back, explaining the parentheses and the historical meaning of Kurt Meyer’s name, but the answer was the same.

As of now, the account is still online and continues to harass Jewish journalists.


“As a general matter, social media sites like Twitter have not been very good at all about upholding their own anti-hate policies,” Mark Potok, a senior fellow at the Southern Poverty Law Center, an organization that monitors right-wing extremism, told the Forward.

Over the years, Twitter itself has acknowledged its problem with policing hate speech.

“We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years,” then-CEO Dick Costolo said in an internal memo obtained by The Verge two years ago.

“We’re going to start kicking these people off right and left and making sure that when they issue their ridiculous attacks, nobody hears them,” he added.

But in the years since, it hasn’t become a priority.

Under current Twitter CEO Jack Dorsey, “the company has been moving in the right direction, but on this stuff they have done virtually nothing,” Rabbi Abraham Cooper, who supervises the Digital Terrorism and Hate Project at the Simon Wiesenthal Center in Los Angeles, told Haaretz. “Maybe this will finally wake them up.”

Twitter is doing a much better job when it comes to combating terrorism. Between February and August of 2016, the company suspended 235,000 accounts for promoting terrorism.

“Twitter is making significant efforts to remove content promoting terrorism, which is unlawful in the US, but is making far less effort when it comes to content promoting hate speech,” Oboler said in a statement.

On the other hand, “social media companies like Facebook are making significant efforts to increase the identification and removal of online hate content.”

A big difference between Twitter and Facebook is that users on Twitter are often anonymous, while Facebook tries to enforce a real-name policy.

The company argues that people are less likely to bully and threaten strangers online when they have to use their real names and their accounts are connected to family and friends.

“There is no place for hate speech on Facebook,” a company spokesperson told the Forward. “If someone reports hate speech on Facebook, we will review the report and immediately remove the content if it violates our Community Standards.”

Facebook is also expanding its “Online Civil Courage Initiative,” which gives advertising credits to a wide range of groups trying to counteract extremist messaging.

Google, meanwhile, is using artificial intelligence in its fight against online harassment. Its subsidiary Jigsaw has created a tool called Conversation AI that is designed to spot – and also moderate – hate speech.

According to the engineers, the machine is much faster and more effective than humans could ever be. As a pilot program, the New York Times has started using the AI to moderate its comment section.
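To make the general idea concrete, here is a minimal sketch of machine-learning comment moderation in Python with scikit-learn. It illustrates a common bag-of-words approach, not Jigsaw’s actual Conversation AI, and the training examples are invented.

```python
# A minimal sketch of ML-based comment moderation. NOT Conversation AI;
# just the generic approach: learn from labeled examples, score new text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: (comment, 1 = abusive, 0 = acceptable)
comments = [
    "you people should all disappear",
    "great reporting, thank you",
    "nobody wants you here, get out",
    "interesting point about the report",
]
labels = [1, 0, 1, 0]

# Turn text into word-frequency features and fit a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(comments, labels)

# Score a new comment; a human moderator might review anything above a
# chosen probability threshold rather than auto-removing it.
prob = model.predict_proba(["get out of our country"])[0][1]
print(f"abuse probability: {prob:.2f}")
```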


Not everybody believes that policing hate speech per se is a good thing. “Free speech isn’t just in the Constitution,” free-speech champion Lee Rowland told the Forward. “It’s also part of who we are.”

Rowland is a senior attorney with the American Civil Liberties Union (ACLU).

The ACLU is “concerned about the rules of large social media sites” and how these tech companies are censoring speech, Rowland said. “We are worried that censorship could impoverish conversation.”

Instead of forbidding certain speech, companies should push to empower users to choose what they themselves see and don’t see, Rowland told the Forward.

An exception is when people are targeted individually. “We recognize that when someone is subject to harassment or threat, that can silence that person,” she said. “Threats and harassment should be banned.”

The situation is different in Europe, where – unlike the United States – many countries have laws against hate speech.

This May, the European Union released an online “code of conduct” in conjunction with Facebook, Twitter, YouTube and Microsoft.

The code is aimed at fighting hate speech, racism and xenophobia across Europe and requires the companies to review the “majority of valid notifications for removal of illegal hate speech” in less than 24 hours.

But the tech companies are not bound to these rules here at home, which Rabbi Cooper of the Wiesenthal Center criticized harshly.

“Hate is hate, and if something offensive is removed because it’s posted from Germany, then they should voluntarily remove content if it’s being posted from the States,” he told Haaretz.

Facing stalled growth and shrinking ad revenue, coupled with swarms of users leaving Twitter over online harassment, the company created the “Twitter Trust & Safety Council” last February.

“To ensure people can continue to express themselves freely and safely on Twitter, we must provide more tools and policies,” Patricia Cartes, head of Twitter’s global policy outreach, wrote in a blog post announcing the council.

Organizations like the ADL and the Southern Poverty Law Center (SPLC) are part of that council advising Twitter, and the ADL’s anti-hate-speech task force is also releasing its own report on Wednesday.

While many see the council as a step in the right direction, eight months later not much has changed.

“I think they have to speed this up,” Heidi Beirich of the SPLC told ThinkProgress. “At the end of the day, they’re allowing this to happen on their platform while their position is against this. They could be dealing with it.”

Lilly Maier is a news intern at the Forward. Reach her at [email protected] or on Twitter at @lillymmaier
