A little over a year after a French court forced Twitter to remove some anti-Semitic content, experts say the ruling has had a ripple effect, leading other Internet companies to act more aggressively against hate speech in an effort to avoid lawsuits.
The 2013 ruling by the Paris Court of Appeals decided a lawsuit brought the year before by the Union of Jewish Students of France over the hashtag #UnBonJuif, which means “a good Jew” and which was used to index thousands of anti-Semitic comments that violated France’s law against hate speech.
Since then, YouTube has permanently banned videos posted by Dieudonne, a French comedian with 10 convictions for inciting racial hatred against Jews. And in February, Facebook removed the page of French Holocaust denier Alain Soral for “repeatedly posting things that don’t comply with the Facebook terms,” according to the company. Soral’s page had drawn many complaints in previous years but was only taken down this year.
“Big companies don’t want to be sued,” said Konstantinos Komaitis, a former academic and current policy adviser at the Internet Society, an international organization that encourages governments to ensure access and sustainable use of the Internet. “So after the ruling in France, we are seeing an inclination by Internet service providers like Google, YouTube, Facebook to try and adjust their terms of service — their own internal jurisprudence — to make sure they comply with national laws.”
The change comes amid a string of heavy sentences handed down by European courts against individuals who used online platforms to incite racism or violence.
On Monday, a British court sentenced one such offender to four weeks in jail for tweeting “Hitler was right” to a Jewish lawmaker. Last week, a court in Geneva sentenced a man to five months in jail for posting texts that deny the Holocaust. And in April, a French court sentenced two men to five months in jail for posting an anti-Semitic video.
“The stiffer sentences owe partly to a realization by judges of the dangers posed by online hatred, also in light of cyber-jihadism and how it affected people like Mohammed Merah,” said Christophe Goossens, the legal adviser of the Belgian League against Anti-Semitism, referring to the killer of four Jews at a Jewish school in Toulouse in 2012.
In the Twitter case, the company argued that as an American firm it was protected by the First Amendment. But the court rejected the argument and forced Twitter to remove some of the comments and identify some of the authors. It also required the company to set up a system for flagging and ultimately removing comments that violate hate speech laws.
Twitter responded by overhauling its terms of service to facilitate adherence to European law, Twitter’s head of global safety outreach and public policy, Patricia Cartes Andres, revealed Monday at a conference in Brussels organized by the International Network Against Cyber Hate, or INACH.
“The rules have been changed in a way that allows us to take down more content when groups are being targeted,” Cartes Andres told JTA. Before the lawsuit, she added, “if you didn’t target any one person, you could have gotten away with it.”
The change went into effect five months ago, but Twitter “wanted to be very quiet about it because there will be other communities, like the freedom of speech community, that will be quite upset about it because they would view it as censorship,” Cartes Andres said.
Suzette Bronkhorst, the secretary of INACH, said Twitter’s adjusted policies are part of a “change in attitude” by online service providers since 2013.
“Before the trial, Twitter gave Europe the middle finger,” Bronkhorst said. “But they realized that if they want to work in Europe, they need to keep European laws, and others are coming to the same realization.”
According to Komaitis, the Twitter case was built on a landmark court ruling in 2000 that forced the search engine Yahoo! to ban the sale of Nazi memorabilia. But the 2013 ruling “went much further,” he said, “demonstrating the increasing pressure on providers to adhere to national laws, unmask offenders and set up flagging mechanisms.”
Still, the INACH conference showed that big gaps remain between the practices sought by European anti-racism activists and those now being implemented by the tech companies.
One area of contention is Holocaust denial, which is illegal in many European countries but which several American companies, reflecting the broader free speech protections prevalent in the United States, are refusing to censor.
Delphine Reyre, Facebook’s director of policy, said at the conference that the company believes users should be allowed to debate the subject.
“Counter speech is a powerful tool that we lose with censorship,” she said.
Cartes Andres cited the example of the hashtag #PutosJudios, Spanish for “Jewish whores,” which in May drew thousands of comments after a Spanish basketball team lost to its Israeli rival. More than 90 percent of the comments were “positive statements that attacked those who used the offensive term,” she said.
Some of the comments are the subject of an ongoing police investigation in Spain launched after a complaint filed by 11 Jewish groups.
But Mark Gardner of Britain’s Community Security Trust wasn’t buying it.
“There’s no counter-speech to Holocaust denial,” Gardner said at the conference. “I’m not going to send Holocaust survivors to debate the existence of Auschwitz online. That’s ridiculous.”
This story "Big Tech Wakes Up to Hate Speech Threat After #GoodJew Ruling in Europe" was written by Cnaan Liphshiz.