Facebook admitted at the end of October that its trending algorithm couldn’t tell true stories from false.
Google doesn’t know the difference between a real Jew and a Nazi stereotype. Type “Jew” into Google Images and you get a wide variety of Nazi and other anti-Semitic caricatures. And if you Google “Is the Holocaust real?” three of the top five sites the search engine suggests are so counterfactual they would probably be illegal in Germany.
And for Apple, if you ask Siri “Is Dan Friedman a Jew?” she’ll tell you “No, he’s a nice guy.”
So the flagship interfaces, the nascent artificial intelligences developed by the three biggest and, by most meaningful measures, best English-speaking internet companies in the world, can’t tell truth from (in some cases racist) fiction.
Plus, of course, it took users only a matter of hours to turn Microsoft’s experimental “teen girl AI” interface Tay into a Hitler-loving sex robot.
So it’s all very well telling our kids and our citizens to escape the internet echo chamber of half-truths and lies. (A recent study suggests that we believe lies, even when we know they’re false, if they are repeated often enough.) But short of reading only Wikipedia, the Forward and The New York Times, how do we expect them to succeed where the assorted billions of dollars and world-class smarts of Apple, Google, Microsoft and Facebook have failed?
Well, first it’s worth looking at why these interfaces fail. Google’s initial genius lay in its leverage of popularity. PageRank, the basis of its early success, treated each link to a page as a vote for it: a page that attracted more links, especially from other well-linked pages, was judged more useful than its rivals. As far as anyone can tell, popularity signals of this kind are still the basis of its algorithm, and of the algorithms used by the other internet behemoths. If 20 people tell Tay or Siri that Jews are vermin and only one person says that they are an ethnically and religiously varied set of people with a fascinating and wonderful set of intricately woven heritages, Tay calls Jews vermin.
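The popularity logic at work here can be sketched as a toy PageRank. (This is illustrative only: the page names and link graph are invented, and real search ranking uses many more signals than this.)

```python
def pagerank(links, damping=0.85, iters=50):
    """Toy PageRank: a page's score grows with the scores of the pages linking to it.

    links maps each page to the list of pages it links to.
    """
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        # Every page keeps a small baseline score...
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            if outs:
                # ...and passes the rest of its score to the pages it links to.
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # A page with no outgoing links spreads its score evenly.
                for q in pages:
                    new[q] += damping * rank[p] / len(pages)
        rank = new
    return rank

# Hypothetical graph: two pages link to "amazon", one links to "bookculture".
graph = {"amazon": ["bookculture"], "blog": ["amazon"], "bookculture": ["amazon"]}
ranks = pagerank(graph)
```

The point of the sketch is the feedback loop: because more pages point at “amazon,” it ends up with the highest score, which in turn sends it more traffic and more links. Popularity, not accuracy, is what the loop rewards.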
This means that demographic majorities and major corporations benefit (more people visit Amazon than Book Culture, so Google sends more people to Amazon, so more people visit Amazon, and so on). Conversely, minorities, niche providers and oppressed groups are harmed not only by vicious state actors like China, Russia or Saudi Arabia, but also by the supposedly neutral private facilitators of the web.
The web has difficulty checking its virtual reality against the actual messy meatspace that exists beyond its databases. In its disregard for truth, the internet helps haters.
My friend Sally, who worries about this a lot, thinks that we need government intervention to prevent lies and promote truth: wield a stick by passing laws that regulate what’s fair to publish (e.g., no Holocaust denial) and dangle the carrot of a Wikipedia on steroids — maybe even a Safenet: a whole mini-internet populated solely by providers whose truthfulness and suitability for schoolchildren can be trusted. In that dream, the Safenet would be vetted by a network of respected academics and funded by philanthropists.
There’s clearly some will to implement that beyond my friend group, but it sounds a lot like how the Chinese Communist Party solved the problem of the Internet. Because of the scale and flexibility of the Internet, this sort of solution seems either tyrannical (if effective) or impractical (New York public school students are experts at evading their schools’ barriers to the wider net).
There are three main types of non-truthful websites: those that misrepresent for comedy, for ideology and for malice. The first is notoriously difficult to measure, even for human audiences. The second is difficult to apportion, as Fox News and MSNBC would no doubt yell. And the third is difficult to pin down, as one jihadi or White Power site could easily and quickly replace another, evading enforcement by law or algorithm.
Whether through statute, algorithm or philanthropy, we need the Internet to work better, even if it becomes slower or smaller. There are fixes to this problem, and they include building signals into the algorithms that valorize truth and quality alongside popularity. That might make AIs slower to learn, but we can’t afford a plethora of Hitler-loving teen AIs guiding us around the internet.
Dan Friedman is the director of content and communications at the Shalom Hartman Institute of North America. Formerly the executive editor and whisky correspondent of the Forward, he is the author of an illuminating (and excellent value) book about Tears for Fears, the 80s emo rock band.