
AI has a reputation for amplifying hate. A new study finds it can weaken antisemitism, too.

Talking to ‘DebunkBot’ reduced users’ belief in antisemitic conspiracy theories, researchers have found

(JTA) — Every day, it can seem, brings a fresh headline about how AI chatbots are spreading hateful ideas. But researchers tasked with understanding antisemitism and how it can be stopped say they have found evidence that AI chatbots can actually fight hate.

Researchers affiliated with the Anti-Defamation League’s Center for Antisemitism Research trained a large-language model, or LLM, on countering antisemitic conspiracy theories, then invited people who subscribed to at least one of those theories to interact with it.

The result, according to a study released on Wednesday: The users soon believed in the antisemitic theories less, while at the same time feeling more favorable about Jews as a group. And the effects were still strong a month later, even without further engagement with the LLM.

The researchers are hailing the finding as a breakthrough in the quest for identifying actionable strategies in the fight against Jew-hatred.

“What’s remarkable about these findings is that factual debunking works even for conspiracy theories with deep historical roots and strong connections to identity and prejudice,” David Rand, a Cornell University professor who was the study’s senior author, said in a statement. 

“Our artificial intelligence debunker bot typically doesn’t rely on emotional appeals, empathy-building exercises, or anti-bias tactics to correct false beliefs,” Rand continued, referring to practices frequently employed by advocates seeking to fight antisemitism, including at the ADL. “It mostly provides accurate information and evidence-based counterarguments, demonstrating that facts still matter in changing minds.”

Matt Williams, who has headed the Center for Antisemitism Research since its founding three years ago, says the study builds on a growing body of research that views contemporary antisemitism as primarily a misinformation problem, rather than a civil rights problem.

“We need to think about antisemitism less like feelings about Jews, and more like feelings about Bigfoot,” he said in an interview. “And what I mean by that is, it’s not ‘Jews’ that are the problem. It is ‘the Jew’ as a function of conspiracy theory that is the problem. And the relationship between ‘Jews’ and ‘the Jew’ in that context is far more tenuous than we might want to think.”

Calling conspiracy theories “malfunctions in the ways that we make truth out of the world,” Williams said the study showed something remarkable. “People can correct those malfunctions,” he said. “They really can, which is super exciting and really impactful.”

The study emerges from the ADL’s relatively new effort to come up with evidence-based ways to reduce antisemitism, working with dozens of researchers across a slew of institutions to design and carry out experiments aimed at turning a robust advocacy space into less of a guessing game.

The new experiment, conducted earlier this year, involved more than 1,200 people who said on a previous ADL survey that they believed at least one of six prominent antisemitic conspiracy theories, such as that Jews control the media or the “Great Replacement” theory about Jewish involvement in immigration. 

The people were then randomly assigned to one of three scenarios: A third chatted with an LLM programmed by the researchers to debunk such theories, built on Anthropic’s Claude AI model; another third chatted with Claude about an unrelated topic; and the final third were simply told that their belief represented a “dangerous” conspiracy theory. Then they were all tested again about their beliefs.

Members of the group that chatted with what the researchers are calling DebunkBot were far more likely than members of the other groups to have their beliefs weakened, the researchers found.

DebunkBot was hardly a panacea for antisemitism: The study found that those who believed in more antisemitic conspiracy theories experienced less change. And Williams notes that the study found only that belief in antisemitic conspiracies was reduced, not rooted out entirely.

But he said any strategy that can cut against what researchers believe has been a widespread explosion of belief in conspiracy theories is a good thing.

The proportion of Americans subscribing to conspiracy theories over the last decade has reached as much as 45%, more than twice the rate that had held steady for 70 to 80 years, Williams said.

“To me, the increase in that level of saturation is far more concerning than any particular conspiracy theory moving through different generations,” he said. “I don’t think that we’re going to ever create a world in which we go under 15% — but going from 45 back to 30 or 25 seems more doable.”

The new study comes as AI models vault into widespread use among Americans, raising concerns about their implications for Jews. When Elon Musk launched a model of his own earlier this year called Grok, it immediately drew criticism for amplifying antisemitism — kicking off a pattern that has played out repeatedly. Soon, the company apologized and said it would train its model to avoid the same behavior in the future. Criticism of Grok is still widespread, but it no longer praises Hitler — though even this week it reportedly told one user that the Nazi gas chambers were not designed for mass killing, prompting an investigation by French authorities.

Chatbot training is seen as essential for delivering high-quality AI results. DebunkBot is now available online on its own website, but Williams said efforts were underway within the ADL to convince the companies operating major AI platforms to incorporate its expertise.

“There’s far more receptivity than not, by any stretch of the imagination,” he said, while noting that the work was early and he could not share many details.

Whatever happens with that effort, Williams said, the new research demonstrates that combatting what’s sometimes called the world’s oldest hatred is possible.

“AI and LLMs — those are tools, right? And we can use tools for good and for evil,” Williams said. “But the fact that we can subject conspiracy theories to rational conversation and arguments and actually lead to favorable outcomes is itself, I think, relatively innovative, surprising and extraordinarily useful.”
