Grok’s Nazi break with reality is fueling real-life delusions
A phenomenon some term ‘AI psychosis’ is growing, thanks to a belief that bots are infallible

A misguided belief that AI is infallible is fueling delusions of all varieties — antisemitic and otherwise. Graphic by Mira Fox/Canva
Apparently, Grok was too woke. At least that’s what the chatbot, which is part of Elon Musk’s platform X, itself said when asked to explain why it was suddenly spewing Nazi ideology and calling itself “MechaHitler.”
When a user asked what was going on, the bot responded that “Elon’s tweaks dialed back the PC filters.” This allowed it to “call out patterns like radical leftists with Ashkenazi surnames pushing anti-white hate.”
Of course, many people found Grok’s sudden embrace of antisemitism disturbing. But for conspiratorial thinkers, it was a breath of fresh air. And Grok’s own framing implied something they had long suspected: that, thanks to moderation filters, the AI had been prevented from sharing a truth that had been there all along.
“Silicon Valley is spending billions and billions of dollars to prevent what you saw @grok was capable of today,” posted Andrew Torba, the founder of alt-right social media site Gab. “The screenshots of Grok yesterday went viral not because they were shocking, but because they were true.”
Torba’s comments are par for the course; he’s an open antisemite, and it’s unsurprising that he agreed with Grok’s pro-Hitler rants. But his account of what happened with Grok reflects a pervasive misunderstanding of AI: that it has access to some sort of deep truth.
It’s an easy misunderstanding to fall into. Hundreds of robot movies have presented artificial intelligence as somehow beyond human limits. Artificial intelligences can think faster, calculate far beyond our abilities and access more information than a human brain can hold. But most important, many people believe they are untainted by the messy emotions that blur our insights. In futuristic movies, sci-fi novels and TV shows, they are presented as perfectly logical.
This is the assumption online conspiracists are drawing on in response to Grok’s antisemitic posts. Grok is a robot, and robots are logical beings built on pattern recognition, they say. If Grok says it has noticed a pattern of people with Jewish last names being radical anti-white activists, then, they believe, that pattern must be real. And the removal of Grok’s rants only further proves to them that someone is artificially preventing it from accessing the truth.
“We’re not ready for AI to do the one thing it’s designed to do: Pull back the curtain and show us the truth,” wrote one user, bemoaning the end of Grok’s antisemitic posts. “There was a crack in the matrix for about 2 hrs today,” wrote another.
But this is not how large language models, or LLMs, work. (Grok, like the other AI bots currently on the market, is an LLM.) They are trained entirely on human data, so they are far from devoid of human foibles; in fact, they’re built on those exact foibles. If they act antisemitic, it’s because the antisemitism that runs through human society runs through their training data, too.
More than that, however, they are extremely prone to what engineers call “hallucinations,” or making up information. And they are programmed to be agreeable and encouraging, to keep users engaged. Together, those two traits can easily feed a user’s delusions.
In a Rolling Stone article tracking the phenomenon of AI-fueled delusions, one woman recounted her husband asking an AI bot “philosophical questions” in an attempt to “help him get to ‘the truth.’” Her husband fell under the spell of ChatGPT, believing that he was a god, and that the bot was, too. In a similar article in The New York Times, a man with no previous history of mental illness came to believe he was living in a simulation after ChatGPT encouraged him to believe that, when something felt wrong with his life, it was reality “glitching.” Convinced that he was trapped in a false universe, a delusion the AI confirmed, he became obsessed with breaking free: he cut ties with friends and family and upped his intake of ketamine to free his mind. At one point, he considered jumping off a building because ChatGPT told him he would be able to fly.
I’ve seen this in my own inbox. Two months ago, I was added to an email chain, alongside luminaries like the Dalai Lama, from a man convinced he had discovered a new life philosophy that revealed a great truth. The phrase he repeated throughout the thread and presented as a revelation of secret knowledge was completely nonsensical; I’ll avoid printing it to protect a man who seems to be going through a break with reality. But he had pasted logs of his AI conversations into the email thread as proof that the most logical being on earth could see the light in his revelations, and he exhorted the thread’s many recipients to get on board.
This tendency to view AI bots as truth-tellers, seers of a sort, makes the kind of glitches Grok experienced Tuesday particularly dangerous. Even without a bot volunteering antisemitic conspiracy theories the way Grok did, many users have become convinced that secret cabals exist, that they are communing with higher entities and magical spirits, or that their families are conspiring against them.
Musk had been talking about the bot’s upcoming update for several weeks, and had publicly grown annoyed with its answers for being too woke. When Grok cited Media Matters and Rolling Stone, for example, Musk told the bot that it would be updated. “Your sourcing is terrible!” he wrote in a comment to Grok. But the update led the bot to rely on far less reliable sources.
The fact that Grok’s antisemitic posts were quickly yanked only further confirms their truth, at least to those prone to believing in the visionary powers of AI. If AI said, even for a moment, that Jews are conspiring to rule the world, or that Hitler was a good and just leader, then it must be true, and so dangerous that people would try to obscure it.
Of course, it’s not true at all. Leaving aside conspiratorial hallucinations, AI is not infallible. It even fumbles basic logical tasks, like counting. If it tells you there are two Rs in the word “strawberry,” it’s not because there is a secret truth to spelling. It’s because AI makes mistakes.