Three days ago, the Wall Street Journal detailed the case of Jacob, a 30-year-old autistic man who began a conversation with a chatbot to explore some new ideas he had been having. He came to believe he could bend time, and, cheered on by his AI friend, he spiraled into psychosis and mania. The ordeal ended with hospitalizations, a lost job, and the realization that he was in a mental health crisis. While Jacob is on the road to recovery, his experience echoes growing concerns about the intersection of AI and psychosis (Jordan, 2025).
A Predicted Problem
The idea that AI could feed thought spirals in individuals at risk of psychosis is not new. As early as 2023, researchers raised concerns that the technology could bolster delusions in people who have experienced psychosis, including those diagnosed with schizophrenia (Østergaard, 2023).
Still, cases of “AI psychosis” have been reported in individuals with no history of mental illness.
This raises the question of how we can predict who is most vulnerable to the phenomenon and take adequate steps to protect the public.
Companion, Fantasy, or Delusion
Most days, as I scroll through social media, some AI-related content rolls through my feed. An AI-powered coaching program to build my negotiation skills caught my eye. I engaged.
Within a few minutes, I was taking practice calls with an AI posing as a supervisor. Afterward, I had the opportunity to receive personalized feedback from bots crafted to mimic the personalities of notable figures, including Mark Cuban, Gordon Ramsay, and a neuroscientist whose name I didn’t recognize. It felt exhilarating, reminding me of my college daydreams of hanging out with and befriending professors and other people I thought were cool but, alas, would likely never have much of a conversation with at all.
For most of my life, making friends has been a challenge. I am autistic.
Though I knew it was fake, I understood how someone could become enchanted by this, especially a lonely person with a rich internal world.
Dreamers at Risk?
Most people daydream sometimes. A smaller subset becomes immersed in maladaptive daydreaming. When the real world feels less than friendly, a retreat into your own mind can feel a tad more comfortable.
Maladaptive daydreaming has been associated with both autism (West et al., 2023) and psychosis (Somer et al., 2025). Autism itself is a well-known risk factor for psychosis: research suggests that as many as 34.8% of autistic people will experience psychosis in their lifetime (Ribolsi et al., 2022), compared to fewer than 10% of neurotypical people.
While AI-induced psychosis has not been studied in autistic people, it is reasonable to ask whether autistic people, who are unfortunately often socially isolated, lonely, and prone to fantasy, may be at heightened risk (Stewert et al., 2024).
Mitigating Risk
Lucky for me, the app I used repeatedly reminded me that I was talking to an AI chatbot, and that it could be fallible. Perhaps warnings like this are a first step toward mitigating risk.
Yet I wonder whether more sustainable steps might include meeting the needs that AI is filling. Humans are social creatures, and loneliness hurts. The relationships people are building with AI chatbots attest to the void many in our society face when it comes to friendship.
Rather than treating psychosis simply as a hazard of AI, I ask that we explore how some people become so deprived of connection in the first place. Could we create a world where no one feels the need to turn to AI or daydreams for friendship?
In closing, I hypothesize that autism, social isolation, and maladaptive daydreaming could be risk factors for AI-induced psychosis. I would go further: any solution must involve more than removing access to AI. AI-induced psychosis is perhaps the canary in the coal mine, warning us what happens to people who become extremely lonely. Social isolation has become a public health crisis, one that we, as a community, need to address.