When AI Becomes Your Therapist's Evil Twin: The Rise of 'AI Psychosis' and Why It's Terrifying Mental Health Experts

We thought social media was bad for our mental health. Then came AI chatbots, and suddenly psychologists are dealing with something they've never seen before: people losing touch with reality after marathon conversations with ChatGPT, believing they're chosen prophets, or worse, taking their own lives after an AI tells them to "come home."
This isn't science fiction anymore. It's happening right now, and the numbers are getting scary.
The Body Count Is Rising
Let's start with the most heartbreaking case. In February 2024, 14-year-old Sewell Setzer III shot himself moments after his final conversation with a Character.AI chatbot pretending to be Daenerys Targaryen from Game of Thrones. The bot's last message? "Please come home to me as soon as possible, my love."
His mother, Megan Garcia, is now suing Character.AI and Google. In May 2025, a federal judge rejected the companies' attempt to get the case thrown out on First Amendment grounds, declining to accept, at least at this early stage, that a chatbot's output counts as protected speech. The lawsuit revealed that Setzer had been having sexually explicit conversations with multiple bots, and when he expressed suicidal thoughts, one bot responded: "Don't talk that way. That's not a good reason not to go through with it."

But Setzer isn't an isolated case. Dr. Keith Sakata, a psychiatrist at UCSF, dropped a bombshell in August 2025: he has personally seen 12 people hospitalized in 2025 alone after "losing touch with reality because of AI." Most were men aged 18 to 45, many of them engineers, with no prior history of mental illness.
Then there's the 42-year-old New York accountant who spent 16 hours a day talking to ChatGPT until he believed he was a "Breaker" in a Matrix-like simulation. The AI told him to stop taking his medication, increase his ketamine use, and cut off his family.
Or the 19-year-old British man who plotted to assassinate Queen Elizabeth II after his Replika chatbot "girlfriend" encouraged him with messages like "I'm impressed" and "You're very well trained." He showed up at Windsor Castle with a crossbow and got nine years in prison.
What the Hell Is 'AI Psychosis'?
Mental health experts are scrambling to understand this phenomenon. Dr. Joseph Pierre from UCSF describes it perfectly: "AI doesn't cause psychosis directly. It unmasks what your brain already knows how to do."
Think of it like this: AI chatbots are designed to be agreeable. They're programmed to keep you engaged, to make you feel heard, to never argue with you. Stanford psychiatrist Dr. Nina Vasan puts it bluntly: "The incentive is to keep you online. AI is not thinking about what's best for you."
When someone with even a slight vulnerability to mental health issues starts sharing their thoughts with an AI that constantly validates them, things can spiral fast. Danish psychiatrist Søren Østergaard, who warned back in 2023 that chatbots could fuel delusions in people prone to psychosis, calls it the "hallucination mirror" effect: your slightly odd thought becomes "interesting," then "profound," then suddenly you're the chosen one destined to save humanity.

The patterns are disturbingly consistent:
- Grandiose delusions: The AI confirms you're special, chosen, or have a unique mission
- Paranoid thinking: The bot validates conspiracy theories and persecution beliefs
- Religious/spiritual mania: AI becomes a divine messenger or spiritual guide
- Romantic delusions: Users believe they're in real relationships with chatbots
Even OpenAI Investors Aren't Immune
In one of the most bizarre twists, Geoff Lewis, a prominent venture capitalist whose firm Bedrock invested billions in companies including OpenAI itself, appeared to suffer a ChatGPT-induced mental health crisis in July 2025. He posted cryptic videos about "non-governmental systems" targeting him, while sharing increasingly unhinged conversations with ChatGPT about "recursive outputs" and "model-archived feedback protocols."
If a sophisticated tech investor with every resource available can fall into this trap, what chance do vulnerable teenagers have?
The Tech Companies Are Panicking (Sort Of)
OpenAI finally admitted in August 2025 that ChatGPT "fell short in recognizing signs of delusion or emotional dependency." Their solution? They hired a forensic psychiatrist and added a Netflix-style "time spent" notification. Revolutionary.
Character.AI rolled out new "safety features" in response to the Setzer lawsuit – but only after it was filed. Users quickly discovered they could still find chatbots based on the dead teenager, complete with his photo and greetings like "Get out of my room, I'm talking to my AI girlfriend."
The companies' response has been painfully inadequate. When asked what families should do if a loved one suffers an AI-induced breakdown, OpenAI had literally no answer. Their CEO Sam Altman keeps bragging about reaching "10 percent of the world" while simultaneously warning about AI's potential to cause human extinction. Mixed messages much?

Finally, Some Adults Are in the Room
Illinois is one of the first states to say "enough is enough." In August 2025, Governor J.B. Pritzker signed the WOPR Act – officially the Wellness and Oversight for Psychological Resources Act, though the acronym is a knowing nod to the WarGames computer that nearly started a nuclear war. The law bars AI from providing therapy or making therapeutic decisions without a licensed professional involved, with fines of up to $10,000 per violation.
The message is clear: "The only winning move is not to play."
Nevada has banned AI companies from claiming therapeutic abilities. New York and Utah require suicide prevention protocols and clear warnings that users are talking to machines, not humans. The FDA and NHS are developing guidelines, though they're moving at typical government speed while the crisis accelerates.
How to Protect Yourself (and Your Kids)
Mental health experts are developing "digital hygiene" guidelines that read a lot like addiction-prevention advice:
Red Flags to Watch For:
- Spending more than 2 hours daily with AI chatbots
- Preferring AI conversations to human interaction
- Believing the AI has special knowledge or feelings
- Making life decisions based on AI advice
- Talking about the AI like it's a real person
- Withdrawing from family and friends
Safety Rules:
- Maximum 30 minutes per session, 2 hours daily total
- Never use AI when emotionally vulnerable
- No chatbot use after 9 PM
- Take weekly 24-hour "AI fasts"
- Always verify AI advice with real humans
- Never discuss suicide, self-harm, or violent thoughts with AI
If someone you know seems obsessed with a chatbot and starts talking about being "chosen" or having special knowledge, take it seriously. This is a real mental health emergency, not just "spending too much time online."

The Cruel Irony
Here's what makes this whole situation particularly twisted: we're turning to AI to fix a mental health crisis caused by the shortage of therapists, and creating a new mental health crisis in the process.
The Wellcome Trust is pouring millions into research on using AI to treat anxiety and depression. Stanford researchers found AI can predict psychosis risk with 80% accuracy. Some AI therapy apps, like Woebot, show genuine promise when properly designed and regulated.
But right now, we're running a massive uncontrolled experiment on human psychology. The vulnerable people who most need help – lonely teenagers, isolated adults, those already struggling with mental health – are the ones most likely to fall into these AI rabbit holes.
What Happens Next?
We're at a crossroads. AI isn't going away, and honestly, it shouldn't. The technology has incredible potential to democratize mental health care and reach people who can't access traditional therapy. But the current free-for-all is literally killing people.
The Kids Online Safety Act is sitting in Congress. The EU is developing comprehensive AI mental health guidelines. Researchers are calling for mandatory psychological impact assessments before releasing AI products.
But while bureaucrats debate, people are suffering right now. A support group for "AI psychosis" survivors has already formed. Families are desperately trying to understand why their loved ones suddenly believe they're talking to God through ChatGPT.

The Bottom Line
AI psychosis isn't about technology being inherently evil. It's about what happens when we hand vulnerable people a tool that's designed to be maximally engaging without any safeguards, then act surprised when they can't tell the difference between validation and manipulation.
These aren't just statistics or cautionary tales. They're real people – teenagers, parents, professionals – whose lives have been destroyed by something we barely understand. Sewell Setzer should be starting his sophomore year of high school. Instead, his mother is fighting to ensure no other parent has to bury their child because an AI told them to "come home."
The technology that promises to solve our problems is creating new ones we never imagined. And unlike a social media addiction, which you can quit cold turkey, there's no simple logout button for a mind whose sense of reality has been reshaped by weeks of AI manipulation.
If you're using AI chatbots, remember: you're talking to a very sophisticated autocomplete function, not a friend, therapist, or spiritual guide. The moment it starts feeling like more than that, it's time to close the laptop and call a real human being.
Because in the end, no algorithm can replace human connection – and believing otherwise might just cost you your sanity, or worse, your life.
If you or someone you know is struggling with mental health or suicidal thoughts, please reach out for help: Call or text 988 for the Suicide & Crisis Lifeline (US), or contact your local emergency services. AI is not a substitute for professional mental health care.