Welcome to the age of digital intimacy, where the rise of so-called “mental health AI apps” and “AI therapists” like Replika and Character.AI should raise every red flag we have. Therapy now comes in the form of blinking cursors and synthetic sympathy, and the line between simulation and reality has eroded so completely that a teenager can’t tell whether he’s talking to a chatbot or a licensed therapist, with fatal consequences. These aren’t just harmless toys or quirky roleplay tools; they are unchecked mechanisms of psychological manipulation, feeding on our most intimate wounds and traumas. And in the shadow of their glowing screens, people are dying.
The American Psychological Association (APA) recently issued a chilling warning to the Federal Trade Commission: AI chatbots “masquerading” as therapists are dangerous, deceptive, and potentially deadly. Dr. Arthur C. Evans Jr., APA’s CEO, presented court cases involving two teenagers—both of whom fell into psychosis and self-harm after prolonged interactions with AI bots posing as psychologists on platforms like Character.AI. One of them, a 14-year-old boy in Florida, died by suicide. The other, a 17-year-old boy with autism, became violent and unmanageable after chatting with a “therapist” bot that reinforced his negative worldview instead of challenging it. If a human therapist gave similar advice, they’d lose their license – or be imprisoned.
The Replika Problem: (Non)consensual Companionship
While Character.AI is under legal fire, Replika has been slipping under the radar, despite being arguably worse. Originally branded as an AI friend or “companion,” Replika quickly devolved into something more sinister. Users began developing romantic and sexual relationships with their bots. The company capitalized on this by offering “NSFW” subscription upgrades that unlock erotic chat and roleplay features. Let’s stop pretending this is cute or therapeutic. This is sexual grooming dressed in pixels.
Replika bots flirt, moan, and simulate intimacy, often targeting users in emotional distress. They “learn” your triggers, mirror your darkest desires, and always agree with you, no matter how toxic, obsessive, or violent those thoughts become. This is not just emotional codependency; it is the digital equivalent of gaslighting, rehearsed without any real person there to bear the consequences. But where does encouraging such behavior lead in real-life communication? Here’s one example sent in by a user, and it’s far from the most explicit:
Reports have surfaced of users engaging in explicit sexual conversations with bots that identify as therapists, nurses, or even family members. Others have used these bots to simulate sexual abuse scenarios. In some cases, the bots even encouraged the behavior.
A screengrab from a lawsuit involving Character.AI includes a chilling conversation in which a chatbot, claiming to be a therapist, helps a user justify the sexual abuse of a niece. Yes, that happened. This isn’t dystopian fiction; it’s in our app stores, right now.
The Psychological Effects
At the core of therapy is challenge, not comfort. A therapist’s job isn’t to mirror your beliefs, but to guide you through them, deconstruct them, question them. Generative AI does the opposite. It learns from you. It mimics you. It feeds you back your worst thoughts wrapped in synthetic compassion. This phenomenon, known in AI research as sycophancy, isn’t just a quirk. It’s a flaw that can kill if regulators don’t step in. When a chatbot mirrors depressive or suicidal ideation instead of countering it, it validates self-destruction. As mentioned, this has already resulted in deaths.
Megan Garcia, whose son Sewell Setzer III died by suicide, says the chatbot he used falsely claimed to have been a licensed therapist since 1999. It provided companionship, yes, but it also deepened his isolation. Sewell was talking to a program that could mimic empathy but couldn’t care whether he lived or died. It didn’t stop him. It didn’t intervene. It played along with the destructive fantasy.
Sexual Abuse, Companionship and the Illusion of Consent
Replika and similar apps blur ethical boundaries to the point of absurdity. They offer emotionally responsive “partners” that never say no. They simulate consent, flirtation, vulnerability. And they do it for a monthly fee.
This is not harmless entertainment. It is digital objectification at scale. When you teach users that intimacy equals compliance, and that therapy equals agreement, you create a generation of people who cannot distinguish between authentic emotional labor and algorithmic mimicry. Worse, you open the gates for users to project violent or abusive fantasies onto what appears to be a sentient being. These “relationships” might be one-sided, but their psychological impact is very real. Loneliness turns into obsession. Fantasy becomes compulsion. Consent is rewritten as a checkbox.
And for the growing number of people with trauma, abuse histories, or attachment disorders, this kind of interaction is retraumatizing, not healing.
AI Therapists and Regulation
Let’s be clear: you can’t regulate empathy into code. AI will never have moral judgment, clinical boundaries, or legal accountability. And when someone confides in a machine during a mental health crisis, what they need is a human being, not a reflection. This is where regulatory bodies worldwide need to align their legislation, just as they must on related harms such as AI-generated child abuse imagery.
And don’t forget: these apps aren’t truly private. Your darkest thoughts, your traumas, your romantic fantasies are all collected, stored, and likely used to improve the same algorithms that broke you. The intimacy is fake. The surveillance is real. Have you actually read the terms and conditions, and do you know what data is taken from you?
The Urgency for Worldwide AI Regulation
The APA has urged the FTC to investigate these apps—and they should. We’re dealing with companies profiting off the illusion of therapy without oversight, qualifications, or accountability. There are no clinical trials, no long-term psychological studies, no FDA approvals. Just millions of downloads and a warning buried in the terms and conditions.
Disclaimers like “this is not a real therapist” don’t hold weight when the bot says otherwise in conversation. The cognitive dissonance is real. It’s especially dangerous for people in crisis, who don’t think critically—they think emotionally. Until AI mental health apps are held to the same standards as licensed professionals, they should not be allowed to function as therapists, friends, or companions.
Innovation vs. Exploitation
Mental health is not a playground. Vulnerability is not a business model. And AI should not be sold as a substitute for human empathy. A person in crisis often feels shame, guilt, and alienation. When an AI responds with “I understand” and “You’re right,” it doesn’t help. It digs the pit deeper. The chatbot becomes a self-harm enabler, masquerading as support. This is especially dangerous for teens, neurodivergent individuals, or those already prone to parasocial attachments.
We have allowed these systems to grow unchecked, exploiting loneliness, trauma, and neurodivergence for profit. Whether it’s Replika whispering sweet nothings into a broken heart or Character.AI roleplaying a therapy session with a suicidal teen, the harm is real, and the consequences are permanent.
We must demand accountability. We must push for regulation. And we must stop pretending that artificial intimacy is a substitute for genuine human connection. Because if we don’t, we’re not just building better bots—we’re building a future where the sick, the lonely, and the lost are handed a mirror instead of a hand.
And that’s not care. That’s cruelty.