In a world increasingly dominated by AI, data privacy, mental health apps, and digital immortality, a new frontier is emerging: AI-powered griefbots, or deadbots. As AI technologies simulate lost loved ones and make the pain disappear for a little while, critical questions about consent, digital remains, and psychological impact rise to the surface. This article explores how tech companies are commodifying memory, manipulating grief, and redefining death in the age of chatbots.
Who Owns Your Ghost in the Digital Afterlife?
By all appearances, you’re gone. But your ghost lingers on webpages, social media profiles and, apparently, in chatbots. It replies to text messages. It gives relationship advice. It still wants you to buy carbonara from a food delivery app via that ad in the corner. Welcome to the digital afterlife – an unregulated, AI-driven hellscape where your data is more alive than you are, and your dignity ends where a chatbot’s monetization begins.
The Digital Remains
Profiles on Facebook and Instagram, WhatsApp chats, voice notes, emails, videos – everything we touch and type online becomes sediment in the reservoir of “digital remains” on the Internet. And companies are already mining that data. Platforms like Project December have commodified grief by offering conversational simulations of the dead, known as “griefbots” or “deadbots,” trained on the real data left behind. These aren’t sentimental archives; they’re algorithmic puppets speaking in your dead grandmother’s voice while selling you takeout through paid partnerships, sponsorships, or the odd cosmetics ad.
OpenAI and others have reluctantly admitted these simulations pose ethical issues, requiring either explicit consent or clear labelling when an AI “re-creates” someone. But enforcement is laughable. At this point, there is no regulation that tackles this problem, and legislation simply cannot keep pace with the technology.
Ghosts for Sale
Services like MaNana, Stay, and Paren’t are exactly such griefbots, offering comfort with one hand and commodified necromancy with the other.
In my articles, I don’t sugarcoat. And in my opinion, this is resurrection for profit. Whether it’s a grandmother advising you through WhatsApp or a parent programmed to comfort a grieving child, the dead are being sold back to the living. Not thoughtfully, safely, or to genuinely help, but for a monthly fee, and sometimes with embedded ads.
In one illustrative case, the MaNana app allowed a user to simulate her deceased grandmother, inserting product placement into the dialogue. One moment you’re getting cooking tips from Nana, the next she’s suggesting carbonara from Wolt or Uber Eats. This isn’t healing; it feels more like the exploitation of grief.
Legacy Profiles
Even the more “mature” platforms are complicit. For example, Facebook’s legacy profiles let users appoint a “digital heir” to curate their online presence after death, as described in their Support pages. Sounds respectful, until you realize Meta still controls the data. You’re not preserving memory; you’re extending data collection. These legacy profiles may let loved ones post tributes or change a profile picture, but they offer no guarantee against misuse or monetization. Facebook remains the landlord of your digital corpse.

The Psychological Impact
Then there’s the psychological rot. Grieving is a raw, nonlinear process that can last a long time and severely impact daily life. Introducing AI into it creates confusion, dissonance, and emotional dependency. What happens when a grieving child is told their dead mother will “always be there for them” by a bot trained to speak in her voice?
Services like Paren’t imagine a therapeutic role for deadbots, but the risks are staggering. Children anthropomorphize easily. They believe in digital immortality long before they understand death. And now they’re being handed curated simulacra of the dead as companions, with no regulation, no oversight, and no proof of psychological safety. Just look at this figure from ResearchGate, which speaks to this problem and its consequences:

Even for adults, the psychological consequences are not benign. Repeated interaction with deadbots risks turning grief into a loop, where mourning never resolves, only mutates into dependency. Users may delay acceptance, continually reaching out to digital ghosts in the hope of a closure that will never arrive. The illusion of presence prolongs emotional limbo rather than nudging people towards real therapeutic help. We have already written about the impact of AI on dating apps and the growing risks.
And what of those who feel obligated to engage? When AI-generated avatars begin sending reminders, messages, or notifications, as speculative platforms like Stay illustrate, the line between comfort and coercion blurs. Some users experience guilt for disengaging. Others feel stalked by the digital echo of someone they were trying to let go of, once they have grown used to confiding their daily troubles in another entity.
The Ethical Abyss
This industry thrives in a regulatory vacuum. Postmortem privacy is practically non-existent. Data donors rarely consent to becoming griefbots, and living users often have no way to opt out once the bot has been activated. In some cases, grieving relatives receive spam-like reminders from the deceased, as seen in the speculative Stay app case. Not even death can stop the notifications.
Towards a Humane End
There are ways out, but they involve real ethical dilemmas, and making them coherent at a worldwide level while protecting users seems a bit far-fetched for now. Let’s briefly look over some options and the questions they pose:
- Mutual consent: No deadbot without explicit permission from both the data donor and the intended recipient. Consent must be documented, revocable, and obtained without coercion.
- Dignified retirement: Deadbots should be deletable and their digital remains treated with the reverence we give to human remains. Allow users to perform rituals of deletion. Let mourning include silence.
- Age restrictions: Children should never be the target users of griefbots. Ever. These tools should be regulated like, for example, medical devices when they target emotional wellbeing.
- Transparency: Users must be told what they are interacting with, what data was used, who trained the model, and what risks are involved. No deception, no mystique.
- Non-commercial zones: No ads. No product placement. No monetizing the memory of the dead. If your dead grandmother recommends pizza, the developers have already lost the ethical plot.
- Opt-out options: Anyone interacting with a deadbot must be able to end that interaction permanently, easily, and respectfully. No loops. No guilt. No spam from the beyond.
- Platform accountability: Tech companies should be legally obligated to maintain clear audit trails for how postmortem data is used. Regulatory oversight should be mandatory, not optional; a minimal sketch of how such consent and audit records might look follows this list.
- Cultural adaptability: Different cultures mourn differently. No one-size-fits-all approach. Platforms must allow users to customize the afterlife experience—or opt out entirely.
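To make the consent, opt-out, and accountability points a little more concrete, here is a minimal sketch of how a platform might record mutual, revocable consent together with an append-only audit trail. It is purely illustrative: the class names, fields, and actions below are my own assumptions, not the API of any existing griefbot service, and a real system would need far more (identity verification, regulator access, actual data deletion, and so on).

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

# Hypothetical model of a documented, revocable consent record plus an
# append-only audit trail. Every name here is illustrative, not a real API.

@dataclass
class AuditEvent:
    timestamp: datetime
    actor: str          # "donor", "recipient", "platform", "regulator"
    action: str          # e.g. "consent_granted", "consent_revoked"
    detail: str = ""

@dataclass
class DeadbotConsent:
    donor_id: str        # the person whose data would train the bot
    recipient_id: str    # the person the bot would talk to
    donor_consented: bool = False
    recipient_consented: bool = False
    revoked: bool = False
    audit_log: List[AuditEvent] = field(default_factory=list)

    def _log(self, actor: str, action: str, detail: str = "") -> None:
        # Append-only record of every change, for regulators and relatives alike.
        self.audit_log.append(
            AuditEvent(datetime.now(timezone.utc), actor, action, detail)
        )

    def grant(self, actor: str) -> None:
        # Consent must come explicitly from both sides and is documented.
        if actor == "donor":
            self.donor_consented = True
        elif actor == "recipient":
            self.recipient_consented = True
        else:
            raise ValueError("only the donor or the recipient can grant consent")
        self._log(actor, "consent_granted")

    def revoke(self, actor: str, reason: str = "") -> None:
        # Either party can end the interaction permanently, no questions asked.
        self.revoked = True
        self._log(actor, "consent_revoked", reason)

    def bot_may_run(self) -> bool:
        # The bot operates only while both consents stand and nobody has opted out.
        return self.donor_consented and self.recipient_consented and not self.revoked


if __name__ == "__main__":
    record = DeadbotConsent(donor_id="grandmother", recipient_id="grandchild")
    record.grant("donor")
    record.grant("recipient")
    print(record.bot_may_run())   # True: both parties agreed
    record.revoke("recipient", reason="ready to let go")
    print(record.bot_may_run())   # False: the opt-out is permanent and respected
```

The design intent simply mirrors the list above: the bot runs only while both consents stand, every change is logged, and revocation is one call away, permanent, and guilt-free.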
The Final Click
In the end, the algorithmic afterlife tells us less about the dead than it does about the living—our obsessions with control, our fear of grief, and our willingness to let corporations mediate memory for convenience. We are building ghosts not to remember, but to forget how to grieve.
Death should end a life, not extend a revenue stream.