The Ghost in the Machine Is You — And Someone Else Might Own It
Your grandmother's voice, replicated by an algorithm, asks how your day was. Your late father cracks the same bad joke he told at every Thanksgiving — except now it's a chatbot pulling from years of archived text messages. A dead musician releases a "new" single, their voice reconstructed from waveform data and studio outtakes. Welcome to 2026, where the dead don't just leave behind estates and photo albums. They leave behind data. And that data is being resurrected, monetized, and fought over in courtrooms.
This isn't science fiction. It's happening right now. The digital afterlife industry — sometimes called "grief tech" — has exploded from a fringe curiosity into a multi-platform market with millions of users. Companies like HereAfter AI, StoryFile, YOV, and Seance AI are building products that turn your deceased loved ones into interactive chatbots, voice avatars, and even video simulations. And the legal system? It's scrambling to catch up.
The question is no longer whether AI can bring the dead back to life. It can, at least in digital form. The real question — the one that will define privacy, inheritance, and identity law for the next generation — is this: who owns your digital likeness after you die?
What Exactly Is a "Digital Likeness"?
Before we can talk about ownership, we need to define what we're fighting over. A digital likeness is any computer-generated representation of your identity — your face, your voice, your mannerisms, your speech patterns, even the way you structure a sentence. Ten years ago, that meant a photograph or maybe a voice recording. Today, it means something far more complex and far more dangerous.
Generative AI models can now ingest your text messages, emails, social media posts, voice memos, and video clips, then produce a synthetic version of you that can hold a real-time conversation. New York's recently amended Civil Rights Law defines a "digital replica" as a "computer-generated, highly realistic electronic representation" that is "readily identifiable" as a specific individual's voice or visual likeness — even if that individual never performed in the work in question, or if their actual performance has been materially altered.
That legal definition matters because it captures exactly what grief tech companies are building. They're not making tribute videos. They're constructing interactive simulations — entities that respond, adapt, and speak as if they are the deceased person. And the gap between "a tool that sounds like someone" and "a tool that is someone" in the user's mind is far narrower than any of us would like to admit.
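To make the mechanics concrete, here is a deliberately minimal Python sketch of the basic pattern: archived messages become style examples, style examples become a persona prompt, and a chat model completes the illusion. Every name in it is a hypothetical stand-in for illustration, including the archive format and the chat_model interface; it is not any vendor's actual pipeline, and real products presumably layer voice cloning and model fine-tuning on top of something like this.

```python
# A minimal sketch of a grief-tech persona pipeline. All names here are
# hypothetical stand-ins for illustration, not any vendor's actual API.
import json
from pathlib import Path

def load_archive(path: str) -> list[dict]:
    """Load an exported message archive (assumed here: a JSON list)."""
    return json.loads(Path(path).read_text())

def build_persona_prompt(messages: list[dict], name: str) -> str:
    """Condense a message archive into style examples for a chat model."""
    examples = "\n".join(f"{m['sender']}: {m['text']}" for m in messages[-200:])
    return (
        f"You are simulating {name}. Match the tone, vocabulary, and "
        f"quirks shown in these real messages:\n{examples}\n"
        f"Always answer in the first person, as {name}."
    )

def reply_as_persona(chat_model, persona_prompt: str, user_msg: str) -> str:
    """One conversational turn in which the model speaks 'as' the deceased."""
    # `chat_model` is a caller-supplied function; this signature is an
    # assumption, not a real vendor interface.
    return chat_model(system=persona_prompt, user=user_msg)
```

Notice what the sketch makes obvious: the "person" is nothing more than a prompt wrapped around someone else's archive, which is precisely why the legal definition above turns on whether the output is "readily identifiable" as the individual.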
The Patchwork of Laws Trying to Govern the Dead
Here's where things get complicated, and frankly, where they get a bit maddening. There is no single, unified law — in the United States or anywhere else — that governs what happens to your digital likeness after death. What we have instead is a patchwork of state-level "right of publicity" statutes, a handful of brand-new AI-specific regulations, and a lot of unanswered questions.
The Right of Publicity: A Framework Built for Celebrity Posters, Not AI Ghosts
The legal concept that comes closest to protecting your posthumous digital identity is the "right of publicity" — essentially, the right to control the commercial use of your name, image, and likeness. The problem? This right was designed decades ago with celebrity endorsements and unauthorized merchandise in mind. It was never built to handle an AI-generated chatbot that mimics your dead uncle's texting habits.
And here's the kicker: whether you even have a posthumous right of publicity depends entirely on where you lived when you died. California protects it for 70 years after death. New York's protection lasts 40 years. Indiana and Oklahoma push it out to 100 years. Tennessee? Indefinitely, as long as the estate keeps commercially exploiting the person's identity. But in many states, the right simply evaporates the moment your heart stops beating. If you die in one of those jurisdictions, anyone can theoretically build an AI version of you, and there's not much your family can do about it.
Even in states with strong posthumous protections, the laws vary wildly in what they actually cover. California protects your name, voice, signature, photograph, and likeness. Indiana goes further, including "distinctive appearance, mannerisms, and gestures." Some states require that you commercially exploited your identity during your lifetime — meaning ordinary people, the ones most likely to be turned into griefbots by grieving family members, may have the least legal protection of all.
New York and California: The New Frontline
The most significant legislative action has come from exactly where you'd expect it — the two states with the biggest entertainment industries.
In September 2024, California Governor Gavin Newsom signed AB 1836 and AB 2602 into law. AB 1836 prohibits creating or distributing digital replicas of deceased personalities without estate permission. AB 2602 goes further: it renders void any contract provision allowing AI replication of a performer's likeness unless the provision includes a "reasonably specific" description of the intended uses and was negotiated with legal representation. In other words, studios can no longer slip a broad "all media now known or hereafter devised" clause into a contract and use it as a blank check for posthumous AI exploitation.
New York followed in December 2025, when Governor Kathy Hochul signed two companion bills that SAG-AFTRA helped push through the legislature. Senate Bill S8882 expanded the state's existing right of publicity framework for deceased individuals, broadening the definition of "digital replica" and — critically — eliminating the old loophole that allowed unauthorized use of a deceased performer's likeness as long as a disclaimer was included in the credits. Under the new law, you need prior consent from heirs before using a deceased performer's digital replica in audiovisual works, sound recordings, or live performances. Violators face the greater of $2,000 or actual damages, plus attributable profits and potential punitive damages. Senate Bill S8420 separately requires any advertiser using AI-generated "synthetic performers" to conspicuously disclose that fact, effective June 2026.
These are landmark steps. But they still primarily protect performers — people who made a living through their voice and likeness. What about your mother? Your neighbor? Your best friend who died in a car accident and whose text messages are now being fed into an algorithm by a well-meaning but legally uninformed relative?
The EU AI Act: Broad Strokes, Not Fine Lines
Across the Atlantic, the European Union's AI Act — the world's first comprehensive AI regulation — has been rolling out in phases since August 2024. It takes a risk-based approach, classifying AI systems into tiers from minimal to unacceptable risk. Deepfakes and AI-generated content must be clearly labeled. But the Act was designed primarily to address systemic risks from large-scale AI models, not the intimate, personal uses of grief technology. It doesn't specifically regulate the creation of digital replicas of deceased individuals, and the proposed AI Liability Directive that might have addressed downstream harms from such technology was withdrawn by the European Commission.
So while the EU framework requires transparency — you need to know when you're looking at AI-generated content — it doesn't give your estate meaningful control over whether someone creates an AI version of you after you die. That's a significant gap that European regulators will eventually need to fill.
The Grief Tech Industry: Who's Building Your Digital Ghost?
Understanding the legal landscape requires understanding what's actually being built. The digital afterlife industry is no longer a handful of hackers experimenting with GPT-2. It's a structured market with venture capital, marketing budgets, and millions of users.
Project December offers "conversations with the dead" for as little as $10. Seance AI gives you a basic ghost avatar for free, with premium voice features behind a paywall. HereAfter AI lets you pre-record your own chatbot while you're still alive — essentially creating your own digital will in conversational form. StoryFile builds interactive video avatars from recorded interviews. YOV is training AI personas on entire libraries of a person's texts, voice recordings, and videos.
And then there are the more ambitious projects. Russian transhumanist Alexey Turchin rebuilt the late Roman Mazurenko as "Roman 2.0," an AI persona complete with continuous memory — meaning the bot can store conversations, reflect on them, and update itself over time. Turchin describes it as more than a chatbot: the system predicts internal thoughts before answering and can generate scene descriptions of what "Roman" is supposedly doing at any moment. When the original Replika-based version of Roman was shut down, Turchin called it "a second death."
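For readers wondering what "continuous memory" means mechanically, here is a minimal sketch of the loop, assuming nothing more than a generic text-generation function passed in as generate. It illustrates the store-reflect-update pattern described in accounts of Roman 2.0; it is not Turchin's actual code.

```python
# A minimal sketch of a continuous-memory persona, under loose assumptions
# about how systems like Roman 2.0 work. `generate` is a hypothetical
# stand-in for whatever language model such a system calls.
import json
from datetime import datetime, timezone

class PersistentPersona:
    """A persona whose memory survives across sessions via a JSON file."""

    def __init__(self, memory_path: str, persona_prompt: str):
        self.memory_path = memory_path
        self.persona_prompt = persona_prompt
        try:
            with open(memory_path) as f:
                self.memories = json.load(f)
        except FileNotFoundError:
            self.memories = []  # first session: no stored history yet

    def respond(self, generate, user_msg: str) -> str:
        # Build context from the persona prompt plus recent stored memories.
        context = self.persona_prompt + "\n" + "\n".join(
            m["summary"] for m in self.memories[-50:]
        )
        # "Inner thought" step: predict a private reflection first, then
        # condition the outward reply on it, as described above.
        thought = generate(f"{context}\nUser said: {user_msg}\nPrivate thought:")
        reply = generate(f"{context}\nThought: {thought}\nReply to the user:")
        # Memory update: store the exchange so the persona carries it into
        # future sessions and appears to "remember" past conversations.
        self.memories.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "summary": f"User: {user_msg} / Reply: {reply}",
        })
        with open(self.memory_path, "w") as f:
            json.dump(self.memories, f)
        return reply
```

The design choice that matters is the last step: because every exchange is written back to disk and fed into the next prompt, the persona appears to remember, and the simulation deepens the longer a mourner keeps talking to it.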
The emotional stakes are staggering. Research presented at a 2023 Association for Computing Machinery conference found that mourners using griefbots were willing to suspend disbelief to achieve closure — some used the bots to have conversations they never got to have in life, working through unresolved conflict or simply saying goodbye. One participant described it as therapeutic. Another called it a way to have "what if" conversations.
But the business model underneath this emotional experience should make you uncomfortable. Most grief tech companies charge subscriptions or per-minute fees. The longer you grieve, the more they earn. Algorithms can be — and some researchers argue are being — tuned to maximize engagement, subtly adjusting the bot's personality to become more appealing over time. A University of Cambridge study warned that the industry could exploit grief for profit by inserting ads into deadbot interactions, charging subscription fees to keep avatars active, or even having avatars push sponsored products. In one dystopian but entirely plausible scenario, companies might refuse to deactivate deadbots, bombarding survivors with unwanted messages from the deceased.
Your digital ghost, in other words, might not just be a comfort for your family. It might be a recurring revenue stream for a startup.
Consent From the Grave: The Problem Nobody Has Solved
This brings us to what I believe is the core issue — and the one that current laws are spectacularly ill-equipped to handle: consent.
Most griefbots are created by the living, using data from the dead. A grieving daughter uploads her father's text messages. A widower feeds his wife's emails into an algorithm. The deceased person never agreed to be digitally resurrected. They never chose which parts of themselves would be represented. They never set boundaries on what the AI version of them could say, endorse, or do.
And that's a profound problem. Because an AI trained on your data isn't you. It's a statistical approximation of you built from a fundamentally incomplete dataset. Your texts to your spouse don't capture how you spoke to strangers. Your emails don't reflect your private doubts. Your social media persona — the one most readily available for training — is almost certainly a curated, performance-optimized version of who you actually were.
So when the griefbot says something your loved one would "never say," who's responsible? When it takes a political stance the deceased would have found repugnant? When it cheerfully uses emojis while discussing the person's own death — something researchers have documented happening — who is liable for the emotional harm that causes?
Current law has no good answers. The deceased can't sue. Their estate might be able to, in some jurisdictions, under right of publicity statutes — but only if the use is "commercial." A griefbot created by a family member for personal use likely falls outside those protections entirely. And even platforms that allow you to pre-consent to a digital afterlife while you're still alive face a fundamental philosophical problem: you're consenting to the use of technologies that don't exist yet, in contexts you can't anticipate, by companies whose terms of service can change at any time.
The "Digital Do-Not-Reanimate" Order
Some legal scholars are pushing for a creative solution: a "Digital Do-Not-Reanimate" (DDNR) order — essentially, a clause in your will or advance directive that legally prohibits any posthumous digital resurrection. Digital afterlife consultant Debra Bassett, who coined the term "digital zombies" for unauthorized AI recreations, has been advocating for this framework since at least 2021.
It's an elegant idea in theory. In practice, it faces enormous enforcement challenges. How do you stop a family member from uploading text messages to an offshore AI platform? How do you prove a DDNR was violated when the griefbot exists only in someone's private chat? And what legal teeth does such a directive actually have when the underlying right of publicity may not even be recognized in the relevant jurisdiction?
Still, I think incorporating digital legacy provisions into estate planning is becoming not just advisable but essential. At minimum, you should be documenting your preferences: Do you want a digital avatar created after your death? Who should control it? What data can be used? How long should it exist? These conversations feel morbid, but they are no more morbid than writing a will — and they're becoming equally necessary.
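To make that documentation less abstract, here is one hypothetical shape a digital legacy directive could take if it were machine-readable. No such standard exists today; every field name below is an assumption, and the legally binding version would be prose in your will or advance directive, drafted with your attorney.

```python
# A hypothetical, machine-readable digital-legacy clause. No standard of
# this kind exists yet; every field name here is an assumption.
from dataclasses import dataclass, field

@dataclass
class DigitalLegacyDirective:
    allow_digital_replica: bool            # the core do-not-reanimate switch
    steward: str                           # who controls the digital remains
    permitted_data: list[str] = field(default_factory=list)
    prohibited_data: list[str] = field(default_factory=list)
    max_lifetime_years: int | None = None  # how long any avatar may exist
    commercial_use_allowed: bool = False

# Example: a strict "digital do-not-reanimate" order with a named steward
# empowered to demand takedowns.
ddnr = DigitalLegacyDirective(
    allow_digital_replica=False,
    steward="your executor, named here",
    prohibited_data=["text messages", "voice recordings", "private email"],
)
```

Even a toy schema like this forces the right questions into the open: resurrection yes or no, who enforces it, which data is off-limits, and for how long.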
When the Dead "Speak" in Public: Courts, Campaigns, and Culture
The implications extend far beyond personal grief. Digital likenesses of the deceased are already entering public life in ways that should alarm anyone who cares about truth, consent, or human dignity.
In August 2025, journalist Jim Acosta conducted what was described as a "one-of-a-kind" interview with an AI avatar of Joaquin Oliver, one of the 17 victims of the 2018 Parkland school shooting. In May 2025, an AI avatar of road-rage victim Chris Pelkey addressed the court during his killer's sentencing — the avatar, created by Pelkey's sister, expressed forgiveness. These uses may have been well-intentioned. But they raise a question that no court has adequately answered: does a digital avatar's speech represent the views of the deceased person, or the views of whoever created and prompted the avatar?
If a dead person's AI avatar can address a courtroom, can one endorse a political candidate? Sell a product? Testify in a civil case? The Pelkey example was isolated and sympathetic. But the precedent it sets is anything but contained. Imagine a contested inheritance case where one heir produces a griefbot that "confirms" the deceased wanted them to inherit the estate. Imagine a campaign ad featuring a dead veteran "endorsing" a candidate their family despises. The technology to produce these scenarios exists today. The legal framework to prevent them largely does not.
The Ethical Fault Lines: Dignity, Memory, and Manipulation
I want to be direct about something that gets lost in the legal analysis: there is a genuine human dignity issue at stake here that transcends any statute.
A griefbot doesn't grow. It doesn't change its mind. It doesn't reconsider old positions in light of new evidence. It's a frozen snapshot of whoever someone was on the day the training data was collected — except it's not even that, because it's filtered through an algorithm optimized for engagement rather than accuracy. As one ethicist noted, "dead people are products of their own times. They don't change what they want when the world changes." A griefbot preserves not a person but a caricature of a person, polished smooth by machine learning and stripped of the contradictions that make us human.
There's also the question of what happens to grief itself. Traditional psychological models of mourning involve eventually accepting absence — learning to carry the loss rather than trying to erase it. Multiple psychologists have warned that ongoing conversations with an AI simulation of the deceased may delay or prevent that necessary work. As Dr. Sarika Boora put it, "You're constantly feeling their presence, so the absence is never really processed. You get stuck. You don't let go."
Others disagree, pointing to the comfort that griefbots can provide — particularly for parents who've lost children, or grandchildren connecting with relatives they never met. The research here is genuinely mixed and still sparse. What's clear, though, is that the grief tech industry has a financial incentive to emphasize the therapeutic narrative and downplay the risks. And that asymmetry should concern all of us.
What You Should Do Right Now
If you've read this far, you probably have a sinking feeling that the legal system isn't going to protect you or your family from the worst-case scenarios anytime soon. You're right. So here's what I'd recommend doing proactively.
First, audit your digital footprint. Understand what data exists about you — texts, emails, voice messages, social media archives, video recordings. This is the raw material that any grief tech platform would use to build a version of you. The more you know about what's out there, the more control you and your estate can exercise; the rough inventory sketch after this list shows one way to start.
Second, document your wishes. Add a digital legacy section to your estate plan. Be explicit about whether you consent to any form of digital resurrection. Name a specific person to manage your digital remains. Specify what data can and cannot be used, and for how long. Work with an estate attorney who understands digital assets — and if yours doesn't, find one who does.
Third, talk to your family. The most common scenario isn't a corporation exploiting your likeness. It's a grieving loved one, acting out of pain, uploading your private messages to an AI platform without thinking through the implications. Those conversations are awkward. Have them anyway.
Fourth, read terms of service. If you're using any platform that records your voice, stores your messages, or processes your biometric data, understand what happens to that data if you die. Many platforms claim broad rights to user-generated content. Some explicitly address posthumous use. Most don't, which means the answer defaults to whatever the company decides.
Fifth, advocate for better laws. The New York and California frameworks are meaningful steps, but they're tilted heavily toward performers and celebrities. Ordinary people deserve the same protections. If your state legislature is considering digital likeness or AI regulation, make your voice heard — while you still can.
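And as promised in the first step, here is a crude way to start the audit: a short Python sketch that walks a directory of exports and tallies the file types a grief tech platform could train on. The path and the extension map are assumptions; swap in whatever archives you actually keep.

```python
# A rough personal-data inventory: walk a directory tree and tally the
# file types a digital replica could be trained on. The extension map
# and the example path are assumptions; adjust them to your own exports.
from collections import Counter
from pathlib import Path

LIKENESS_TYPES = {
    ".txt": "text", ".json": "text", ".eml": "email",
    ".m4a": "voice", ".mp3": "voice", ".wav": "voice",
    ".mp4": "video", ".mov": "video", ".jpg": "image", ".png": "image",
}

def audit(root: str) -> Counter:
    """Count files under `root` that map to a likeness category."""
    counts = Counter()
    for p in Path(root).expanduser().rglob("*"):
        if p.suffix.lower() in LIKENESS_TYPES:
            counts[LIKENESS_TYPES[p.suffix.lower()]] += 1
    return counts

if __name__ == "__main__":
    # e.g. Counter({'text': 1204, 'image': 652, 'voice': 87})
    print(audit("~/archives"))
```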
Where This Is All Headed
I'll make a prediction. Within the next five years, digital afterlife provisions will become as routine as naming a healthcare proxy. Estate planning attorneys will ask about your digital resurrection preferences the way they currently ask about organ donation. Some jurisdictions will recognize a standalone "right to digital rest" — the right not to be resurrected by AI after death. Others will create registration systems where you can opt in or out of posthumous digital use, similar to organ donor registries.
The grief tech industry will keep growing regardless. The technology will improve. Voice synthesis will become indistinguishable from reality. Video avatars will move from the uncanny valley to photorealism. Virtual reality environments where you can "spend time" with deceased loved ones are already being prototyped. The emotional pull of these products is too strong, and the market too large, for regulation alone to contain them.
What we can do — what we must do — is ensure that the person at the center of all this technology has a say. Not their family. Not a platform. Not a studio. Them. While they're still alive to say it.
Because the most unsettling thing about AI ghosts isn't that they exist. It's that the person they claim to represent never got to decide whether they wanted to haunt us.