Imagine losing a loved one and then having the chance to interact with their digital doppelgänger: a voice, a personality, a semblance of their presence, all powered by AI. It sounds like something out of a sci-fi novel, but it's happening today. And it's controversial: is this a comforting way to keep a memory alive, or does it blur the line between tribute and exploitation? Let's dive in.
In 2024, James Vlahos shared a deeply personal story with the BBC. After learning his father was terminally ill with cancer, he recorded his dad's voice and used AI to build a chatbot that could mimic his speech and personality. Vlahos described the experience as a bittersweet gift: it didn't erase the pain of loss, but it offered something unique, a living, interactive memory. 'It's like having a wonderful compendium I can turn to,' he said. Yet it raises a question: can technology truly bridge the gap between life and death, or does it risk reducing a person's legacy to code and algorithms?
The Workplace Bereavement support group notes that while 'deathbots' aren't yet mainstream, curiosity is growing. Founder Jacqueline Gunn cautions, 'These tools are only as good as the data they're fed. They don't evolve with grief. For some, they might be a stepping stone, but they can't replace the destination.' That's the part that's easy to miss: grief is a deeply human process, one that demands time, understanding, and genuine human connection. AI, no matter how advanced, can't replicate that.
Researchers Eva Nieto McAvoy of King's College London and Bethan Jones of Cardiff University have explored how these technologies work in practice. They examined how AI systems use digital traces, from voices and speech patterns to quirks of personality, to recreate individuals who have died. While these tools are often marketed as sources of comfort, the researchers argue they rest on oversimplified notions of memory, identity, and relationships. Tellingly, when the researchers themselves were asked whether they would want their own families to recreate them digitally, their answers were far from unanimous.
Kidd, one researcher interviewed on the question, said his initial reaction was neutral: if it's playful and harmless, why not? But he quickly pointed to the risks. 'If the AI starts evolving in ways I wouldn't approve of, or says things I'd never say, it could distort how people remember me. That's where I'd draw the line.' Dr Nieto McAvoy, by contrast, was more relaxed. 'I'm not religious, and I don't worry about the afterlife,' she said. 'If it helps my family, great. But it's complicated: do I want them paying for a service like this? I'm not sure.'
Now it's your turn: what do you think? Is creating a digital version of a loved one a beautiful way to honor their memory, or does it cross an ethical boundary? Could it ever truly capture the essence of a person, or does it risk turning grief into a transaction? Share your thoughts in the comments, and let's have a conversation as complex and nuanced as the topic itself.