I still think about Jason Fagone and Vauhini Vara's excellent pieces about AI and death from last year, which explore the possibilities of interacting with deceased loved ones in written form via GPT-3 technology. This story by Charlotte Jee, about speaking to or video-chatting with our beloved dead, takes this even further. Experimenting with software created by HereAfter AI, Jee talks with digital simulations of her parents, who are very much alive. “Grief tech” like HereAfter AI uses a lot of data — hours of conversations with subjects about their lives and memories — to create virtual versions of people. Jee finds it all “undeniably weird” and ethically complex. But ultimately, she knows she’s only human: “If technology might help me hang onto them, is it so wrong to try?”
And what if that person is not, in fact, dead? There’s little to stop people from using grief tech to create virtual versions of living people without their consent: an ex, for example. Companies that sell services built on a person’s past messages are aware of this possibility and say they will delete someone’s data if that individual requests it. But companies are not obliged to run any checks to make sure their technology is limited to people who have consented or died. There’s no law to stop anyone from creating avatars of other people, and good luck explaining the problem to your local police department. Imagine how you’d feel if you learned there was a virtual version of you out there, somewhere, under somebody else’s control.
You might also like these picks:
“I didn’t know how to write about my sister’s death—so I had AI do it for me.”
“The death of the woman he loved was too much to bear. Could a mysterious website allow him to speak with her once more?”