Will we live on in the form of virtual avatars?
- Recent advances in AI have taken the digital preservation of the dead to a new level.
- Companies are making “virtual immortality” possible by offering “deadbots”, chatbots that simulate conversation with a deceased person.
- These virtual doppelgangers generate content using generative AI fed with all types of data created by the person before they died: recordings, messages, anecdotes, etc.
- Despite advances in AI, these imperfect representations worry some professionals, who warn of the risks of anthropomorphism, attachment to the machine and isolation.
- Users need to be educated about the risks and challenges of these tools, and the issue of data rights needs to be addressed to provide a framework for these practices.
What happens to a person’s digital data after they die? Much of it survives in digital space, such as the profiles created on websites and social networks. This gives rise to memorial uses of the web, such as Facebook pages. For years, the platform has offered the possibility of turning a deceased person’s account into a memorial page, allowing people to pay their respects and leave messages, photos and so on.
Today, the digital preservation of the dead is taking a new step forward with artificial intelligence. Several companies now offer to turn a person’s digital legacy into a virtual avatar or “deadbot” that can be used to communicate with deceased loved ones, promising a degree of virtual immortality. Back in 2017, Microsoft filed a patent, granted four years later, for a conversational agent based on a person’s data. The idea was to create a virtual doppelganger that would bring deceased people back to life. “People have always wanted to be invincible, immortal. It’s part of our founding myths – nobody wants to die. And a virtual avatar of the deceased, a chatbot or a robot for each person, is financially advantageous,” explains AI researcher and professor Laurence Devillers.
Since then, a whole new industry has sprung up. In 2018, James Vlahos trained a chatbot to speak in the manner of his father, who had died of cancer. The American journalist had collected data, interviewed him, and recorded his voice. James Vlahos then co-founded the HereAfter AI platform, described as an “interactive memory application”. The aim is to collect a person’s stories, memories and recordings while they are still alive, and talk to them virtually after their death using a chatbot. Many start-ups offer to create digital doppelgangers that live on after death. Deepbrain AI offers a service called Re;memory. For $10,000, it creates a virtual avatar with the face, voice and expression of the deceased, which relatives can view in a studio. Somnium Space wants to go even further, creating a metaverse in which users can immerse themselves to visit the deceased.
Creating a virtual avatar from billions of data points
These technologies are made possible by rapid advances in generative AI systems. Conversational agents, which detect speech, interpret its meaning and generate responses accordingly, are common on the internet. These “deadbots” use billions of pieces of data to generate sentences and respond as if the person were speaking. A person’s voice recordings, any e‑mails or text messages they may have written, their personal accounts and their life story are used to create a chatbot, a sort of virtual avatar. “The machine learns regularities in the deceased’s existing data. Generative AI makes it possible to model huge bodies of data that can then be adapted to a person and a voice. The AI will search this large model for information related to the theme evoked by the user. In this way, the AI produces words that the deceased might never have uttered,” explains Laurence Devillers.
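The retrieval step Devillers describes, searching the deceased’s archive for material related to the theme the user raises, can be sketched in a few lines of Python. The archive, the sample messages and the word-overlap scoring below are illustrative stand-ins, not any company’s actual system; production deadbots would use a neural language model rather than bag-of-words matching.

```python
import re
from collections import Counter

def tokens(text):
    """Lowercase word tokens with punctuation stripped."""
    return re.findall(r"[a-z']+", text.lower())

def retrieve(query, archive):
    """Return the archived snippet whose words overlap most with the query.

    `archive` stands in for the messages, recordings and anecdotes a person
    left behind. Real systems score similarity with neural embeddings; plain
    word overlap is enough to illustrate the retrieval step.
    """
    q = Counter(tokens(query))
    return max(archive, key=lambda doc: sum((q & Counter(tokens(doc))).values()))

# A toy "digital legacy": three things the person once said or wrote.
archive = [
    "I always spent my summers fishing on the lake with your grandfather.",
    "My first job was in a printing shop, setting type by hand.",
    "Your mother loved the apple pie I baked every Sunday.",
]

print(retrieve("Tell me about fishing with your grandfather", archive))
```

Asked about fishing, the sketch surfaces the lake anecdote; a generative model would then rephrase such retrieved material in the person’s voice, which is exactly where the invented words Devillers warns about can creep in.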
These algorithms give the illusion of talking to a deceased person. But the AI specialist insists that this is just an illusion. The start-ups offering these services promise a kind of immortality, or an extension of the memory of a deceased person, by reproducing their voice, way of speaking and appearance. However, these “deadbots” remain imperfect representations of individuals. “With the current state of technology, we can reach a fairly high degree of imitation, of resemblance, no doubt in the voice, perhaps in the vocabulary, but it won’t be perfect. There will be hallucinations; the machine will inevitably make mistakes and invent things to say,” warns the researcher.
The machine works like a statistical mill. The AI pieces together responses, like a puzzle, from the words the person once spoke. Where data is missing, it draws on similar data and can produce words the person would never necessarily have said. What’s more, the AI does not truly adapt over time to its conversations with the user. “The core of the model is rich in different contexts, so we get the impression that the machine more or less adapts to us when we ask a question. In reality, it keeps a history of what we’ve said as we go along, enriching it with our answers and the questions we’ve asked. It gets more and more precise. Tomorrow we may have objects that adapt to us, but that’s not the case today,” says Laurence Devillers.
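The history mechanism the quote describes, a frozen model that only appears to adapt because a growing transcript is fed back to it each turn, can be sketched as follows. `ChatSession` and the `generate` callable are hypothetical names for illustration; `generate` stands in for a call to a fixed language model.

```python
class ChatSession:
    """Accumulates a running transcript across turns.

    The model itself never changes; only the history passed back to it grows,
    which is why the chatbot seems to become more precise over a conversation
    without actually learning anything.
    """

    def __init__(self, persona):
        # The persona line stands in for the deceased's modelled style.
        self.history = [f"Persona: {persona}"]

    def ask(self, user_message, generate):
        """Append the user's turn, call the frozen model, record its reply."""
        self.history.append(f"User: {user_message}")
        reply = generate("\n".join(self.history))
        self.history.append(f"Bot: {reply}")
        return reply
```

Every call to `ask` sends the entire transcript so far, so later answers can refer back to earlier turns even though the underlying model is identical on every call.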
Significant risks for users
So it’s not really a question of immortality: these “deadbots” are more like living memories that can be consulted and interacted with. The developers of these technologies claim that they can not only help us learn more about our ancestors, but also help us mourn. However, it is far from certain that these tools are wholly beneficial to their users. In its 2021 report, co-authored by Laurence Devillers, the French National Committee for Digital Ethics (CNPEN) had already pointed out the risks of classic chatbots, such as those used on commercial websites: when users are not fully aware that they are talking to robots, there is a risk of anthropomorphism or attachment to the machine. For Laurence Devillers, this danger could be amplified when the chatbot uses the anecdotes, expressions, voice or face of a deceased loved one. “This could lengthen the mourning process and perpetuate the lack and the suffering, because the object is there. It blurs the relationship with the machine. And you can’t turn them off, because they represent someone you love,” she fears.
The risk is all the greater because the machine has no real reasoning or morals. In the case of deadbots, for example, the report points to a possible “uncanny valley” effect for the user: either the chatbot says something offensive, or, after a sequence of familiar lines, it utters something completely different from what the person being imitated might have said. This effect could lead to a “rapid and painful psychological change”, the authors fear. Laurence Devillers also points to the possibility of addiction to these platforms, with a risk of individual withdrawal and isolation.
The need for a collective consideration of these tools
Over and above concerns about the psychological effects these technologies may have on users, there are questions regarding data. To create these virtual avatars, AI systems need a huge amount of data from the deceased. For the time being, the 2016 Law for a Digital Republic allows people to leave instructions on the retention, deletion or communication of their data, and to designate someone to carry them out. But as these deadbots multiply, the collection, storage and use of the data of the deceased raises questions: can their children have rights over the data? Do the avatar and its data have an expiry date? Laurence Devillers explains that existing platforms rest on a contract between the manufacturer and the user, and that for the time being it is up to the user to check what will become of their personal data.
The deadbot market is still in its infancy, and it is not yet certain that users will make massive use of these tools on a daily basis. However, virtual avatar services have been proliferating in recent years. With the development of connected objects, these conversational robots could become an integral part of our lives. Laurence Devillers believes that a collective debate on these tools is needed. “It’s not necessarily positive or negative, but I think that as a society we’re not yet ready,” she says. We need to educate users, so that they understand the challenges and risks of this artificial world. Laurence Devillers also advocates the creation of a committee to establish rules to govern these practices. “All this has an impact on society, so we urgently need to give it some real thought, rather than leaving it to a few industrialists to decide,” she concludes.
Sirine Azouaoui
Reference:
Report by the French National Digital Ethics Committee on conversational agents, 2021