Deadbots and regulation: an ethical and legal matter that demands discussion

The new European legal framework on artificial intelligence, the EU AI Act, came into force on August 1 with the aim of preventing rights violations arising from the use of this technology. The legislation classifies AI systems according to the level of risk they may pose to individuals and society, and prohibits technologies that pose an "unacceptable risk", such as those that manipulate people or exploit their vulnerabilities.

One technology that could fall into this category is deadbots, which some companies are already developing and planning to market in the near future. These are chatbots based on the digital identity of a deceased person (WhatsApp messages, social media posts, emails, etc.) that are capable of holding conversations with the deceased person's family and friends, emulating their personality. Although it may sound like science fiction, it is not, and services of this type are closer than we might imagine.

Belén Jiménez, who holds a PhD in Psychology and is a member of the Faculty of Psychology and Educational Sciences and a researcher in the IN3 CareNet group at the Universitat Oberta de Catalunya (UOC), is a specialist in the technological mediation of grief. Part of her research focuses on deadbots, an area in which she has published several studies.

“Certain precautions must be taken when using deadbots and it is essential to regulate their use, since the profit motive of the companies that market them may not be aligned with the potential therapeutic use”

A complex debate without clear answers

"Although deadbots have not yet been marketed, we need to reflect on the bioethical aspects of this technology. Their use may soon become normal, as has happened with other applications that may initially have surprised us, but which are now widely used, such as dating apps. More and more companies are emerging in what is is known as digital afterlife industry, and they are improving the technology," Jiménez explained. She believes it is essential to "study how deadbots mediate grief and can transform it. It is a field in which there are hardly any scientific studies and there are no clear answers, since their use and effects depend on various factors, including how these technologies are designed."

Among other things, the new European legislation stipulates that chatbots must inform users that they are communicating with a computer program and not with a person. Although the Act classifies this technology as "limited risk", in sensitive contexts such as health, which would be the case with deadbots, the implications of these programs must be carefully analysed.

Research carried out by Belén Jiménez, who is also a member of the CERPOP research group at the University of Toulouse, has shown that the bereaved display ambivalent attitudes to this new technology: the desire to maintain emotional ties with their loved ones is combined with an uneasiness that comes from interacting with a program based on the deceased person's digital identity.

Deadbots are based on so-called "continuing bonds" between the bereaved and the deceased, a term frequently used in the psychology of grief. The UOC researcher said that "these technologies take advantage of people's need to establish emotional bonds". Indeed, they could be seen as an advanced, technological version of having an imaginary conversation with a loved one in front of their grave or preserving their memory through photographs and videos. "This need to maintain bonds doesn't necessarily have to be pathological," explained Jiménez, "and it is normal for many people. However, certain precautions must be taken when using deadbots and it is essential to regulate their use, since the profit motive of the companies that market them may not be aligned with the potential therapeutic use of this technology."

In the absence of studies, Jiménez pointed out that the psychological effects of these technologies will depend on the users themselves, on the relationship they had with the deceased and the relationship they establish with the chatbot. "One of the dangers is that it could lead to negative effects, such as the creation of a relationship of dependency, and even suffering caused by a second loss, if the deadbot disappears – for example, due to technical problems."

Regulating the digital afterlife industry

Our desire for immortality, combined with technological progress, is stimulating the digital afterlife industry, a sector that exploits the digital presence of deceased people to perpetuate their memory and even extend their digital activity. This has many ethical and social implications. Companies pursue commercial and economic ends that may conflict with the potential therapeutic objectives of these tools. Strategies such as having deadbots send notifications and perform other actions to keep the bereaved "hooked" may be ethically questionable, according to Jiménez.

"We are dealing with a new technological development based on artificial intelligence, involving great risks, and it must be regulated to anticipate its possible negative effects, while we must also take its ethical dimension into account," said the researcher. "The new European regulations focus on promoting the transparency of these technologies, which is essential in such sensitive areas as grief. In addition, companies that develop these services must comply with rigorous standards and invest in auditing, transparency and documentation programmes," she explained. The AI Act provides for fines of up to €30 million or 6% of a corporation's turnover if it fails to comply with the law.

In the absence of specific regulations for deadbots, Jiménez proposed that such rules "should particularly ensure respect and dignity for the deceased person, as well as promoting the psychological well-being of the user, especially if they are grieving."

This research supports Sustainable Development Goal (SDG) 3, Good Health and Well-being.

Jiménez-Alonso, B., & Brescó de Luna, I. (2024). AI and grief: a prospective study on the ethical and psychological implications of deathbots .. In S. Caballé, J. Casas-Roma, & J. Conesa (Eds.), Ethics in online AI-based systems (pp. 175-191). Academic Press. doi: https://doi.org/10.1016/B978-0-443-18851-0.00011-1

Jiménez-Alonso, B., & Brescó de Luna, I. (2022). Mediación tecnológica en el duelo: un análisis de los griefbots desde la psicología cultural. Pensamiento Psicológico, 20. https://doi.org/10.11144/Javerianacali.PPSI20.mdpc