Dying To Tell You: "Deepfake Resurrections" To Promote Public Good Explored By Researchers

Deepfake narratives from victims of domestic abuse and drunk drivers may impact public service messages.

Dr. Russell Moul

Science Writer

Russell is a Science Writer with IFLScience and has a PhD in the History of Science, Medicine and Technology.


Deepfakes are becoming increasingly common online, but can this typically abusive phenomenon be used to promote public good? Image Credit: metamorworks/Shutterstock.

Deepfakes inspire a range of responses, from fascination and entertainment to outright fear. Their increased appearance online raises all sorts of moral, social, and legal challenges – but psychologists have recently taken an interest in their potential as tools to promote positive social and political policy-related change as well.

Deepfakes are seemingly realistic, digitally created videos that depict people, events, and things in ways that mimic reality. In some cases, these computer-generated illusions can appear indistinguishable from the real thing, and they are often used in unethical and abusive ways. For instance, in 2019 AI firm Deeptrace found 15,000 deepfake videos online – nearly double the number from nine months earlier – and 96 percent were pornographic. In these videos, celebrities – all women – had their faces mapped onto the bodies of performers in pornographic videos. Beyond this, such content is often used for harassment and revenge porn, and deepfakes can also be created to discredit prominent political figures.


While most of our experience of deepfakes is decidedly negative, one set of researchers has started investigating their persuasive capabilities, especially in relation to “deepfake resurrections” – engineered reconstructions of deceased people.

Hang Lu, Assistant Professor of Media Psychology at the University of Michigan, and Haoran Chu, Assistant Professor of Public Relations at the University of Florida, undertook research into what they call “prosocial deepfakes”. In essence, the potential positive applications of deepfakes have been overshadowed by their potential for abuse, so we know less about how such media technologies can be used for good, especially in relation to public service announcements. 

To investigate this, the team turned their attention to two common and potentially lethal social problems – drunk driving and domestic violence, real-life issues often targeted by social policy changes and activism efforts. Lu and Chu wanted to see how public service announcements that showed deepfakes of deceased victims of drunk driving and domestic violence would impact viewers as they narrated the stories of their deaths. 

“The prosocial deepfakes investigated in the current study take the form of deepfake resurrection”, the authors explained in the paper, “which features a dead victim who is brought back to life with deepfakes that enable this victim to advocate for an issue related to the cause of their death.”


The team recruited just under 2,000 online participants to take part in a between-subjects experiment – one in which different groups of participants are exposed to different conditions so their responses can be compared – that varied the use or non-use of deepfake resurrection narratives. They also investigated a number of psychological processes related to the effects of such deepfake narratives. These included the perceived realism of the video’s content, participants’ ability to identify with and show compassion towards the victims, perspectives related to the desecration and disrespect of the dead, surprise related to the video’s content, and the overall impact such narratives have on participants’ support for policies.

The researchers found that the presence of deepfake death narratives had a “small but negative effect” on the overall persuasiveness of a public service message in the contexts of domestic abuse and drunk driving. This, they believe, relates to the larger social perceptions surrounding deepfakes more generally, though they suggest this may change as more people get used to deepfakes as a phenomenon.

“It is possible that as people become more familiar with deepfakes, they may develop a more nuanced perception of this technology based on how and for what purpose it is used”, the authors noted. 

Interestingly, Lu and Chu also found that the narrative-based and surprise-focused aspects of deepfake resurrection narratives seemed to be effective in motivating viewers to get more actively involved in challenging domestic violence and drunk driving. However, they note that deepfake death narratives performed less well in terms of perceived realism, identification with the victims, and eliciting compassion. Moreover, prevailing attitudes towards the perceived desecration of the dead seemed to negatively impact their reception.


Despite this, further research is needed to draw any broader conclusions, especially research into wider and more diverse groups of people with different cultural and religious backgrounds. 

“Since this is the first study to empirically investigate the impacts of deepfake resurrection narratives, it might be too early to conclude that this type of narrative will not be effective in promoting prosocial outcomes under any circumstances,” the authors write.

The study was published in Computers in Human Behavior.
