A new Monash University study has provided the first detailed look at both perpetrator and victim experiences of sexualised deepfake abuse, highlighting how increasingly accessible AI tools are enabling the creation and spread of fabricated nude and sexual imagery. Funded by the Australian Research Council, the researchers interviewed victims as well as individuals who admitted to creating deepfake sexual content, offering rare insight into motivations, harms and emerging behavioural patterns.
Led by Professor Asher Flynn of the School of Social Sciences, a Chief Investigator with the ARC Centre of Excellence for the Elimination of Violence Against Women (CEVAW), the study found that deepfake sexual imagery is becoming normalised among some groups, particularly young men. Participants described creating and sharing fake sexual images as a way to gain status, demonstrate technical skill or bond with peers.
Professor Flynn said peer encouragement played a significant role in motivating perpetrators. “Many participants frequently pointed to the positive reinforcement from peers about their technological prowess in creating realistic, but fake sexualised images as a key motivation,” she said.
The study found that perpetrators often minimised the harm caused, claiming the ease of AI tools reduced their responsibility. Some framed the behaviour as a joke, while others shifted blame onto victims or denied wrongdoing—patterns commonly seen in other forms of sexual violence.
Despite the severity of the abuse, none of the perpetrators interviewed had faced legal consequences, and victims reported limited avenues for support or redress, even when incidents were reported to police. Women were the most frequent targets, especially in cases involving sexualisation, control or harm, while men were more often targeted in scenarios linked to sextortion, humour or humiliation.
Professor Flynn said reforms were urgently needed, including tighter regulation of deepfake tools, improved education on the consequences of image-based abuse, and legal frameworks that address both the creation and consumption of sexualised deepfake content.
“The growing proliferation of AI tools, combined with the acceptance or normalising of deepfake creation more generally, has provided access and motivation to a broader range of people who might not otherwise engage in this type of abuse,” she said.
The findings highlight gaps in current policy, policing and public awareness, and point to the need for stronger protections as AI-generated sexual imagery becomes increasingly easy to produce and distribute.