Sexualized deepfake abuse silences women and causes lasting harm, yet the laws meant to protect victims are inconsistent. Digital alterations such as deepfake images raise serious ethical and legal questions, making it necessary to rethink how the law handles these harms.
Overview
In early 2024, pop megastar Taylor Swift found herself at the center of a disturbing controversy. Sexually explicit deepfake images of her flooded social media, raising alarms about the misuse of AI technology. It was only after one image garnered more than 47 million views that the social media platform X (formerly Twitter) removed the content.
Swift’s case highlighted how easily people can exploit generative AI to create non-consensual pornographic content, leaving victims with limited legal recourse and exposing them to severe psychological, social, physical, economic, and existential harm.
The trend began in 2017, when a Reddit user uploaded realistic but entirely fabricated sexual images of female celebrities. By 2024, nudify apps had become widely accessible and are advertised on social media platforms such as Instagram and X. A Google search for “free deepnude apps” in Australia yields about 712,000 results.
A 2019 survey in the UK, Australia, and New Zealand found that 14.1% of respondents aged 16 to 84 had experienced someone creating, distributing, or threatening to distribute a digitally altered sexualized image of them. Vulnerable groups, including people with disabilities, Indigenous Australians, LGBTQI+ individuals, and young people aged 16 to 29, were the most affected.
Sensity AI has monitored online sexualized deepfake video content since 2018, consistently finding that around 90% of this non-consensual content features women.
Unfortunately, Swift’s experience is not unique. Numerous reports have surfaced of sexualized deepfakes involving female celebrities, young women, and teenage girls.
Legal Ambiguities
These manipulated images pose significant ethical and legal challenges, necessitating a reevaluation of existing laws and responses. The problem is compounded on platforms with encrypted content, such as WhatsApp, where deepfakes can be shared without fear of detection. These concerns have been raised in public campaigns by victims, in comments from Australia’s federal communications minister, and in discussions in the US House of Representatives.

Australia has been proactive in criminalizing image-based abuse and addressing its harms. The eSafety Commissioner has the legal authority to compel the removal of sexualized deepfake content. However, only Victoria specifically criminalizes the non-consensual creation of sexualized deepfakes; elsewhere in Australia, the legal status of creating or possessing such content remains ambiguous.
Globally, the situation is similar. The UK’s Online Safety Act 2023 criminalizes the non-consensual sharing or threatening to share sexualized deepfakes but does not address their creation. In the US, there is no national law criminalizing the creation or distribution of sexualized deepfakes, though some states have taken action.
A UK review ranked deepfakes as a significant social and criminal threat. With advances in open-source technology making deepfakes increasingly difficult to detect, there is a pressing need to improve legal responses. An Australian Research Council study aims to address this by examining the extent of sexualized deepfake abuse and focusing on improving responses, interventions, and prevention.
Need for Legal and Ethical Frameworks
The ambiguity around the legality of creating and possessing non-consensual sexualized deepfakes suggests that further legal changes are needed. Making the creation of such content illegal could reduce the prevalence of nudify apps.
New laws should be accompanied by regulatory measures, corporate responsibility initiatives, education and prevention campaigns, and training for those responding to sexualized deepfake abuse. Technology developers and digital platforms must prioritize safety over profit.
Beyond sexualized deepfake abuse, there is a need for guidelines on the responsible creation of deepfake content to prevent disinformation and bias. For instance, Italian Prime Minister Giorgia Meloni is seeking damages over deepfake pornographic images of her that were circulated online, a move intended to empower other women to press charges.
AI image-manipulation tools reflect the biased social norms and information they are trained on. As their use becomes more mainstream, ethical guidelines are essential. Countries with long-standing regulations on deceptive communications could extend these to cover deepfake content.
Given the transnational nature of this challenge, coordinated international collaboration is crucial to address and prevent the harms of sexualized deepfake abuse and to ensure deepfake content is developed ethically.