The Take It Down Act is a landmark bipartisan US federal law that mandates the swift removal of non-consensual sexualising deepfakes (AI-generated fake explicit imagery of real people) and grants victims stronger legal protection.
The rise of such content does not merely violate personal rights. As our recent report published by CEE Digital Democracy Watch shows, it is a silent epidemic striking at the foundations of our democracy.
In May 2024, the EU adopted the Directive on combating violence against women and domestic violence, which, inter alia, requires member states to criminalise the creation and distribution of non-consensual sexualising deepfakes by June 2027.
However, member states should not wait for that deadline, and the directive's provisions must be backed by political will and concrete enforcement.
The directive rightly recognises that non-consensual sexualising deepfakes constitute image-based violence and warrant a decisive response.
While deepfakes have long been associated with disinformation, more than 90 percent of video deepfakes are actually sexualising in nature.
Almost all cases target women.
And as deepfake technology becomes more accessible and more convincing, sexualising deepfakes are on the rise. At many South Korean schools and universities, a veritable “abuse industry” has emerged, with explicit synthetic materials, many depicting minors, being exchanged on a mass scale in closed chat groups.
In Spain, underage girls fell victim to their peers: teenage perpetrators used free, downloadable software to create fake nude images of them. Similar cases have been reported in other countries. Recent reports have also exposed countless subscription-based services offering to “nudify” uploaded photos.
Consequently, thousands of women worldwide are victims of such attacks on their physical and psychological integrity. These attacks leave an indelible mark in the form of trauma, depression, or fear.
Collectively, moreover, these attacks undermine women’s dignity, safety, and democratic participation, objectifying them and reinforcing negative stereotypes.
Female politicians, journalists, activists, and schoolgirls alike are being targeted. In addition to sexual motivations, perpetrators often seek to silence women through blackmail or humiliation and force them to withdraw from public life and political participation.
And they often succeed. In South Korea, thousands of young women have deleted their social media accounts for fear of becoming victims themselves.
Yet as of today, most EU countries lack clear criminal provisions addressing such deepfakes.
Penal codes are riddled with ambiguities. Existing provisions protecting a person’s image or reputation were written for the analogue world, and their application to synthetic depictions remains uncertain. This leaves victims without clear legal safeguards.
The EU directive provides a vital roadmap for action. Although the implementation deadline is set for 2027, every delay grants perpetrators a continued sense of impunity. Member states must go beyond symbolic compliance and treat the directive as an opportunity to act swiftly, establishing multidimensional practices to protect women and minors.
First, they must introduce unambiguous laws that explicitly criminalise both creating and sharing non-consensual sexualising deepfakes, replacing vague provisions and closing legal loopholes.
Second, they must hold the platforms and AI providers facilitating this abuse accountable. Concerning digital platforms, the directive rightly encourages swift takedown mechanisms, as self-moderation evidently does not work: platforms have repeatedly failed to act against sexualising deepfakes despite proclaiming policies to counter them, as demonstrated by the synthetic intimate imagery targeting singer Taylor Swift, which circulated for hours and was viewed around 50 million times before X took it down.

Concerning AI providers, verifying the consent of those depicted in images uploaded to so-called “nudifying apps” is illusory, and these apps’ business model is built upon “undressing” other people’s photos. This directly violates the rights of third parties, as do commercial services offering sexualising images. Both should be met with legal responses and excluded from app stores and search engines.
Third, actions must go beyond the legal sphere and take a systemic approach. Public education, digital literacy, and awareness campaigns are necessary to shape a society that rejects and can effectively counter digital abuse and misogyny.
Without addressing the social culture that trivialises such abuse — particularly among minors — laws will remain ineffective.
Lastly, a pan-European dialogue is needed to harmonise criminal law standards, avoid jurisdictional gaps, increase detection capabilities, and send a unified message: deepfake violence is real violence, and it will be treated and combated as such.
The first deepfakes appeared as early as 2017, superimposing celebrity faces onto pornographic recordings.
It did not take a visionary to predict that the quality, accessibility, and potential for abuse of this technology would only increase.
Nevertheless, we have largely slept through the last eight years. This must change. Countries such as Australia, the United Kingdom, South Korea, and now the US have already criminalised non-consensual sexualising deepfakes. Europe must swiftly follow suit.
Member states should not wait until 2027 to transpose the directive into national law and to undertake complementary non-legal countermeasures. We must act now to protect women and minors and, thereby, the foundations of our democracy.
Mateusz Łabuz is a researcher on internet cybersecurity at the Institute for Peace Research and Security Policy at the University of Hamburg. Maria Pawelec is an expert in deepfakes and disinformation at the International Center for Ethics in the Sciences and Humanities, University of Tübingen.