Nonconsensual deepfake pornography is a blight on society. Here's how Europe can fight it
Just as the sun rises and sets, some things are inevitable. Consider technology. As soon as something new emerges, people invariably find a way to abuse it. In recent years, that mantle has fallen to artificial intelligence (AI) and one of its most troubling side effects: the rise of nonconsensual deepfake pornography.
The idea is as simple as it is horrendous: using digital tech to create fake, explicit images or videos of someone. While this has been bubbling in the internet's underbelly for several years, recent improvements in AI tools mean this sort of content is getting easier to make, and substantially worse for the victims.
Thankfully, authorities are taking note. The UK announced the first law of its kind to directly combat nonconsensual deepfake pornography via an amendment to the Criminal Justice Bill. Meanwhile, the EU has a range of laws and directives it can use to fight the malicious practice. At least, that's the hope.
The question is whether regulation is an effective tool to fight nonconsensual deepfake pornography, and whether there's any way to eradicate it entirely.
A word on terminology
At this point, you may be wondering why we're using the phrase "nonconsensual deepfake pornography" rather than the more commonly seen "deepfake porn."
Well, Professor Victoria Baines — a BCS fellow and a leading authority in cybersecurity — explains that shortening the term to “deepfake porn” is viewed by online safety campaigners as “minimising a harmful behaviour through abbreviation.”
As Baines points out, "the bottom line is it's online abuse, not porn." The clearer we are when talking about the issue, the better our chance of fighting it. And, on that note, let's take a look at how governments are currently dealing with nonconsensual deepfake pornography.
What are the laws in the UK?
Baines says that despite the upcoming amendment to the Criminal Justice Bill, in the UK it is "already a criminal offence under Section 188 of the Online Safety Act to share nonconsensual intimate images."
The direct wording of the legislation states that it's illegal to share media that "shows or appears to show" another person in an intimate state. While this broadly covers nonconsensual deepfake pornography, the issue is that deepfakes aren't the law's core focus.
That, according to Baines, is what the newly proposed amendment to the Criminal Justice Bill aims to fix. This “seeks to criminalise the creation using digital technology of intimate images without consent, regardless of whether the creator intends to share it.”
In other words, the upcoming amendment directly targets nonconsensual deepfake pornography. While existing laws could be applied to prosecute those who make it, the new amendment confronts the practice head-on.
How the EU deals with nonconsensual deepfake pornography
“The EU does not have specific regulations on [nonconsensual] deepfake pornography,” Professor Cristina Vanberghen tells TNW.
Vanberghen is a Senior Expert at the European Commission, where she focuses on AI, the Digital Markets Act (DMA), the Digital Services Act (DSA), and cybersecurity policy. She says that nonconsensual deepfake pornography is made illegal through existing regulations, specifically "a corroborative interpretation of rules on GDPR [the General Data Protection Regulation], DSA, national laws, and proposed measures like those existing in AI."
Effectively, “using someone’s images and videos in a deepfake without their consent can be considered a violation of GDPR” and the DSA “imposes stricter obligations on online platforms to quickly remove illegal content and misinformation, which can extend to deepfake pornography.”
According to Asha Allen, director and secretary general of CDT Europe, the EU has opened up another avenue to fight the illegal content: its adoption of the directive on gender-based violence.
Allen says this “makes the creation and subsequent dissemination of deepfake images that make it appear as though a person is engaged in sexually explicit activities, without that person’s consent, a criminal offence.”
On paper, this is a great move, but there is an important difference between a directive like this and a regulation. In the EU’s words, a regulation — like those Vanberghen discussed — is a binding legislative act that must be applied in its entirety across the EU.
A directive, on the other hand, lays out a goal. It is then up to "individual countries to devise their own laws on how to reach [it]." When it comes to the directive on gender-based violence, member states have until June 14, 2027, to transpose it into their national law or policy. This, understandably, raises a raft of issues.
The need for clarity against nonconsensual deepfake pornography
“Common rules on deepfake pornography are crucial,” Vanberghen says. These must set forth “unambiguous boundaries and repercussions to dissuade malicious conduct” and guarantee that victims have legal avenues for protection and recourse.
The issue with the directive's adoption is that it could lead to inconsistent rules across jurisdictions. This, in turn, may create weaknesses for perpetrators of nonconsensual deepfake pornography to exploit, leaving victims vulnerable.
One such example is the UK’s amendment to the Criminal Justice Bill. The End Violence Against Women Coalition (EVAW) points out that “the threshold for this new law rests on the intentions of the perpetrator,” instead of whether the victim of nonconsensual deepfake pornography consents to its creation.
Andrea Simon, the director of EVAW, says this will lead to "a massive loophole in the law" that gives "perpetrators a 'get out of jail free' card," as intent is notoriously difficult to evidence in court. As drafted, the law would require the prosecution to prove the creator's goal was specifically to cause alarm, humiliation, or distress. This, Simon believes, "will ultimately prevent victims from accessing justice."
And that's the kicker: even in jurisdictions that do regulate nonconsensual deepfake pornography, more clarity is still needed to properly protect victims.
Getting laws over the line in the EU
Two things seem clear: the EU needs specific, well-considered regulation against nonconsensual deepfake pornography, and such regulation will eventually happen. The issue, Allen explains, is that "the EU lawmaking process is inherently long," as legislation must pass through 27 countries, seven political groups, and the European Council. Things don't happen quickly in the EU for a reason.
But even when (or if) direct regulation comes in against nonconsensual deepfake pornography, that doesn’t mean it will immediately solve everything.
Bill Echikson of the Center for European Policy Analysis (CEPA) says that Europe "tends to regulate and then struggle to enforce it because of the fragmented nature of the European Union." As an example, he points to GDPR and how it gave "the overall say on Google and Meta to Ireland, and on Amazon to Luxembourg," neither of which had any intention of cracking down.
With newer regulations like the DSA, Echikson says "they have upped the Brussels enforcement" and made administration more centralised. The issue, he believes, is one of resources: the parts of the European Commission that oversee regulations like the DSA often consist of "just a handful of officials."
This, when combined with the structure of the EU, can make enforcement a nightmare — and there’s no reason to believe that cracking down on nonconsensual deepfake pornography would be any different.
Using tech to battle nonconsensual deepfake pornography
“I believe halting deepfake pornography presents significant challenges, akin to cybersecurity,” Vanberghen says. Yet this doesn’t mean we can’t combat it.
One thing Vanberghen points to is the development of AI-driven tools capable of detecting deepfake content, so that platforms can take it down quickly and efficiently.
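To illustrate the general shape of such a tool, here is a minimal sketch of how a platform might screen uploads with a pretrained classifier. It assumes a hypothetical detection model served through Hugging Face's transformers library; the model ID and the "fake" label below are placeholders rather than a real published detector, and production moderation systems are considerably more involved.

```python
# A minimal sketch of automated deepfake screening at upload time.
# Assumption: "example-org/deepfake-detector" is a hypothetical model ID,
# and "fake" is its label for synthetic images; neither is a real artefact.
from transformers import pipeline

detector = pipeline("image-classification", model="example-org/deepfake-detector")

def flag_for_review(image_path: str, threshold: float = 0.9) -> bool:
    """Return True if the classifier is confident the image is synthetic."""
    # The pipeline returns a list of {"label": ..., "score": ...} predictions.
    for prediction in detector(image_path):
        if prediction["label"] == "fake" and prediction["score"] >= threshold:
            return True
    return False

# Flagged uploads would be queued for human moderators rather than
# removed automatically, keeping a person in the loop for contested cases.
if flag_for_review("upload.jpg"):
    print("Queued for human review")
```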
Allen holds a similar view, but points out that these tools need to be heavily researched, so the techniques used are "effective, proportionate, and result in equitable outcomes."
Unfortunately, it's unlikely that nonconsensual deepfake pornography will disappear from society entirely. As Vanberghen says, "while complete eradication may be unattainable, significant reduction is achievable through proactive measures and collaborative efforts across various sectors."
Baines from the BCS echoes this sentiment. She points out that beyond just "technical measures and legal deterrents, we're going to need to try to reduce the stigma of being deepfaked by raising awareness that these aren't real images."
A concerted effort to combat deepfake abuse
The idea is that, alongside technical measures, there needs to be a societal and educational push against this illegal content. Combined with more funding for those prosecuting perpetrators, this could significantly reduce the harm it causes.
Ultimately, nonconsensual deepfake pornography won’t go away by itself. It requires a concerted effort across all aspects of government and society to highlight what it is: abuse.
Europe-wide regulations against creating nonconsensual deepfake pornography are required, but on their own they won't be enough. They must be paired with a framework to enforce them. Technology can play a vital part in this, but a cultural shift is needed too, much like the one that made drunk driving socially unacceptable.
Yes, as technology evolves, it will inevitably be used for nefarious ends. Yet things aren't that simple: the very tools that enable malicious activity can also help prevent it. We may not be able to stop the sun from rising and setting, but we can influence how humans use tech. Let's just hope it happens soon.