
The Deepfake Porn Problem Is Bigger Than Taylor Swift

A researcher who studies cyber sexual violence explains how and why women are targeted.

Photo illustration by Slate: Taylor Swift on a laptop in a dark room. Photo by Valerie Macon/AFP via Getty Images and Choreograph/iStock/Getty Images Plus.

Last week, A.I.–generated nude images of pop superstar Taylor Swift were produced and distributed without her consent. They circulated throughout the internet, with a single post on X (née Twitter) garnering 45 million views before the site took it down. Deepfakes, as they’ve come to be called in recent years, often target female celebrities, but with the rise of A.I., it’s easier than ever for everyday people (almost always women) to be targeted. Last year, more than 143,000 deepfake porn videos were created, according to one estimate from the independent researcher Genevieve Oh, more than in all previous years combined. That number will, in all likelihood, only continue to rise.

Slate spoke with Sophie Maddocks, a researcher at the University of Pennsylvania who studies cyber sexual violence and image-based abuse. We talked about fake sexual images through history, and how they can be used to target famous and ordinary women alike. Maddocks also told us what it would take to create a world where A.I. porn is a tool for creative expression, rather than abuse. This interview has been lightly edited for clarity and length.

Slate: Fake nude images aren’t an entirely new issue. What’s the history of this problem? 

Sophie Maddocks: There’s a historian, Jessica Lake, and she’s done some really interesting research tracing the potential origins of the creation of fake nude images. She talks a lot about the rise of photography in the late 19th century, and writes about an early example of face-swapping in which the faces of high-society women were pasted onto nude bodies and then circulated. Not only is that one possible starting point when thinking about the history of fake nudes, it’s also an interesting lens on how we see the creation of A.I.–generated fake nudes today. Fake nudes first went viral in the online sense in 2017 with the creation of the DeepNude app, in which the faces of individuals were digitally pasted onto the bodies of adult film actors, almost exactly mimicking what had been done in the late 19th century with photography.

So there is a long history to this harm, and a long-standing desire to produce fake nude images—almost exclusively of women. With the rise of the internet, we’ve seen ways of creating and sharing ever more photorealistic images, until we get to the last year, with the rise of video- and image-generation models that create extremely realistic imagery, and A.I. tools trained on millions of images of girls and women scraped from the internet without their consent. You can either use a text prompt or an existing image to produce a very realistic fake nude.

So A.I. has increased the volume and severity of this problem on the internet.

Absolutely. In 2017, when activists and the first people affected by A.I.–assisted deepfakes, like famous actors and singers, started to raise the alarm about this issue, they really gave us a roadmap for what would happen: They said this is going to be used to create child sexual abuse imagery, this is going to have really damaging physical and psychological effects not just on famous women, but on everyday women and girls. And all of their predictions have come true in really disturbing ways, especially over the last year.

In terms of prevalence, there was a tenfold increase in A.I.–generated nude images online within a seven-month period in 2023. And there are offshoots of victimization: One is extortion of adolescent and preteen boys—the creation of A.I.–generated nudes and then threats and blackmail to spread those nudes. We also see preteen and adolescent girls, who already disproportionately experience what some people call revenge porn in the classroom, now experiencing the circulation of fake nude images in their school environments as well. So there has been an exponential increase. There’s also been an increase in the breadth of victimization.

To what degree does this kind of abuse target celebrities like Taylor Swift and to what degree does it target everyday people?

That’s a question I’m also trying to parse myself. When victims experience any form of image-based sexual abuse, they typically go offline immediately. The reaction to having your privacy invaded and being deeply traumatized by image-based sexual abuse is not to go right back online and share what’s happened to you. So it’s extremely hard to ascertain prevalence.

When we look at the broader statistics, we clearly see very specific areas of prevalence. We know that minoritized women and femmes are more likely to be targeted by image-based sexual abuse, as are young women, single women, LGBTQ+ individuals, and Indigenous women. We see those patterns of prevalence with image-based sexual abuse more broadly. And I think we can say with relative plausibility that those patterns will be repeated in the rise of A.I.–generated nude imagery. But I don’t have any prevalence research on that precisely.

I did some research in 2019 and I spoke to people who work with individuals who were leaving abusive relationships and controlling relationships. They said that as soon as high-speed internet and social media platforms became more prevalent, every single new person entering their shelter had experienced some form of image-based sexual abuse. So I think there is a direct correlation between the use of A.I.–generated nonconsensual nudes and controlling and abusive behavior in relationships.

This is about controlling other people’s bodies and controlling other people’s visual representation. The sexual aspect of it is about shaming, silencing, and humiliating people. We can see that in the types of people who are chosen to be victimized. Typically when women speak out—women in journalism, actors, musicians, especially for gender equity or against sexual violence—we often see them immediately targeted by digital sexual violence. This is completely connected to power and control.

My understanding is that there are some state laws on the books in the U.S. that seek to punish this type of abuse. What does the legal landscape look like and are these laws actually effective?

So, a federal bill has been introduced to criminalize specifically the creation of A.I.–generated deepfakes. And I think there are currently 16 states that offer some legal protection specifically around the creation of deepfake nudes. Also, Joe Biden, in his executive order on A.I., talked about protecting sexual privacy and finding ways to stop the creation of deepfake child sexual imagery. But, yeah, it’s a big patchwork in terms of civil and criminal recourse.

And it’s not necessarily always centered around victim needs. Usually the victim’s most important need is to get content removed. So if we had a legislative landscape that was victim-centered, then it would be thinking about how we hold platforms accountable for removing content more quickly. It would focus on how we stop repeat offenders. It would focus on how we make online environments safer for people to actually use.

On the perpetrator side, we know that there are areas on the internet where individuals gather so that they can plan to create and distribute fake nudes in the most harmful way possible, and so that they can keep that barrage of harassment going indefinitely. So the impacts on victims can be immense. It can mean losing their job. It can mean changing their name. It can mean moving to another part of the country. This is a form of abuse that has extremely life-ruining consequences.

Are there any other ways in which you think we can create a better internet in this regard—one that is kinder and more sensitive to these issues?

Something that gives me a lot of hope is my own research. Some of that has been working with adolescents who have to navigate this new digital world where they are always potentially expecting to be exposed in certain ways online. What comes through really powerfully is people’s desire to protect each other’s privacy, to act in pro-social ways, and to support each other in online spaces. That comes through quite strongly as well with the case of Taylor Swift fans who are jumping to her aid and defending her. I think from the user side, I see more pro-social behavior than not.

Perhaps this will prompt the kind of social network regulation that we need. So for example, we want Google and other search engines to proactively de-list these video generators. We want websites and porn sites that host sexual imagery to have really meaningful age and consent verification practices. We want preventative measures to discourage people from producing this content and then proactive moderation when they continue to flout certain rules.

It would be amazing to imagine what it would have looked like if we had created these A.I. tools in a social environment that centered and prioritized consent—and maybe A.I. tools would’ve been, and maybe they still can be, a way to enjoy and express your sexuality in ways that aren’t harmful to other people.