
“This doesn’t only affect Taylor Swift”: An AI and legal expert on the fight against deepfake pornography

Julie MacDonell breaks down the Toronto connection to the Swift deepfake saga, how the superstar could sue the images’ creators, and why new technologies are putting women and girls at risk


Earlier this week, the federal government introduced new legislation to address deepfake images like the ones that recently targeted Taylor Swift. Posting fake sexually explicit photos of the world’s most famous pop star is a good way to incur the wrath of the world’s most enthusiastic fan base; Swifties quickly reported the violations and flooded X (formerly Twitter) with the hashtag #ProtectTaylorSwift to thwart searches for the X-rated images. The story appears to have an unfortunate local connection, with some reports tying one of the most circulated images to a Toronto X user.

The proliferation of deepfake images has become a problem for more than just global superstars. “This is happening in our high schools, and it’s happening to girls everywhere,” says Julie MacDonell, a trademark lawyer and co-founder of the women-led Toronto artificial intelligence firm Haloo. “Maybe now people will start paying more attention.” Here, MacDonell talks about how Swift could seek legal recourse and why Ontario should take inspiration from British Columbia when it comes to fighting deepfakes.


Could you start by giving us a tech-for-dummies explanation of exactly what happened with Taylor Swift?
These kinds of deepfake images are created using an extensive collection of existing digital content, including photographs and videos of a person—Swift, in this case—that are easily available on the internet. Often, deepfakes are created on anonymous content forums where the images disappear after a certain amount of time, but in this case, the images of Swift were shared on X, reaching a much wider audience of millions. There are so many issues at play here that speak to both the power and the dark side of AI and tech right now, including the difficulty of distinguishing authentic content from fake content, which is what we are focused on at Haloo. Swift was definitely a popular topic around the office last week.

Deepfakes have been around since 2017, but it feels like we’ve been hearing a lot more about them lately. It’s not just Taylor Swift—we’ve also seen a deepfake Tom Hanks in a dental insurance commercial and a deepfake Joe Biden calling voters. Why is that?
Because it’s easier than ever to create these images. Before generative AI, you had to be a computer whiz with a lot of expertise and equipment to create a believable deepfake. Now, user-friendly tools powered by advanced AI make it so that pretty much anyone with a computer can do it. The technology here is a type of AI called a GAN (generative adversarial network), which is like having two minds working on one project: one creates, the other critiques. The “creator” makes a deepfake image in which a celebrity’s face is superimposed on someone else’s naked body. The “critic” examines this creation and determines whether it’s convincing enough. This process repeats itself, improving with each cycle, until the synthesized image is indistinguishable from a real photograph to the untrained eye. The other relevant factor is that a lot of social media platforms have moved away from content moderation, including X since Elon Musk bought the platform.
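For readers curious about the “creator and critic” loop MacDonell describes, here is a minimal, illustrative sketch of a GAN training loop written in PyTorch. It is not how any particular deepfake tool works; the network sizes and the toy data are hypothetical placeholders, and a real image GAN would use convolutional networks trained on large photo collections.

```python
# A minimal, illustrative GAN training loop on toy vector data (PyTorch).
# The generator ("creator") produces synthetic samples; the discriminator
# ("critic") scores how real they look. Sizes and data are hypothetical.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # placeholder dimensions for illustration

# Creator: turns random noise into a synthetic sample
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Critic: outputs a probability that a sample is real
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_data = torch.randn(256, data_dim)  # stand-in for a dataset of real images

for step in range(1000):
    real = real_data[torch.randint(0, 256, (32,))]
    fake = generator(torch.randn(32, latent_dim))

    # 1) Train the critic: real samples should score 1, fakes should score 0
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the creator: try to fool the critic into scoring fakes as real
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

Each pass through the loop is one round of the create-and-critique cycle MacDonell describes: the critic gets better at spotting fakes, which in turn pushes the creator to produce more convincing ones.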

According to one study, 96 per cent of deepfake images constitute non-consensual porn. Does that surprise you?
It really doesn’t. Pornography has been used to enact violence against women for centuries, and that is absolutely what this is—violence. The tech has just made it an exponentially bigger problem that is so much harder to regulate. 


What, if any, laws are being broken by whoever created the Swift images?
As far as I’m aware, there is no overarching federal law in the US specifically regulating deepfake pornography, but that may be changing with a new act that was introduced in the Senate in response to what happened with Swift. If it passes, it would open the door for victims to sue creators, which would be an important step. In Canada, Section 162.1 of the Criminal Code clearly prohibits the non-consensual distribution of intimate images, so it’s a criminal offence to share such content without consent.

That said, the existing law doesn’t explicitly address AI-generated content, which creates potential legal ambiguities if synthesized images are argued to be a form of creative expression. The law is really unsettled about how derivative content—content that is created by AI that has consumed existing content—works. Where is the line between creative inspiration and copyright infringement? We’re seeing this play out with Sarah Silverman’s lawsuit against OpenAI, the company behind ChatGPT, which is just one example. It’s all so new, and it’s clearly something that lawmakers are concerned about.

Just this week, there have been reports about new federal legislation that would address sexually explicit deepfakes. And a number of provinces are taking action, with BC leading the charge. BC Premier David Eby actually mentioned Taylor Swift in his comments about the new Intimate Images Protection Act, which is intended to provide victims with broader legal recourse, including expedited takedown orders for tech companies like Google and Facebook. The legislation protects not only traditional intimate images but also near-nude images, videos, livestreams and digitally altered images such as deepfakes. It allows individuals as young as 14 to pursue legal protection without parental consent.  

There is speculation that Swift could pursue legal action against the Toronto man accused of spreading at least one of the now-deleted images. In your opinion, would she have a case?
Not being privy to the particulars, I can only speculate. One legal angle that I haven’t mentioned yet is copyright infringement, which is enforceable across borders. I think we can safely assume that the person who created the deepfake images did not own the rights to the original images. It’s certainly possible that they used images pulled from Swift’s social media or other images that she owns. That is just one of many potential avenues. I would imagine her legal team is exploring different options. 

Still, is it safe to say Swifties work a lot faster than the wheels of justice?
Yes, they do. And that is what I loved so much about this story. Taylor Swift is an icon to women and girls in particular, and that’s who is being victimized by deepfake pornography. This is about what’s happening to a pop star but also what’s happening to high school students. So you had all of these young fans taking action to advocate for their idol, but ultimately they were advocating for themselves. The voices of women are so often silenced in the AI space even as we are being disproportionately victimized by the technology. OpenAI, the most powerful organization in AI and the company behind ChatGPT, currently has zero women on its board. This lack of diverse perspectives has very real effects. In the early versions of ChatGPT, if you asked the bot to tell you a story about a secretary, it would always be a woman. And if you asked for a story about a lawyer, it would always be a man. That’s just a basic example. Wherever misogyny has previously shown up, we will see it reproduced by AI. The best way to ensure that issues affecting women are addressed is to make sure that women are in the rooms where decisions are made. As a women-led tech firm, we are on the front lines at Haloo, and if we have Swift in our corner, that is definitely not a bad thing.


Favourite Swift song?
The real Swiftie in our office is Zola, the 10-year-old daughter of our VP of sales. She will often share her favourite lyrics and keep us up to date on the latest Swift news, and it’s honestly become a part of our culture—a morale booster and a source of inspiration and resilience in the face of adversity. She recently played “The Man” for me, which is now definitely my favourite Taylor Swift song, with such a powerful message. We’re now entering every contest trying to win concert tickets for Zola, so Taylor, if you’re reading this…


This interview has been edited for length and clarity.


Courtney Shea is a freelance journalist in Toronto. She started her career as an intern at Toronto Life and continues to contribute frequently to the publication, including her 2022 National Magazine Award–winning feature, “The Death Cheaters,” her regular Q&As and her recent investigation into whether Taylor Swift hung out at a Toronto dive bar (she did not). Courtney was a producer and writer on the 2022 documentary The Talented Mr. Rosenberg, based on her 2014 Toronto Life magazine feature “The Yorkville Swindler.”
