Swift Action: The Push for Deepfake Reform

ANDREA FISCU—As generative artificial intelligence (AI) continues to advance, so do the capabilities of perpetrators to create ever-more realistic pornographic deepfakes. Legal experts have long reflected on the damage these violent images and videos can inflict and contemplated how the law may respond. Taylor Swift is among the most recent victims, and with her celebrity, the issue has received heightened attention.

States are now beginning to enact deepfake-related laws that may give victims the right to sue those who create images using their likenesses and that criminalize possession and distribution of the images. However, these laws face legal hurdles due to the protections of the First Amendment and the Communications Decency Act of 1996 (CDA). The CDA, specifically Section 230, shields online platforms from liability for content posted by their users. The section states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The law was passed in the early days of the internet, when, for the first time, people across the world could bring their opinions to a public forum. Congress’s goal in enacting Section 230 was to allow this diversity of opinion and innovation to flourish. In effect, the law has protected platforms from liability for user content so long as they moderate in good faith. This has allowed deepfakes to proliferate with little accountability, since the creators of these violent images are often difficult to identify.

Critics recommend amending Section 230 to revoke liability protection from platforms that “knowingly or intentionally” allow illegal content, or to afford protection only to platforms that actively work to stop or remedy problems with their content. In practice, this change might resemble the notice-and-takedown regime of the Digital Millennium Copyright Act, under which platforms must remove offending content upon request. However, the First Amendment stands in the way of each of these proposed legal solutions; narrowly tailored measures that address deepfakes specifically would stand a better chance of surviving a First Amendment challenge.

As the debate over the need for more stringent deepfake regulations grows, perhaps the involvement of notable figures will finally push these issues to the forefront. While Taylor Swift was on her record-breaking Eras Tour, pornographic deepfakes of her appeared on X. These images, which were described as “disgusting,” remained live for 17 hours and were viewed more than 45 million times before being taken down. It has not been confirmed whether Taylor will take legal action; however, the incident prompted a US bill to tackle nonconsensual sexual deepfakes. Additionally, Microsoft has begun working on technological fixes that would prevent similar images from being generated.

Taylor has confronted sexism head-on before, such as in her notable 2017 lawsuit against a radio DJ who groped her. Her influence, along with her willingness to use her platform to oppose sexual violence, could generate profound public awareness of the issue. Other victims might then feel empowered to seek justice, and authorities would face pressure for stronger enforcement.
