Forgiveness Before Permission: The Legal and Ethical Implications of OpenAI’s Sora

JAKE ROSENBERG—The release of OpenAI’s Sora 2 marks a new era in artificial intelligence capabilities. Sora is a text-to-video model capable of transforming short written prompts into realistic, sometimes eerily convincing videos. While the technology has almost limitless creative potential, it also carries serious legal implications for privacy, likeness, and intellectual property rights. Sora can produce videos of people, including people who do not exist, raising urgent questions about consent, authenticity, and the scope of fair use. As Sora and tools like it become more mainstream, legal experts and policymakers are debating whether current legal frameworks can adequately address the misuse of human identity in AI-generated content.

Sora allows users to create videos and render dynamic environments with lifelike motion and realistic human expressions. There is palpable excitement about Sora’s creative potential, but also unease about likeness misuse. Indeed, OpenAI’s own system card acknowledges “Likeness Misuse and Harmful Deepfakes” as areas of ongoing risk. In response to growing criticism, including public concern from celebrities such as Bryan Cranston, OpenAI has strengthened its policies and guardrails. Even so, the company continues to allow depictions of celebrities on Sora, so long as the figure is deceased.

OpenAI’s decision to allow depictions of deceased celebrities prompted Sora users to flood the platform with videos featuring historical figures behaving in ways they never would have, or could have, in real life. Perhaps the most prominent examples of this trend were the countless videos of Martin Luther King Jr., containing “disrespectful depictions” of the Nobel Peace Prize–winning advocate for civil and human rights. In response to this distasteful content, OpenAI released a statement in conjunction with the King estate, King, Inc., announcing, “Authorized representatives or estate owners can request that their likeness not be used in Sora cameos.” Nevertheless, the damage had largely been done.

OpenAI’s response is but one example of the company’s self-proclaimed tendency toward forgiveness over permission. The Martin Luther King Jr. posts are yet another instance of its pattern of releasing groundbreaking tools first and responding to ethical concerns later. This reactive approach shifts the burden onto individuals and estates, who must scour the internet for likeness misuse and request that content be taken down, rather than simply preventing the misuse before it begins.

However, while the estates of historical figures have generally fought to protect legacies, others have chosen to embrace the new technology. Jake Paul, an influencer and boxer with a major social media presence, voluntarily added his “cameo” to Sora, allowing the public to generate videos using his likeness. The internet quickly filled with AI-generated clips of Paul engaging in absurd antics, to which he responded with a satirical threat to sue anyone posting such videos. Shortly thereafter, Paul announced that he was a proud investor in OpenAI and celebrated his cameo, which garnered over one billion views in just six days. He declared himself the “first celebrity NIL cameo user,” blurring the line between endorsement, parody, and commercialization. OpenAI CEO Sam Altman has since announced plans to monetize these cameo likenesses, suggesting a future in which likeness itself becomes a source of profit within the platform.

Voluntary participation and monetization, as in Jake Paul’s case, point toward a lawful model of likeness use built on licensed consent and compensation. The nonconsensual creation of defamatory or misleading content, however, raises a greater concern. Most states recognize some version of a right of publicity that protects against the misappropriation of a person’s likeness, but not all do. There are also legal concerns regarding the use of copyrighted material in Sora content, both in the training process and in the outputs. Still, in this context, the right of publicity may be better suited to protecting individual rights than copyright law. Congress has also proposed legislative solutions such as the NO FAKES Act, a bill that seeks to protect against AI-generated content that uses individuals’ likenesses without their consent. Legislative action may be the best avenue for reform, as current AI content-creation capabilities far exceed what legislators could have imagined when they enacted the laws in force today.

As one of the first and most sophisticated technologies of its kind, Sora is uniquely positioned to test which reforms, protections, and solutions work and which do not. With that position comes the need to assess the legal implications of such a powerful tool, especially one released to the public. Through regulatory processes and proposed legislation, perhaps the right balance can be struck between protecting individuals and preserving creativity. Legislation takes time, however, and with a rapidly evolving technology like Sora, one with such great potential to misappropriate the likenesses of so many individuals, playing legislative catch-up could be exceptionally detrimental. While OpenAI claims to be strengthening its guardrails, one must wonder whether the forgiveness-before-permission approach has society’s best interests in mind. The challenge ahead is ensuring that the law evolves as quickly as the technology it seeks to govern, so that human likeness remains a matter of consent rather than raw material to be exploited with a few lines of text typed behind the shield of a phone or computer.