Illuminating the Shadows: A Conversation with Vivek Jayaram on AI, Copyright, and the Implications of a Landmark Settlement

JANAYE DOWERS—Most private settlements slip quietly into obscurity, shielded from public scrutiny. But Bartz v. Anthropic stands apart. This case proceeded as a class action that required court approval, which placed its settlement terms in the public record and cast a rare light on a process that usually remains hidden. That visibility, combined with the broader stakes of Artificial Intelligence (“AI”) training on copyrighted works, positions Bartz as a potential turning point in copyright law’s relationship with AI.

The plaintiffs in Bartz were a group of book authors. They alleged that Anthropic infringed their copyrights by copying roughly 500,000 books from sources such as shadow libraries, torrent sites, and public web repositories to train its large language models (“LLMs”). The case ended with a $1.5 billion settlement, approximately $3,000 per work. Under the Copyright Act, statutory damages typically range from $750 to $30,000 per infringed work—and up to $150,000 for willful infringement—while actual damages in most author or photographer suits are often only a few thousand dollars. Viewed against that backdrop, the $3,000-per-book payout is a modest recovery within the statutory range; what makes it extraordinary is that a leading AI company agreed to pay at all. Bartz marks the first time a major AI company has paid to resolve copyright infringement claims, a signal that may ripple across the industry.

For Professor Vivek Jayaram, Founder of Jayaram Law and Adjunct Intellectual Property Professor at the University of Miami School of Law, the symbolism of the payment matters more than the figure itself. “The fact that you’ve now got a defendant making a payment is sort of a big thing,” he said. “Here you have a major, sophisticated company weighing the risks of going to trial against the cost of settlement and ultimately choosing to pay. That choice suggests the defendant recognized some substantial risk in the claims.”

The implications extend far beyond Anthropic’s decision to settle. As Professor Jayaram noted, similar claims targeting Meta, OpenAI, and other AI developers could cause them to “start falling like dominoes” and decide, “Okay, we are going to pay too.”

Bartz highlights an emerging concern: can the Copyright Act of 1976 handle AI disputes, or does Congress need to enact new legislation? Professor Jayaram believes the current framework remains strong. “I think the Copyright Act remains sufficient to deal with the universe of issues we have here.” Courts already employ doctrines such as fair use and substantial similarity to evaluate infringement. Still, he admitted that new legislation could bring clarity. “If we decided to either amend the Copyright Act or pass a new law . . . I don’t think that’s a bad idea,” he explained. “It may be helpful, but it’s not necessary.” In practice, he suggested, courts may draw lines based on the source of the data. “Publicly available material may survive legal scrutiny. However, data pulled from shadow libraries, private websites, or torrents may create reputational and legal risks,” Professor Jayaram explained.

Professor Jayaram pointed to contract law as another shield for creators. Website owners can use their terms of service to ban AI training. Courts have enforced such provisions in cases like hiQ Labs v. LinkedIn. “Just because something is permissible doesn’t mean it can’t be overridden by contract,” Professor Jayaram observed. This approach offers creators direct control, even if courts ultimately permit some training under copyright law.

“The bright line, and the most effective way to protect creative rights, is to focus on the output,” Professor Jayaram explained. “If a new song, film, or artwork substantially copies your work, the Copyright Act gives you the sword to stop it.” Concerns about inspiration or style do not rise to the level of infringement, but wholesale copying does, and the law already draws that line. “By keeping enforcement tied to outputs,” Professor Jayaram argued, “creators can safeguard their rights without overextending copyright in ways that stifle innovation.” The Copyright Act already allows creators to challenge AI-generated works that cross into substantial similarity with protected expression. Courts can issue injunctions and award damages, giving artists meaningful remedies when AI products unlawfully mirror their work.

Professor Jayaram placed the dispute in a larger historical frame. “Artists have always borrowed from their predecessors, whether in film, music, or visual art,” he reflected. What feels new is the scale and speed of AI’s capabilities. “The unprecedented volume, scale, and ease with which people can now absorb and reuse creative inspiration fuel both the promise of AI and the deep anxiety surrounding it,” he explained.

Although he acknowledges discomfort among creators whose styles can be replicated with ease, Professor Jayaram believes AI can extend, not erase, creativity. “Decades from now, when we look back, we’re going to be glad that we allowed this technology to exist—as long as we kept it in check—because it ultimately contributes to society at large.”

How society will remember Bartz v. Anthropic remains uncertain. Professor Jayaram mused, “This could either end up being the case that everybody—ten or twenty years from now—in law school reads as the trailblazer for how IP can and cannot be used to train an LLM, or it could be a case that nobody remembers two years from now.”

Settlements rarely create clear legal precedent. Yet, as Professor Jayaram observed, “they often influence industry conduct by signaling companies’ risk tolerance—much like early music-sharing settlements that shaped behavior before courts ever ruled definitively.” While Bartz stands out because its class-action posture brought the settlement into the public eye, its long-term impact depends on whether other AI companies settle or press forward to trial. If similar settlements occur, policymakers may be encouraged to consider “a mandatory licensing framework, much like the licensing regimes administered by performance rights organizations in the music industry,” Professor Jayaram suggested.

For now, Bartz illuminates both the risks that AI companies face and the tools creators can use to protect their works. Whether it becomes a landmark or fades into the margins, Bartz has already cast new light on the unsettled tensions arising from AI’s use of copyrighted material.