RENIER MARTINEZ—As artificial intelligence (“AI”) continues to integrate into our daily lives, lawmakers have struggled to keep pace with its legal consequences. While Congress has introduced numerous AI-related bills, no comprehensive federal regulatory statute has been enacted. Into that vacuum steps Florida. Filed on December 22, 2025, and backed by Governor Ron DeSantis, Senate Bill 482 (“SB 482”) represents a multifaceted effort to regulate interactions with AI platforms, require transparency for users, and create civil remedies. The bill is titled the “Artificial Intelligence Bill of Rights.” At its core, SB 482 seeks to define a series of “rights” for Floridians in the context of AI—rights that extend beyond traditional data privacy and into the realms of personal autonomy and consumer protection.
A central pillar of SB 482 is parental control over minors’ access to AI systems. Under the proposed legislation, “companion chatbot platforms,” broadly defined as AI systems with conversational interfaces, would be required to prevent minors from creating new accounts or maintaining existing ones without parental or guardian consent. Platforms would also be required to provide parents with access to their children’s AI interaction histories and offer mechanisms to limit or supervise ongoing use. From a policy perspective, these provisions respond to growing concerns about the psychological and social impacts of AI on young users. In practice, tying account access to parental consent would impose affirmative legal obligations on platforms to implement age-verification and oversight features, marking a significant departure from the current self-regulatory model.
Another core feature of the bill is disclosure obligations for AI interactions. SB 482 would grant Floridians the “right to know whether they are communicating with a human being or an artificial intelligence system, program, or chatbot.” Although the precise language remains subject to committee revision, the bill appears to require AI platforms, in order to avoid liability, to provide clear notice whenever users engage with generative systems.
Perhaps the most novel and legally significant element of SB 482 is its treatment of unauthorized uses of a person’s name, image, or likeness (NIL) by AI technologies. Drawing on existing right-of-publicity laws, the bill would allow Floridians to pursue civil remedies when AI systems misappropriate their identity for commercial purposes without consent. The provision targets the proliferation of “deepfakes” and synthetic media that can depict individuals in realistic but fabricated scenarios. Traditional right-of-publicity doctrines developed in an era of human actors and discrete commercial uses, but generative AI has complicated that framework by enabling rapid, scalable reproduction of identity with minimal human involvement. By expressly recognizing AI-generated likenesses as actionable, SB 482 attempts to close a gap in existing law that has left individuals with limited recourse against unauthorized digital exploitation.
At the same time, SB 482 raises important questions about the limits of state-level AI regulation. Its provisions implicate unresolved issues surrounding compelled speech, intermediary liability, and the practical challenges of enforcing age-verification and consent requirements across rapidly evolving platforms. As other states consider similar legislation, courts may be asked to determine how far traditional consumer-protection and publicity doctrines can be extended to cover generative technologies without running afoul of constitutional or preemption concerns.
That uncertainty is compounded by recent federal action. SB 482 now advances against the backdrop of President Trump’s executive order titled “Ensuring a National Policy Framework for Artificial Intelligence,” which signals a renewed effort to coordinate AI policy at the federal level and address the proliferation of divergent state laws. The order authorizes federal agencies, including the Department of Justice’s newly created AI Litigation Task Force and the Department of Commerce, to evaluate existing state AI laws and identify provisions that conflict with the national policy framework. Notably, the order contemplates challenges to state laws that require AI models to alter truthful outputs or impose disclosure obligations deemed inconsistent with federal objectives, setting the stage for potential federal-state conflict.
For a bill like SB 482, these federal restrictions raise important questions about the constitutional limits of federal authority. Because executive orders cannot themselves repeal state law, any meaningful preemption or invalidation of state AI regulation must ultimately be resolved through litigation or congressional action. This unresolved tension highlights a central challenge of contemporary AI governance: without comprehensive federal legislation, states like Florida have stepped in to regulate AI through existing legal frameworks, while the executive branch seeks to limit the scope of those efforts. Whether SB 482 becomes law, and how it will be received in a federal system increasingly focused on AI leadership and uniformity, may depend on how courts interpret the interplay between state autonomy, federal prerogatives, and the constitutional allocation of regulatory authority in the AI era.