Regulating Artificial Intelligence: Self-Regulation, State-Regulation, and Everything In-Between

DYLAN JOHN MENCIA—Most people have heard the term “artificial intelligence” (often referred to in its abbreviated form, “AI”) and have likely heard of the benefits that this burgeoning field of technology promises to bring. What has not been talked about as much, however, is how we as a society plan to regulate these new technologies in a way that will both facilitate growth and ensure stability across industries and society at large. While politicians have been largely silent on the issue, industry insiders have begun to speak out. Elon Musk, for example, has warned that we must regulate AI before “it’s too late” and has even gone so far as to say that establishing a harmonious relationship between AI and humanity is “the single biggest existential crisis that we face.” Additionally, the late, world-renowned theoretical physicist Stephen Hawking was unequivocal in his concerns, noting that without the proper regulatory scheme in place, the emergence of AI “could be the worst event in the history of our civilization.” While some AI experts have rebuked this sort of rhetoric as alarmist, many share Musk’s and Hawking’s concerns.


While the need for regulation is clear, determining the most appropriate method is a tricky endeavor. On one end of the spectrum is immediate and heavy-handed state regulation. This strategy carries with it the undesirable effect of imposing overly burdensome rules that may ultimately stifle innovation. Americans have a vested interest in being at the forefront of this technological frontier, and superfluous regulation could be a serious impediment to that objective. Under a heavy-handed regulatory scheme, nations such as China would become the beneficiaries of our fastidiousness and would undoubtedly capitalize on the opportunity. On the other end of the spectrum is a laissez-faire approach wherein regulation is left up to market participants. While a libertarian might espouse such a strategy, an entirely hands-off approach is undesirable and ineffective for a couple of obvious reasons. First, participants will always have an incentive to cheat on agreed-upon standards, because adhering to those standards puts a firm at a competitive disadvantage. More critically, the absence of a proper enforcement regime renders self-regulatory efforts ineffective, affording no real remedies to the parties involved. Nevertheless, self-regulatory bodies have already cropped up. Consider, for example, the partnership among Google, DeepMind, Facebook, Microsoft, Apple, Amazon, and IBM, together with the American Civil Liberties Union (ACLU) and the Association for the Advancement of Artificial Intelligence (AAAI). Under this partnership, participants agree to certain tenets of behavior intended to promote industry growth while protecting the interests of society at large. Critics have voiced concern over the efficacy of such partnerships given the lack of any enforcement mechanism. A more appropriate regulatory scheme would lie somewhere between these two extremes.

While regulations are typically enacted on an ex post basis, that is, in response to events that have already occurred, the rapid rise of artificial intelligence calls for ex ante regulation to ensure stability going forward. That said, regulators should adopt a light-handed approach so as not to stifle innovation. To guide legislation, representatives should attempt to categorize and prioritize those forms of AI that pose the most immediate and existential threats. For example, legislators may want to focus their immediate attention on applications that affect large portions of the American workforce, such as vehicle automation. In fact, this particular concern has been voiced by Democratic presidential candidate Andrew Yang, who has expressed deep concern for the three million truck drivers facing the driverless-vehicle era. Yang’s concern for the truck driver is a microcosm of the impact AI could have on broad swaths of the American public. Alternatively, regulators may want to first address those forms of AI that pose threats to national security, such as the political “bots” that were used to influence the 2016 U.S. presidential election.

On the other end of the spectrum are less threatening forms of AI, such as Pandora, which uses basic machine learning to curate songs based on user preferences; programs like this would be assigned to the lowest risk profile and thus left largely untouched. Determining which forms of AI pose the most serious threats will undoubtedly be the most difficult part of the regulatory process. To that end, regulators should work closely with the private sector to better understand the technology and its associated risks. For example, regulators may want to collaborate with partnerships like the one mentioned above to approach the problem more effectively and efficiently.

Regulators should impose targeted restrictions on high-risk forms of artificial intelligence. By categorizing and prioritizing different forms of AI, regulators can enact proactive legislation that will guide us through this new age of technological innovation.