Pump The Brakes or Step on the Gas? An Analysis of Emerging AI Regulatory Frameworks

April 6, 2023

By: Abbey Block

Artificial Intelligence (“AI”) is growing exponentially and has infiltrated nearly every sector of society. Despite this growth, the US has yet to pass comprehensive federal legislation addressing the technology’s use, commercialization, and development. Although several states, such as New York, Maryland, and Washington, have implemented their own regulations, no comparable supervisory scheme exists at the federal level.

In response to the largely unregulated growth of the technology, tech leaders and researchers published an Open Letter calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” The letter was signed by some of the biggest names in tech, including Steve Wozniak and Elon Musk. According to the letter, the pause is warranted given the “profound risks” that advanced AI systems pose to society – including the “loss of control of our civilization.” To address this threat, the letter urges that “AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems,” including regulatory authorities specifically dedicated to the oversight and tracking of AI systems.

The letter was published just as the UK Government unveiled its first “artificial intelligence white paper,” which aims to “guide the use of artificial intelligence in the UK, to drive responsible innovation and maintain public trust in this revolutionary technology.” The white paper embraces the idea that public trust is key to continuing innovation and further development of AI technology.

The juxtaposition of the two publications is hard to miss. While the UK is embracing innovation-centric regulation and pushing for the expansion of AI, tech leaders in the US want to pump the brakes on the technology’s development. The UK’s push forward raises the question: what kind of AI regulations will US lawmakers and tech leaders be willing to adopt?

The UK white paper recognizes that while AI offers significant benefits, it also poses risks to privacy, mental and physical health, and the preservation of human rights. With these risks in mind, the publication offers a framework of regulatory guidance to replace the “patchwork of legal regimes” currently in place. The white paper, which was presented to Parliament on March 29, 2023, outlines five principles (the “Principles”) intended to guide the use and development of AI in the UK:

  • Safety, security, and robustness: all AI systems should function in a robust, secure and safe way throughout the AI life cycle so that risks are continuously identified, assessed and managed.
  • Transparency and explainability: all AI systems should be appropriately transparent and explainable, meaning that regulators have access to sufficient information to give meaningful effect to the other principles. This Principle will also ensure that AI suppliers and users are less likely to inadvertently break the law, infringe rights, and cause harm.
  • Fairness: AI systems should not undermine the legal rights of individuals or organizations, discriminate unfairly against individuals, or create unfair market outcomes.
  • Accountability and governance: regulators must look for ways to ensure clear expectations for regulatory compliance. Regulator guidance should reflect that “accountability” refers to the expectation that organizations and individuals will adopt appropriate measures to ensure the proper functioning of the AI systems they design, develop, train, operate, or deploy.
  • Contestability and redress: when appropriate, users, impacted third parties, and actors in the AI life cycle should be able to contest an AI decision or outcome that harms them or creates a material risk of harm. Regulators will be expected to identify existing routes of contestability and redress and implement proportionate measures to ensure that the outcomes of AI use are contestable where appropriate.

These Principles, the white paper explains, should be used to “act quickly” to develop a “clear, pro-innovation regulatory environment,” which recognizes the risks inherent in the technology. However, the Principles are initially to be issued on a “non-statutory” basis to “tailor the implementation of the [P]rinciples to the specific context in which AI is used.” Indeed, the paper remarks that “[n]ew rigid and onerous legislative requirements on businesses could hold back AI innovation and reduce [the] ability to respond quickly and in a proportionate way to future technological advances.”

Rather than restricting the use of AI through structured statutory regulation, the goal of the white paper’s framework is to respond to risk, build public trust, support business investment, and build confidence in innovation. To achieve these goals, the UK will rely on existing regulatory bodies to issue practical guidance in accordance with the white paper’s Principles. Only if the implementation of these Principles proves ineffective without legislation will Parliament introduce a “statutory duty on regulators requiring them to have due regard to the [P]rinciples.”

The UK’s “pro-innovation” framework undoubtedly provides a flexible and immediate approach to regulation. By relying on existing regulatory bodies, the issuance of regulatory guidance can be both specialized and expedited. It would be far more cumbersome to erect an entirely new regulatory body dedicated solely to the governance of the AI sector (a suggestion proposed by the signatories of the Open Letter). Further, existing regulatory bodies already possess a base level of expertise regarding how AI can and will impact their respective sectors (finance, defense, employment, education, etc.). Relying on this pre-existing expertise will help ensure that AI-related practical guidance addresses the actual concerns of those in the affected industries.

At the other end of the spectrum, reliance on existing regulatory bodies ignores the fact that some AI operators or systems may not fit squarely within one industry category. For example, what if an AI program is used to expedite the hiring of educational professionals? Such a program would implicate regulatory principles promulgated by both the education and employment sectors. Alternatively, some AI operators or systems may fall entirely outside the scope of existing regulatory frameworks.

Such hypotheticals raise several questions: Which industry’s “best practices” should apply? Who decides which industry practices apply? And what rules apply in industries that don’t have a pre-existing regulatory body? Thus, although the UK white paper’s framework seeks to eliminate the existing “patchwork of legal regimes,” in practice, some AI operators and systems may end up falling through the regulatory cracks.

The UK’s industry-specific framework may also result in conflicting regulations from one industry to the next, creating a situation in which AI operators are subject to several conflicting “best practice Principles.” Thus, while diversification of regulation allows for a specialized approach, it may also lead to confusion and inadvertent non-compliance. To address these concerns, the UK white paper promotes “centralized coordination and oversight.” However, given the breadth of industries impacted by the development of AI, this may be difficult to achieve in practice.

In this regard, the UK’s approach differs from that employed in other parts of the world. For example, the European Union has proposed the “AI Act,” which sets out a horizontal legal framework for the development, commodification, and use of AI services and products. The Act adopts a centralized, risk-based approach that focuses on the characteristics of the AI technology rather than the industry in which it is used. The regulations would apply across the entirety of the EU, ensuring a level of legal certainty not provided by the UK’s industry-specific framework.

Where does this leave the US? Are lawmakers likely to embrace the UK’s industry-specific, principles-based approach? It seems unlikely.

Although the US has yet to propose or enact comprehensive federal regulation of AI, policymakers seem keen to do so soon. A commission launched by the US Chamber of Commerce (the “Commission”) put forth a set of principles that, like those articulated in the UK’s white paper, seek to provide broad guidance for the regulation of AI technology. The Commission published a report urging that “[p]olicy leaders must undertake initiatives to develop thoughtful laws and rules for the development of responsible AI and its ethical deployment.” To this end, the Commission adopts a hybrid approach, arguing both that (1) new laws and rules must be developed to regulate AI; and (2) in areas of lower risk, policymakers should encourage a soft-law and best-practice approach that relies on self-policing and self-regulation. Notably, however, the Commission cautions that a “soft-law” approach is not intended to suggest that new laws and regulations are not needed. Thus, unlike the UK’s white paper, the Commission’s report urges regulators and lawmakers to adopt hard-and-fast rules to govern the development and use of AI rather than relying entirely on industry-wide self-governance.

Exemplifying this point, on March 8, 2023, a subcommittee of the House Oversight Committee held a hearing on artificial intelligence titled “Advances in AI: Are We Ready for a Tech Revolution?” There, Aleksander Madry, director of the Center for Deployable Machine Learning at the Massachusetts Institute of Technology, urged lawmakers to “step up” and regulate emerging artificial intelligence companies so that developers realize they have “responsibilities” separate from merely making a profit. Representatives from both political parties seemingly agreed that the need for regulation was paramount.

Given these circumstances, the US is unlikely to follow in the UK’s regulatory footsteps. Indeed, US lawmakers seem to have a greater appetite for definitive statutory regulation than their counterparts in the UK, as evidenced by the Commission’s report and Congress’s apparent interest in the issue of AI regulation. While the UK’s white paper proposes a “wait and see” approach that relies largely on industry-specific self-regulation, the US appears eager to embrace a more traditional form of centralized statutory regulation.

Abbey Block

Abbey Block found her path in law as a journalism major, coupling her passion for advocacy through writing with her litigation experience to create persuasive, effective arguments.

Prior to joining Ifrah Law, Abbey served as a judicial law clerk in Delaware’s Kent County Superior Court, where she was exposed to both trial and appellate court litigation. Her work included analyzing case law, statutes, pleadings, depositions and hearing transcripts to draft bench memoranda and provide recommendations to the judge.
