Navigating the AI Regulatory Landscape: Comparing Biden’s Executive Order and the EU’s Artificial Intelligence Act

January 2, 2024

By: Abbey Block

The regulation of artificial intelligence (“AI”) is a popular topic of discussion. Politicians, activists, and even AI developers have weighed in, urging the government to create guardrails for the technology’s development and use. Yet, until recently, governments and regulators had been slow to act and pass concrete legislation – leaving regulation of the industry largely to the private sector’s discretion.

In response to widespread calls for regulatory guidance, the Biden Administration took the plunge and issued an Executive Order (“EO”) on “Safe, Secure, and Trustworthy Artificial Intelligence” in October 2023.

Most recently, the EU threw its hat into the regulatory ring, announcing in early December that, after months of development and hours of debate, the European Commission, the Council, and the Parliament had agreed upon the provisional rules of the European Union’s Artificial Intelligence Act (the “AI Act”).

The Biden Administration and the EU have adopted distinct approaches to the regulation of AI, with the Biden Administration delegating regulation to various governmental agencies and the EU adopting a more uniform, risk-based framework. While distinguishable in their design, the American and EU regulatory frameworks similarly aim to target and mitigate the risks inherent in the use of AI technology: the proliferation of misinformation, discrimination, and a lack of transparency.

The AI Act

The AI Act adopts a “uniform, horizontal” and “risk-based” legal framework, classifying uses of AI into several distinct categories based on the degree of negative consequences the technology’s use could pose:

(1) Unacceptable-Risk Technology is prohibited entirely (with certain exceptions for law enforcement use). This category includes technology that has the potential to exploit sensitive characteristics or vulnerabilities (such as age, socioeconomic status, disability, or race), including social scoring programs and biometric categorization systems.

(2) High-Risk Technology is permitted but subject to disclosure, transparency, and risk-assessment obligations before it can be made available to the EU market. For example, High-Risk technology is subject to a “fundamental rights impact assessment” before it can be put on the market by its deployer, and public entities that utilize High-Risk AI will be required to register in an EU database. The High-Risk category includes systems that pose a “significant potential harm to health, safety, fundamental rights, environment, democracy, and the rule of law,” such as “AI systems used to influence the outcome of elections and voting behavior.” The EU Parliament has explained that, with regard to High-Risk technology, “[c]itizens will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights.”

(3) Minimal or No-Risk Technology is generally permitted with minimal or no restrictions. Minimal-Risk technology includes widely available chatbots (e.g., OpenAI’s ChatGPT), while No-Risk technology includes applications such as AI-enabled video games and spam filters. Minimal-Risk technology is subject to certain transparency obligations – i.e., users must be made aware that they are interacting with AI.

Companies that fail to comply with each risk category’s applicable requirements will be subject to hefty fines, set as a percentage of the offending company’s global annual turnover or a predetermined amount (up to €35 million for the worst offenders) – whichever is higher.

The Biden Executive Order

By contrast, the Biden Administration’s EO largely adopts an “agency-based” approach, tasking several governmental agencies with developing and implementing AI regulations for their respective industries. For example, the EO directs the Department of Commerce to develop “guidelines and best practices” for “developing and deploying safe, secure and trustworthy AI systems,” along with guidance for the watermarking of AI-generated content promulgated by the Government (an effort to improve transparency regarding the technology’s use).

The EO similarly imposes certain testing and disclosure requirements on private companies. For example, companies that develop large AI systems with the ability to affect national security, public health, or the economy are required to perform “red-team testing” exercises and report the results to the Government. The Secretary of Commerce, the Secretary of Defense, the Secretary of Energy, and the Director of National Intelligence are responsible for determining the “set of technical conditions for models and computing clusters that would be subject” to these reporting requirements.

Each directive within the EO falls under one of eight broad categories:

(1) New standards for AI safety and security;

(2) Protecting Americans’ privacy;

(3) Advancing equity and civil rights;

(4) Standing up for consumers, patients, and students;

(5) Supporting workers;

(6) Promoting innovation and competition;

(7) Advancing American leadership abroad; and

(8) Ensuring responsible and effective government use of AI.

For example, the EO explicitly seeks to address “equity and civil rights” by directing the Attorney General to produce and deliver a report to the President addressing the use of AI technology in the criminal justice system (e.g., in sentencing, parole/probation decisions, bail, risk assessments, and police surveillance). Similarly, in order to “promote innovation and competition,” the Secretary of Homeland Security is directed to “take appropriate steps to” “streamline processing times of visa petitions and applications . . . for noncitizens who seek to travel to the United States to work on, study, or conduct research in AI.”

The EO outlines certain deadlines by which these directives are to be accomplished but does not provide explicit penalties for failure to comply with its directives.

Different Approaches to Similar Goals

Both the Biden EO and the AI Act recognize the importance of balancing the need for regulation against the need to encourage innovation. Indeed, one of the explicit goals of the Biden EO is to “promot[e] responsible innovation, competition, and collaboration.” Similarly, both regulatory frameworks target the unique risks and concerns stemming from the use of AI, for example by promoting transparency and ensuring that the technology is not exploited for discriminatory purposes. Yet the frameworks adopt distinguishable approaches to addressing these concerns and realizing these goals.

The Biden EO’s administrative approach is seemingly broader in scope, imposing obligations not only on private companies but also on several administrative bodies within the Government. The eight broad categories identified in the EO signal that the administration has adopted an “all hands on deck” approach, delegating responsibility for further regulation to a variety of administrative agencies. In this regard, the Biden EO inherently recognizes that certain agencies may be better equipped than others to develop specialized rules and programs. However, the directives are also so broad that they pose a risk of inconsistency, resulting in a regulatory framework lacking cohesion. AI companies may be subject to regulations promulgated by several different administrative agencies (e.g., commerce, defense, education) and may, as a result, face inconsistent or confusing obligations.

By contrast, the AI Act’s risk-based approach provides for a uniform set of rules, without regard to the specialized knowledge or needs of one governmental division or another. The AI Act does not contemplate delegation of further regulatory developments, but instead provides a framework under which all AI programs are to be evaluated and managed. In this regard, the AI Act may provide greater consistency in its application when compared to the Biden EO, in which a variety of governmental actors will get a say in how the regulations are to be implemented. However, it could also be argued that certain AI programs may not fall so easily within one “risk” category or another – leading to questions regarding what set of rules should be applied.

To a certain extent, both the AI Act and the Biden EO are limited in their ability to impact the industry as it evolves over the long term. The AI Act is in its provisional stages, meaning that it still must go through several phases of approval before it becomes fully enforceable. Indeed, even if all goes according to schedule, the AI Act will not be fully effective until 2026. Given the fast pace at which AI technology is being developed, it seems likely that in two years, the industry’s landscape may be entirely unrecognizable. Under these circumstances, experts have raised concern with regard to the AI Act’s staying power – i.e., whether its provisions will even be applicable to the newest technology by the time they are fully in effect.

However, the AI Act may ultimately prove to be more influential in the long term given that, once passed, it will function as controlling, and largely permanent, legislation. By contrast, the Biden EO carries the force of law for now – but could be retracted or replaced by future presidential administrations. It isn’t hard to imagine that a future administration may have different priorities when it comes to the regulation of AI and may revoke the authority delegated to the various administrative bodies identified in the EO.

Finally, the frameworks adopt different approaches to the enforcement of their provisions. The AI Act imposes severe penalties on the companies that fail to comply with its provisions, while the Biden EO is largely silent with regard to the consequences of non-compliance. For example, the Biden EO tasks private companies with reporting the results of their “red-team testing” to the Government but doesn’t address the consequences of a company’s failure to do so. Accordingly, several critics have wondered if the Biden EO is, at least at this juncture, all bark and no bite.

Critics have also argued that both frameworks will ultimately stifle innovation. Indeed, the president of France, Emmanuel Macron, has openly criticized the AI Act, stating: “We can decide to regulate much faster and much stronger than our major competitors. But we will regulate things that we will no longer produce or invent. This is never a good idea.” Similarly, opponents of Biden’s EO have called it a “Red Tape Wishlist” that puts “the future of American innovation and global technological dominance at risk.”

While neither framework is perfect or obviously superior to the other, the effectiveness of both the AI Act and the Biden EO will only be revealed in the months and years to come – i.e., as private AI companies are required to adapt to the new regulations. Regulatory compliance may prove challenging and expensive, particularly given that, until now, AI companies have largely been left to regulate themselves.

There is no denying that the AI industry is rapidly evolving, and any framework intended to regulate it must similarly adapt to accommodate changes in the technology, the industry, and society’s use and acceptance of these programs. However, the willingness of both the Biden Administration and the EU to devote time and resources to regulation signals that governments throughout the world are seriously considering the repercussions, and the potential for growth, stemming from the use of AI.

Abbey Block

Abbey Block found her path in law as a journalism major, coupling her passion for advocacy through writing with her litigation experience to create persuasive, effective arguments.

Prior to joining Ifrah Law, Abbey served as a judicial law clerk in Delaware’s Kent County Superior Court, where she was exposed to both trial and appellate court litigation. Her work included analyzing case law, statutes, pleadings, depositions and hearing transcripts to draft bench memoranda and provide recommendations to the judge.
