
Artificial Intelligence Act: What Is the European Approach for AI?

On April 21, 2021, the European Commission unveiled the first-ever legal framework on artificial intelligence (AI): the Artificial Intelligence Act. The extensive AI Act addresses the risks stemming from the various uses of AI systems and aims to promote innovation in the field of AI. Mark MacCarthy and Kenneth Propp have called the proposed regulation “a comprehensive and thoughtful start to the legislative process in Europe [that] might prove to be the basis for trans-Atlantic cooperation.” This post builds on MacCarthy and Propp’s discussion and closely examines the key elements of the proposal—the provisions most likely to shape the discussion regarding the regulation of AI on this side of the Atlantic.

Before diving into the legislation itself, it is worth recognizing the substantial work the European Union undertook to produce this text.

The Consultation Process 

In 2019, following the European Commission’s outline for AI in Europe, two deliberating bodies collaborated to publish the Ethics Guidelines for Trustworthy AI. The High-Level Expert Group on Artificial Intelligence wrote the guidelines after consulting with members of the European AI Alliance, a multi-stakeholder forum created to provide feedback on regulatory initiatives regarding AI. The guidelines laid out seven key requirements that AI systems should meet to be deemed trustworthy: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.

The ethical guidelines (and the assessment tool created to operationalize the guidelines) helped frame the discussions and structured the debate for the next phases of legislative action. In February 2020, the European Commission built on these guidelines through its white paper titled “On Artificial Intelligence: A European Approach to Excellence and Trust.” The white paper announced upcoming regulatory action and presented the key elements of the future framework. Among these key elements was the risk-based approach suggesting that mandatory legal requirements—derived from the ethical principles—should be imposed on high-risk AI systems. 

The white paper was followed by a public consultation process that involved 1,200 stakeholders from various backgrounds—citizens, academics, EU member states, civil society, as well as businesses and industry. This round of consultation informed the drafting of the AI Act. The submitted comments prompted the European Commission to abandon the contemplated sectoral approach for a broader and simpler approach: Under the proposed act, all high-risk applications of AI wherever they may occur—excluding uses by the military for jurisdictional reasons and certain uses by eu-LISA, the European Agency for the Operational Management of Large-scale IT Systems in the Area of Freedom, Security and Justice—have to comply with heightened obligations.

So, this is where we are now. Following a three-year process in which many stakeholders from a variety of perspectives made their voices heard, the European Commission has proposed a legal framework to regulate AI. The proposal is described as thoughtful and nuanced, but its “when in doubt, regulate” approach clashes with the hands-off approach the United States has traditionally taken toward technology. Even though the framework provides for regulatory sandboxes, it might be perceived—from an American standpoint—as inhibiting innovation.

The U.S. government has made international cooperation one of the key strategic pillars of its National Artificial Intelligence Initiative, so it will be important to understand what the AI Act is all about.

Defining Artificial Intelligence Under the Act 

The European Commission chose not to define AI per se but to define AI systems instead. And it did so using a hybrid approach. First, in the body of the proposed regulation at Article 3, it provides an expansive (and somewhat vague) definition of AI systems that is strongly influenced by the Organization for Economic Cooperation and Development’s definition:

“[A]rtificial intelligence system” (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with[.] 

The European Commission sought to clarify the definition and provide legal certainty about the scope of the act by enumerating, in Annex I, the computer science techniques and approaches that would be regulated. Under Article 4 of the proposal, the commission can amend and update this list as technology evolves. The current list of techniques and approaches under Annex I reads as follows:

  1. Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods, including deep learning; 
  2. Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; 
  3. Statistical approaches, Bayesian estimation, search and optimization methods.

This enumeration of AI techniques falling within the material scope of the proposed regulation has caused a bit of a stir. For example, some observers claimed that the EU was “proposing to regulate the use of Bayesian estimation”—Bayesian estimation being, first and foremost, the application of a mathematical theorem—to decry the overbreadth of the proposed regulation.

While critiques of the breadth of the act’s definition are not totally without merit, since that breadth can create uncertainty for developers of AI systems, it’s worth noting that a technology falling within the scope of the proposal will not necessarily be subject to novel legal obligations. Only AI systems that pose an increased level of risk according to the commission’s criteria will be subject to new legal requirements. For example, spam-filtering tools that rely on Bayesian statistics won’t face new obligations; Bayesian networks used for triaging patients in the emergency room, however, likely would.
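To make the scope question concrete, here is a minimal sketch, assuming a toy dataset and the scikit-learn library, of the kind of “Bayesian estimation” technique listed in Annex I: a naive Bayes spam filter. It fits the act’s definition of an AI system because it generates predictions for a human-defined objective, yet as a minimal-risk application it would face no new obligations.

```python
# A minimal, illustrative sketch of a "Bayesian estimation" technique from
# Annex I: a naive Bayes spam filter. The messages and labels are toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "Win a free prize now",           # spam
    "Meeting rescheduled to Monday",  # not spam
    "Claim your free reward today",   # spam
    "Lunch at noon tomorrow?",        # not spam
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

# Turn text into word counts, then fit a naive Bayes classifier, which
# applies Bayes' theorem under a word-independence assumption.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)
model = MultinomialNB().fit(X, labels)

# Score a new message: the output is a probability, i.e., a "prediction"
# in the sense of the act's definition of an AI system.
new_message = vectorizer.transform(["Free prize waiting for you"])
print(model.predict_proba(new_message))
```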

These distinctions illustrate the European Commission’s risk-based approach. Such a gradation of risks is the core idea of the proposed regulation. This tiered approach, first proposed in Europe by Germany, is also reflected in the United States’s Algorithmic Accountability Act of 2019 and in the Canadian Directive on Automated Decision-Making. The emergence of similar approaches in other countries suggests that risk-based regulation of AI may become the norm on the international scene.

Brussels’s risk-based approach to regulate AI is discussed in further detail below.

A Risk-Based Approach 

The AI Act is guided by the EU’s underlying idea that developing trustworthy technologies will prompt the global uptake of AI in Europe. For the European Commission, building trust requires the proper protection of people’s safety and fundamental rights, which can be achieved by establishing boundaries around why and how AI systems are used. However, these boundaries must not be so burdensome that they hamper the very innovation they aim to promote. 

Preserving individual safety and fundamental rights without overly inhibiting innovation in AI is a difficult balance to strike. The AI Act attempts to find the middle ground by adopting a risk-based approach: It bans specific unacceptable uses of AI, heavily regulates uses that carry significant risks, imposes targeted transparency duties on a third set of uses, and says nothing—except encouraging the adoption of codes of conduct—about uses that pose little or no risk.

The gradation of risks is represented as a four-level pyramid—unacceptable risk, high risk, limited risk and minimal risk (see Figure 1). While the nuances of the approach are extensive, understanding the rationale behind each tier is crucial: It “is the most important part,” as Lucilla Sioli, director for Artificial Intelligence and Digital Industry at the European Commission, has described it.

Figure 1. The AI Act’s hierarchy of risks. 

Minimal Risk: The base of the pyramid concerns the technologies that are of little or no risk. Every existing AI system that is not explicitly discussed in the proposal falls into this category. The commission has stated that it encompasses “the vast majority of AI systems currently used in the EU.” 

These technologies, such as spam filters or AI-enabled video games, won’t be subject to new legal requirements. However, even though these AI systems won’t be formally regulated under the act, Article 69 may still shape their development. This provision strongly encourages the drawing up of codes of conduct to govern these technologies. The commission hopes that putting these soft-law regimes in place could foster the voluntary application of principles such as transparency, human oversight and robustness that are otherwise applicable only to high-risk AI systems.

Limited Risk: This porous layer encompasses some technologies that are high risk and some that aren’t. The defining characteristic of AI systems that fall into this category is that they raise certain issues in terms of transparency and thus require special disclosure obligations. 

There are three types of technologies that require such special transparency requirements: deep fakes, AI systems that are intended to interact with people, and AI-powered emotion recognition systems/biometric categorization systems. 

Article 52 of the proposed regulation grants people living in the European Union the right to know if the video they are watching is a deep fake, if the person they’re talking to is a chatbot or a voice assistant, and if they are subject to an emotion recognition analysis or a biometric categorization made by an AI system. Therefore, limited-risk AI systems must be transparent about their artificial nature. 

However, there are some exceptions. The transparency obligations do not apply to AI systems authorized by law to detect, prevent, investigate or prosecute criminal offenses. But emotion recognition systems are not excluded, and disclosure of their use is always mandatory. For many observers, emotion recognition systems, which are thought to rely on flawed science, should simply be banned; the fact that they remain subject only to transparency requirements is not enough.

High Risk: The technologies that fall into this category will be subject to a range of new and extensive obligations.

There are two kinds of high-risk AI systems. The first category covers those embedded in products that are already subject to third-party assessment under sectoral legislation and serve as safety components for said products. This includes safety components for machinery, medical devices or toys. These systems will be regulated by sector-specific legislation, which will be amended to include the obligations provided for in the proposed regulation. As such, an AI system’s compliance with the sectoral legislation will entail compliance with the AI Act. 

The second category focuses on AI systems that are not embedded in other products. The proposed regulation deems these stand-alone systems high risk when they are used in certain areas. The list of areas, which can be amended under Article 7, includes:

  • Biometric identification and categorization of natural persons 
  • Management and operation of critical infrastructure (such as supply of water, gas, heating and electricity)
  • Education and vocational training 
  • Employment, workers’ management and access to self-employment 
  • Access to/enjoyment of essential private services and public services and benefits (like credit and emergency first response services)
  • Law enforcement
  • Migration, asylum and border control management 
  • Administration of justice and democratic processes 

To mitigate the risk posed by these systems, the proposal puts a Conformité Européenne (CE) marking process into place. The CE mark—a logo found on many products traded on the European market—is affixed to a product once it has been assessed as meeting the EU’s high safety, health and environmental protection requirements. High-risk AI systems will need to carry this mark before entering the European market. And to get the mark, they will have to comply with five requirements heavily inspired by key principles from the aforementioned ethics guidelines. These obligations are summarized below.

Data and Data Governance: High-risk AI systems must be developed using quality datasets, including those used for training, validating and testing the algorithm. Concretely, this quality requirement means that data must be relevant, representative, free of errors and complete. Good data management practices, such as paying particular attention to possible biases, data gaps and other data shortcomings, are also mandatory.

Transparency for Users: People who develop high-risk AI systems (“providers” under the proposed regulation) must disclose certain types of information to ensure proper use of AI systems. For example, providers must disclose information about the characteristics, capabilities and limitations of the AI system; the system’s intended purpose; and information necessary for its maintenance and care. 

Human Oversight: High-risk AI systems must be designed to be overseen by humans. Importantly, this does not mean that people must have a precise understanding of how AI systems—often described as black boxes—come to a particular decision. Instead, the focus is on an individual’s capacity to understand the main limitations of AI systems and ability to identify such shortcomings in a particular system. The overseeing duties include watching for automation bias, spotting anomalies or signs of dysfunction, and deciding whether to override an AI system’s decision or pull the “kill switch” if a system poses a threat to people’s safety and fundamental rights.

Accuracy, Robustness and Cybersecurity: High-risk AI systems must achieve a level of accuracy, robustness and cybersecurity proportionate to their intended purpose. Providers of AI systems will be obligated to communicate accuracy metrics to those who use their AI systems. Backup or fail-safe plans to ensure sufficient robustness will also be required, as will technical solutions to prevent cybersecurity incidents such as data poisoning. 

Traceability and Auditability: Providers of high-risk AI systems must establish technical documentation containing information necessary to assess their compliance with the other requirements mentioned above. The extensive list of things that must be documented—like data management practices and risk management systems—can be found in Annex IV of the AI Act. Moreover, automatic recording of events (logs) is mandatory under the proposal.
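The proposal does not prescribe a particular log format, but a minimal sketch, with hypothetical field names and file path, can illustrate the kind of automatic event recording the traceability requirement contemplates: each decision is appended to a log with a timestamp, the model version and the inputs needed to reconstruct it later.

```python
# Hypothetical sketch of automatic event logging for a high-risk AI system.
# The field names and file path are illustrative; the act requires that
# events be recorded automatically but does not mandate a specific format.
import json
import time
from pathlib import Path

LOG_FILE = Path("ai_system_events.log")  # assumed location, append-only

def log_prediction(model_version: str, input_features: dict, output: dict) -> None:
    """Append one prediction event to the audit log."""
    event = {
        "timestamp": time.time(),        # when the decision was made
        "model_version": model_version,  # which model produced it
        "input": input_features,         # data needed to reconstruct the decision
        "output": output,                # the system's prediction or score
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

# Example: record a (fictional) triage recommendation for later auditing.
log_prediction(
    model_version="triage-model-1.2.0",
    input_features={"age": 54, "symptom_code": "R07.4"},
    output={"priority": "urgent", "score": 0.87},
)
```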

Providers can comply with these requirements through a self-assessment procedure, except for remote biometric identification systems, which have to go through a more stringent process. Once the compliance assessment is done, the provider of an AI system completes an EU declaration of conformity and then can affix the CE marking of conformity and enter the European market. 

Compliance is not static: The proposed regulation requires providers of high-risk AI systems to put in place a postmarket monitoring process that actively evaluates the system’s compliance throughout its life cycle.

Unacceptable Risk: This category of systems is regulated by Article 5, which prescribes bans on certain uses of AI. Four types of technologies fall into this category: social scoring, dark-pattern AI, manipulation and real-time remote biometric identification systems.

The prohibition of social scoring seems to be a direct charge against China-style AI systems that are said to monitor almost every aspect of people’s lives—from jaywalking habits to buying history—to assess people’s trustworthiness. Western countries have incorrectly depicted China’s social credit system, as the technologies currently in use are “nowhere close to Black Mirror fantasies.” But this is beside the point. The aim of this ban is symbolic. By stating that public authorities cannot engage in AI-powered assessments of people’s trustworthiness, the EU has made it clear that its vision of AI is one that protects fundamental rights.

Another outright ban concerns certain dark-pattern AI systems. Under the proposal, the EU will prohibit technologies that deploy subliminal techniques beyond a person’s consciousness to materially distort the person’s behavior in a manner likely to cause psychological or physical harm. For example, Article 5(1)(a) would prohibit the use of an inaudible sound played in a truck driver’s cabin to push the driver to drive for more hours than is healthy and safe.

The proposal also bans what can be described as “manipulation.” The proposal describes this as AI systems that “exploit[] any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm.” A doll with an integrated voice assistant that encourages a minor to engage in progressively dangerous behavior would be prohibited under Article 5(1)(b).

Lastly, law enforcement’s use of real-time remote biometric identification systems—facial recognition technologies used for identification purposes—in publicly accessible spaces is prohibited under Article 5. Despite being in the unacceptable risk category, this is not an outright ban. Rather, it’s part of a broader political compromise, which is discussed below. 

So, this is the risk-based approach à la European Commission. As MacCarthy and Propp have pointed out, premarket conformity assessments and postmarket monitoring duties are unlikely to be copied in the United States. However, since the proposed regulation applies to every high-risk AI system placed on the European market and to every high-risk AI system whose output is used in the Union, the requirements are likely to have significant impacts on American tech developers.

Indeed, due to the Brussels effect—a phenomenon by which the European Union seeks to apply its own regulations to foreign actors through extraterritorial means—American tech developers will have to comply with European rules in many instances. Failing to do so could lead to considerable monetary penalties. Violations of the data governance requirements or noncompliance with the unacceptable-risk prohibitions can lead to fines of up to 30 million euros or 6 percent of a business’s total worldwide annual turnover. Noncompliance with other provisions of the AI Act can carry a penalty of up to 20 million euros or 4 percent of total worldwide annual turnover.

The Facial Recognition Compromise 

Remote biometric identification is a complicated matter. It mainly concerns facial recognition, but it could also include voice or gait recognition for identification purposes. The European Commission considered a five-year moratorium on the use of such technologies in public places when initially drafting its February 2020 white paper but ultimately backed off. Instead, the commission suggested a broad European debate on the specific circumstances, if any, that might justify the use of facial recognition technologies. 

France, Finland, the Czech Republic and Denmark submitted remarks during this debate arguing that using facial recognition to identify people in public spaces might be justified for important public security reasons and should therefore be allowed under strict legal conditions and safeguards. However, members of the public who replied to the online survey—mainly European citizens—opposed this justification and favored a ban on its use in public spaces. The public was “very vocal on this issue[,]” and these sentiments were shared by other key figures from civil society, academia and the political sphere.

The commission ultimately decided to compromise, choosing to heavily regulate remote biometric identification systems without going for an outright prohibition. This solution builds on the existing legal framework governing facial recognition. The General Data Protection Regulation (GDPR) generally prohibits the processing of biometric data for identifying people but allows certain exceptions provided in Article 9 (consent or substantial public interest in particular). Article 10 of the Law Enforcement Directive requires that uses of biometric data for identifying people be authorized by member states and, except for one narrow situation, aimed at protecting vital interests.

Now, with these provisions in mind, the additional ways in which the AI Act proposes to regulate remote biometric identification systems become clearer. There are two distinct levels of regulation.

The first level of regulation applies to all remote biometric identification systems. These technologies must comply with the high-risk requirements discussed previously. However, the compliance assessment process they must go through is more stringent than the one required for other stand-alone AI systems. Indeed, any remote biometric identification system that is to be used in the European Union—by either private businesses or state actors—must go through a third-party conformity assessment or comply with harmonized European standards that are to be published later on. These systems will also be subject to ex post surveillance requirements.

The second level of regulation concerns a narrower use of remote biometric identification systems: law enforcement’s real-time use in publicly accessible spaces. These systems—which can capture an image, compare it to an existing database and identify the person in the image almost instantly—are confined to the unacceptable-risk category. But this prohibition applies only to law enforcement; private entities may use such systems if they adhere to the strict requirements detailed above. Even for law enforcement, there are some exceptions. The commission identifies “three exhaustively listed and narrowly-defined situations” in which real-time biometric identification systems can be used in public places for law enforcement purposes. In these situations, law enforcement must receive prior authorization from a judicial authority or an independent administrative authority—but emergency situations can justify obtaining the authorization only during or after the use.

Make no mistake, these narrowly defined exceptions cover a rather broad range of situations. The first exception arises when the use of the system is necessary to aid in specific investigations, like those related to missing children. The second arises when the technology is necessary to prevent a specific, substantial and imminent threat to people’s lives or a terrorist attack. The third concerns the detection, localization, identification or prosecution of a perpetrator or suspect of one of the 32 offenses listed in Article 2(2) of the Council Framework Decision on the European arrest warrant. Such offenses range from environmental and computer-related crimes to human trafficking, terrorism and rape.

As the bill heads to the European Parliament for further debate, it will be interesting to follow the evolution of the provisions regarding remote biometric identification. The bill could undergo significant amendments on the way to its adoption. Key voices, like that of Wojciech Wiewiórowski, the European Data Protection Supervisor, have called for a moratorium on such uses in public spaces, and members of the European Parliament have gone even further, asking for an outright ban on all kinds of remote biometric identification technologies.

In the meantime, however, this compromise makes the EU’s claim to be the champion of fundamental rights protection in the field of AI harder to sustain. According to Europe policy analyst Daniel Leufer, the narrative that the EU is the only jurisdiction regulating risky AI technologies is erroneous. Several U.S. cities, including Portland, Boston and San Francisco, have banned city departments’ and agencies’ use of facial recognition in public places.

The International Influence of European Regulations

When it comes to technology, Europe has made no secret of its desire to “export its values across the world.” And as we have seen with the GDPR—the European law that has quickly become the international standard in terms of data protection—these efforts are far from in vain. 

Since the GDPR went into full effect in 2018, the law has significantly shaped how data protection is handled around the world. Anu Bradford, who has written a book explaining how the EU came to “rule the world,” discusses how nearly 120 nations have adopted privacy laws inspired by the EU data protection regime. She also explains how users of major internet services and platforms such as Google, Netflix, Airbnb or Uber—wherever they are located in the world—end up being governed by European data protection laws, because these companies have adopted single global privacy policies complying with the GDPR to manage all of their users’ data.

With the proposed AI Act, the European Union seems to want to replicate the same kind of regulatory influence it achieved with the GDPR. The extraterritorial reach of the proposal illustrates the European Commission’s hegemonic aims: The proposed regulation covers providers of AI systems in the EU, irrespective of where the provider is located, as well as users of AI systems located within the EU, and providers and users located outside the EU “where the output produced by the system is used in the Union.” 

The international community should not let Europe write the rules that govern technology all by itself. The Artificial Intelligence Act is still very much at the beginning of its journey. It will take at least two to three years for the European Parliament, the European Commission and the Council of the European Union to agree on a final version of the proposed regulation, and another two years will then have to pass before it begins to apply.

During this period, the United States should play an active role in helping Europe to develop a balanced and nuanced regulatory framework for AI. In December 2020, the European Union greeted the Biden administration with an EU-U.S. agenda proposing that the EU and the U.S. “start acting together on AI” to develop regional and global standards based on a shared belief in a human-centric approach. 

The invitation should not be missed.

Author: Eve Gaumond

This post first appeared in Lawfare. Read the original article.
