The EU AI Act sets requirements depending upon the intensity and scope of the risks that AI systems can generate, in light of the seven principles that underlie the Regulation and that we mentioned in our previous paper, in particular, but not only, with regard to fundamental rights.
This risk-based approach led the Commission to make a distinction between (i) a certain number of activities that are deemed unacceptable and that must be prohibited, (ii) systems considered high risk, which are at the core of the Regulation, and (iii) transparency obligations for certain AI models and systems, notably general purpose AI (GPAI) models.
In this second paper of our series devoted to the EU AI Act, we shall focus on the prohibited practices (Title II) and the transparency obligations for providers and deployers of certain AI systems and GPAI models (Title IV).
The placing on the market, the putting into service or the use of the following AI systems shall be prohibited:
Systems using subliminal techniques (such as audio, image or video stimuli that are beyond human perception) or any manipulative or distorting techniques meant to induce people to engage in unwanted behaviors of which they are not consciously aware. The same applies to systems that exploit the vulnerabilities of a person or of a specific group (such as children, elderly people, persons with disabilities or persons in precarious economic or social conditions) with the objective or the effect of distorting their behavior.
In both instances, the prohibition only applies, however, if the use of the AI system at stake causes or is likely to cause significant harm to the individual, including harm that may accumulate over time.
Although intent will be present in most instances, it is not required; all that matters is the objective impact of such AI systems.
This prohibition should also be read in light of the provisions of Directive 2005/29/EC on unfair commercial practices, bearing in mind that, according to the preamble of the Regulation, common and legitimate practices, notably in the field of advertising, that comply with the applicable law should not in themselves be regarded as constituting harmful manipulative practices.
While advertising is by definition meant to induce a certain behavior in its recipients, one may wonder whether the use of subliminal techniques should not, as a result of Art. 5, be prohibited per se in that industry as well. The way the notion of “significant harm” will be construed will certainly play a key role: may a purchase resulting from the use of subliminal techniques in advertising (or in the gaming industry to induce in-game purchases) be considered a significant harm? Should it depend upon the amount at stake? In a world where the asymmetry of information grows ever wider, transparency in my view becomes more important than ever and would lead me to answer in the affirmative. It is not certain, though, that this will be the outcome of the construction of “significant harm”. Wait and see.
The use of such systems in the context of medical treatment, such as the treatment of a mental disorder or physical rehabilitation, does not, however, fall under that prohibition, provided, obviously, that these practices are carried out in line with the applicable medical regulatory framework.
These prohibitions refer to the use of biometric data, i.e. personal data resulting from specific technical processing relating to the physical (such as facial), physiological or behavioral characteristics of an individual, to achieve certain goals that are considered unethical and, in that regard, illegal under the Regulation.
The use of such data to infer individuals’ political opinions, trade union membership, religious or philosophical beliefs, race, sex life or sexual orientation is prohibited, all of these being considered special categories of data within the meaning of Art. 9 GDPR which, as a result, deserve special protection (without mentioning the fact that such inferences or correlations based upon biometric data may be questionable from a scientific standpoint on several accounts).
The use of such data for social scoring, i.e. to evaluate or classify people over a certain period of time based upon their social behavior inferred from multiple data points, is further prohibited; provided, however, that such social scoring leads to unfavorable treatment that is either (i) applied in social contexts unrelated to the contexts in which the data was originally generated or collected or (ii) unjustified or disproportionate to the social behavior or its gravity. As a result, and at least in my view, this means that any profiling activity falling under that provision would be prohibited, no matter whether it is carried out in line with the GDPR (which in any case remains doubtful when such processing violates the right to dignity and non-discrimination).
The use of real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement was heavily debated during the negotiations, for fear that such use could get out of hand and lead to a loss of control.
Ultimately, such use may be allowed by Member States (it is therefore up to each Member State to decide) under narrow circumstances and stringent formal requirements that can be summarized as follows:
(i) Circumstances
Real-time remote biometric identification systems may only be used for law enforcement purposes for the following reasons:
Such use by law enforcement is only allowed in publicly accessible spaces. These spaces can be publicly or privately owned. What matters is that they are accessible to an indefinite number of people, regardless of whether certain conditions for access may apply (such as, for instance, a ticket for an event or access to a fitness center or swimming pool). Would access granted through a badge in a privately held company still make it a publicly accessible space if thousands of people can get access to it, including guests? I would tend to answer in the affirmative, but questions are likely to arise as to what is considered a “publicly accessible space”, in particular when it is privately owned.
In these circumstances, the EU AI Act will constitute a lex specialis in respect of Art. 10 GDPR and will serve as the legal basis for the processing of personal data under Art. 8 GDPR. Any other use of a biometric identification system, whether in real time or not, including by authorities, shall always be subject to the requirements set forth in Art. 9 and 10 GDPR (bearing in mind that several data protection authorities have already banned such use).
(ii) Formal requirements
The use of such systems in the above-mentioned circumstances will always be subject to:
The following AI systems, which do not easily fit into one of the above categories, are also prohibited:
The advent of GPTs in 2023 is one of the reasons for the delay in the adoption of the EU AI Act. While several Member States were keen on ensuring that a certain level of control over these systems would find its way into the EU AI Act, others, such as France and Germany, were more reluctant.
Ultimately, Member States reached a compromise, and Title IV now provides for certain transparency obligations for providers and users of such systems in its Art. 51 et seq. In a provision mirroring Art. 27 GDPR, the EU AI Act requires providers of GPAIs established outside of the Union to appoint an authorized representative to act as a contact point for the authorities.
Similarly to the notion of AI systems (see our latest paper), the Commission defines general purpose AI models (GPAI) based upon their functional characteristics, namely models that display significant generality, are capable of performing a wide range of distinct tasks regardless of the way the model is placed on the market, and can be integrated into a variety of downstream systems or applications, including through libraries, APIs or as a direct download. According to the preamble, models with at least one billion parameters should be considered as displaying such significant generality.
While focusing in this Title IV on GPAIs, the EU AI Act first addresses in its Art. 52 chatbots as well as emotion recognition and biometric categorization systems. In both instances, providers must inform the natural persons concerned that they are interacting with, respectively being subject to, such systems and ensure that their data, notably with regards to emotion recognition or biometric systems, is processed in line with the applicable data protection legal framework.
The EU AI Act then turns to GPAIs. It makes a distinction between general purpose AI models (GPAI) in general and those presenting a systemic risk, which are subject to additional obligations.
Generally speaking, providers of GPAIs have to ensure that:
The above requirements will not apply to AI models that are made accessible under a free and open license, such as, for instance, Llama-2, launched by Meta, whose parameters, including the weights, and information on the model architecture are made available.
Even open source licensed GPAIs will however have to comply with the following requirements:
Unless the GPAI at stake presents a systemic risk, in which case no exception to the above obligations shall apply, none of these obligations is imposed when:
It is to be pointed out that the use of such a model for internal processes may not be considered as falling under an exception to Union copyright law, so that exempting such uses from the obligation to provide a copyright policy appears, at first sight, questionable.
(i) Classification
A systemic risk is defined as a risk that has a significant impact on the internal market due to its reach, with actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights or society as a whole, and that can be propagated at scale across the value chain.
A GPAI model shall be classified as displaying systemic risks either if it has high-impact capabilities (i.e. capabilities that match or exceed those of the most advanced GPAIs) or based on a decision of the Commission, taken ex officio or following an alert by the scientific panel (taking into account different criteria such as the quality or size of the training dataset, the number of business and end users, input and output modalities, degree of autonomy and scalability, or the tools it has access to).
A model is presumed to have high-impact capabilities when the cumulative amount of compute used for its training, measured in floating point operations (FLOPs), is greater than 10^25, a threshold that may evolve over time. As an example, it is estimated that ChatGPT was trained on 10^24 FLOPs, meaning that any model significantly more powerful than GPT-3.5 will be considered to bear systemic risk.
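As a purely illustrative aside (and not something provided for in the Regulation), training compute is often approximated in the industry by the rule of thumb FLOPs ≈ 6 × number of parameters × number of training tokens. The short Python sketch below, using entirely hypothetical figures, shows how a provider might compare such an estimate against the 10^25 FLOPs presumption threshold.

# Rough illustration only: estimates training compute with the common
# heuristic FLOPs ~ 6 * parameters * training tokens and compares it to
# the EU AI Act presumption threshold of 1e25 FLOPs.
# The model figures used below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold under the Regulation

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate training compute using the 6 * N * D rule of thumb."""
    return 6 * n_parameters * n_training_tokens

def presumed_high_impact(flops: float) -> bool:
    """True if the estimate meets or exceeds the 1e25 FLOPs presumption."""
    return flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS

if __name__ == "__main__":
    # Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
    flops = estimated_training_flops(70e9, 2e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print("Presumed high-impact capabilities:", presumed_high_impact(flops))

Such a back-of-the-envelope estimate is of course no substitute for the provider’s own assessment and the notification obligations described below.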
Where the provider considers, based upon its own assessment, that the high-impact capabilities requirement is met, it will have to notify the AI Office at the latest two weeks after the requirement is met, or after having found out that it will be met (in particular the FLOP threshold), together with the relevant information. The provider should, however, be entitled to demonstrate that, notwithstanding such threshold, the GPAI at stake does not present a systemic risk due to its specific characteristics. The Commission may nevertheless decide to reject those arguments and consider such GPAI to be of systemic risk. In such cases, the provider may request reassessment of its model every six months based upon objective, concrete and new reasons.
The Commission shall publish a list of GPAIs with systemic risks and keep it up to date.
(ii) Requirements
Taking into account the specific risks posed by such GPAI models, their providers are subject to additional obligations meant to identify and mitigate those risks and to ensure an adequate level of cybersecurity protection, regardless of whether the model is provided on a standalone basis or is embedded in an AI system. These providers shall:
While it is to be hoped that standards leading to a presumption of conformity will emerge over time, the EU AI Act provides that the AI Office shall encourage and facilitate the drawing up of codes of practice at Union level, in particular with regards to the obligations applicable to GPAIs with systemic risks. The preamble provides that these codes should represent a central tool for proper compliance with the obligations foreseen under the Regulation, and that these providers should be able to rely on them to demonstrate compliance.
The goal of these codes would notably be to ensure that (i) these obligations are kept up to date in light of market and technological developments, (ii) the types and nature of systemic risks and their sources are identified, and (iii) the measures, procedures and modalities for the assessment and management of systemic risks, including the documentation thereof, are established.
It would then be up to the Commission to approve such a code or, alternatively, to provide common rules for the implementation of the obligations put upon the providers of GPAIs presenting systemic risks.
While the drafting of such codes should certainly be encouraged, the idea of leaving it, to some extent, up to the providers themselves to draft them, for instance through the Partnership on AI, may also be perceived as an acknowledgment that, given the complexity and opacity of most of these models, the asymmetry of information makes it difficult for outsiders to rule upon those models. Although understandable, this is obviously regrettable, as one may fear that we will end up leaving it largely to the main stakeholders to set their own rules.
In our third paper of this series, we shall focus on high-risk systems.