
The EU’s draft AI Regulation: key considerations for Life Sciences companies

On 21 April 2021, the European Commission published the long-awaited proposal for a Regulation on Artificial Intelligence[1] (“AI Regulation”). The proposed AI Regulation introduces a first-of-its-kind, comprehensive, harmonized regulatory framework for Artificial Intelligence, backed by significant turnover-based financial sanctions. This article provides an overview of the AI Regulation and summarizes the key points of interest for clients in the life sciences and medical devices industries.

SCOPE

The AI Regulation adopts a broad regulatory scope, covering the full lifecycle of AI systems across their development, sale and use, including:

  • placing AI systems on the market;
  • putting AI systems into service; and
  • making use of AI systems.

All those involved in undertaking these activities – whether as a provider, user, distributor, importer or reseller – will be subject to a level of regulatory scrutiny. This also extends to providers and users of AI systems located outside the EU if they are placing AI systems into service in the EU or using outputs derived from AI systems operating in the EU.

ALIGNMENT WITH OTHER REGULATORY FRAMEWORKS

The AI Regulation is intentionally designed to complement and work alongside several existing legal frameworks, in particular: (i) the product safety / CE regime, including as it applies to the regulation of medical devices; and (ii) data protection, under the GDPR. Whilst the proposed text of the AI Regulation is only the first step in a long legislative process, it gives us an important early insight into the model the EU is looking to adopt.

DEFINITION OF AI SYSTEM

The definition of an AI system is intended to be technology-neutral and future-proof, while providing legal certainty. It is based on the OECD’s 2019 Recommendation on Artificial Intelligence and covers:

  • Software;

  • Developed with one or more of the specified techniques and approaches in Annex I to the AI Regulation (which the Commission can amend over time through delegated acts). Currently these techniques include:
    • Machine-learning approaches;
    • Logic- and knowledge-based approaches; and
    • Statistical approaches;
  • Which can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.
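
To get a sense of how broadly this definition can reach, consider the following minimal sketch. It is purely illustrative and entirely our own (it uses the open-source scikit-learn library and its built-in dataset, which the AI Regulation of course does not reference). Even these few lines arguably fall within the definition: software, developed with a machine-learning approach, generating predictions for a human-defined objective.

```python
# Purely illustrative: a few lines of conventional machine learning that
# would appear to satisfy the proposed definition of an "AI system" --
# software, developed with a machine-learning approach (Annex I),
# generating outputs (predictions) for a human-defined objective.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)           # human-defined objective: tumour classification
model = LogisticRegression(max_iter=5000).fit(X, y)  # machine-learning approach
print(model.predict(X[:5]))                          # generated outputs: predictions
```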

MEDICAL DEVICES AND HEALTHTECH

Any AI system that constitutes a regulated medical device, or that is used as a safety component of a medical device, is classified as a “high-risk AI system” (see further detail below). This is particularly pertinent given the huge growth in recent years of ‘software as a medical device’, and the explicit reference to software in the definition of a medical device under the MDR and the IVDR.[2]

A developer of such a device – referred to in the AI Regulation as the “provider” – has a range of obligations, which we summarize below under “Requirements applicable to high-risk AI systems”.

Importantly, however, the EU’s proposal is for the requirements applicable to AI systems to be checked as part of the existing conformity assessment procedures under the incoming Medical Devices Regulation. The AI conformity assessment will therefore be an extension of the medical devices assessment regime, the EU’s stated aim being to ensure consistency, avoid duplication and minimise additional burdens.

Further, the AI Regulation establishes a comprehensive regime for both post-market monitoring and market surveillance, and provides that the market surveillance authority responsible for medical devices in each Member State will also be competent for market surveillance of AI in relation to those devices. In addition, the AI Regulation requires the creation of a national supervisory authority responsible for providing guidance and advice on the implementation of the AI Regulation across all sectors.

PHARMACEUTICAL INDUSTRY

AI systems already have myriad applications within the pharmaceutical industry, including in drug discovery, the interpretation and analysis of clinical data, and the selection of patients for clinical trials.

As well as AI-based medical devices, some other uses of AI by the pharmaceutical industry may qualify for “high-risk” status based on the list of fields set out in Annex III to the AI Regulation, for example where the processing of patient data constitutes the “biometric identification and categorisation of natural persons”. Further, pharmaceutical companies, in common with all employers, will stray into high-risk territory when using AI for recruitment or to make decisions on promotion, termination and employee management.

Otherwise, AI solutions used by the pharmaceutical industry may instead qualify for “lower-risk” status which, as explained below, triggers a lighter-touch regime centred on transparency requirements.

PROHIBITED AI PRACTICES

The AI Regulation prohibits specific AI practices (rather than AI systems) which are considered to create an unacceptable risk, for example by violating fundamental rights. Examples include:

  • AI-based dark patterns: AI systems that deploy subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behavior in a manner that causes, or is likely to cause, that person or another person physical or psychological harm; and
  • AI-based micro-targeting: AI systems that exploit the vulnerabilities of a specific group of persons in order to materially distort the behavior of a person belonging to that group in a manner that causes, or is likely to cause, that person or another person physical or psychological harm.

HIGH RISK AI SYSTEMS

High-risk AI systems are permitted provided that the strict risk-mitigation controls set out in the AI Regulation are in place. Much of this part of the AI Regulation follows the approach taken in existing EU legislation to manage product safety risk.

(a) The definition of a high-risk AI system: 

High-risk AI systems are defined by a classification model that focuses on the risk associated with the product itself:

  • A first category covers AI systems intended to be used as a safety component of products (or which are themselves a product) covered by EU product safety legislation, such as the Medical Devices Regulation. These systems are listed in Annex II to the AI Regulation.

  • A second category covers stand-alone AI systems whose use may have an impact on fundamental rights. These systems are listed in Annex III, and the list may be expanded in the future to cover other AI systems which the EC considers to present similarly high risks of harm.

(b) Requirements applicable to high-risk AI systems

The key regulatory controls on high risk AI systems fall on providers[3] of the system, as summarised below. 

  • Transparency: High-risk AI systems must be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately. Clear user documentation and instructions must be provided to the user, containing information on the identity of the provider, the characteristics, capabilities and limitations of the AI system, and the human oversight measures.

  • Security: A high level of accuracy, robustness and security must consistently be ensured throughout the high-risk AI system’s lifecycle. Serious incidents and malfunctioning of the high-risk AI system must be reported to the market surveillance authorities of the Member State where the incident occurred.

  • Accountability:
    • Complete and up-to-date technical documentation must be maintained (and drawn up by providers before the placement on the market/putting into service) to demonstrate compliance with the AI Regulation. The outputs of the high-risk AI system must be verifiable and traceable throughout the lifecycle, including the automatic generation of logs (which must be kept by providers, when under their control).
    • The system must be registered in an EU database on high-risk AI systems before being placed on the market or put into service.
    • Where no importer can be identified, providers established outside of the EU shall appoint an authorized representative.
  • Risk management: A risk management system must be established, implemented, documented and maintained as part of an overall quality management system. Risk management must comprise a continuous iterative process run throughout the entire lifecycle of the system.

  • Testing: Any data sets used to support training, validation and testing must be subject to appropriate data governance and management practices, and must be relevant, representative, free of errors and complete, with the appropriate statistical properties to support the system’s use (a simple illustrative sketch of such checks follows after this list).

  • Human review: AI systems must be designed and developed in such a way that there is effective human oversight. This element of human oversight echoes Article 22 of the GDPR on automated decision-making, which provides for a right to obtain human intervention.
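
As flagged under “Testing” above, here is a simple sketch of the kind of pre-training checks a provider might run. It is purely illustrative and entirely our own: the function name, thresholds and use of the pandas library are assumptions on our part, not anything prescribed by the AI Regulation.

```python
import pandas as pd

def basic_data_governance_checks(df: pd.DataFrame, label_column: str) -> None:
    """Illustrative pre-training checks loosely inspired by the data
    governance requirements (completeness, absence of errors, and
    appropriate statistical properties)."""
    # Completeness: flag any missing values before training.
    missing = df.isna().sum()
    assert missing.sum() == 0, f"Missing values found:\n{missing[missing > 0]}"

    # Representativeness (a crude proxy): check the label distribution
    # for severely under-represented classes.
    distribution = df[label_column].value_counts(normalize=True)
    assert distribution.min() > 0.05, "Severely under-represented class detected"

# Example usage with a toy dataset.
df = pd.DataFrame({"age": [34, 57, 41, 68], "outcome": [0, 1, 0, 1]})
basic_data_governance_checks(df, "outcome")
```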

Importers, distributors and users of high-risk AI systems are subject to more limited regulatory requirements. Most notable for users of high-risk AI systems are the requirements to: (i) use the systems in accordance with the instructions given by the provider; (ii) ensure all input data is relevant to the intended purpose; (iii) monitor operation of the system and inform the provider or distributor of suspected risks, serious incidents or malfunctioning; and (iv) keep the logs automatically generated by the high-risk AI system, where those logs are within their control.
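
The log-keeping duties on providers and users are, at heart, an engineering requirement for traceability. The sketch below is a minimal illustration of what automatic log generation for a high-risk AI system might look like in practice; the field names and format are our own assumptions, and the AI Regulation does not prescribe any particular schema.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative only: structured, automatically generated logs so that
# the system's outputs remain verifiable and traceable over its lifecycle.
logger = logging.getLogger("high_risk_ai_audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("ai_audit.log"))

def log_inference(model_version: str, input_summary: dict, output: dict) -> None:
    # One timestamped, machine-readable record per model decision.
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_summary": input_summary,  # summarize rather than log raw patient data
        "output": output,
    }
    logger.info(json.dumps(record))

# Example: record a single prediction made by the system.
log_inference("v1.2.0", {"n_features": 12}, {"prediction": "refer_to_clinician"})
```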

(c) Conformity assessments and notified bodies / notifying authorities 

As noted above, the AI Regulation includes a conformity assessment procedure which must be followed for high-risk AI systems. Where the high-risk AI system is already regulated under product safety rules, a simplified conformity assessment regime applies, effectively as an extension of the existing regime.

RULES FOR OTHER (LOW-RISK) AI SYSTEMS

AI systems which are not deployed for a prohibited practice and fall outside the scope of a high-risk system will be subject to a number of basic controls that apply to all AI systems, in particular:

  • If the AI system is intended to interact with an individual, the provider must design the system to ensure the individual is aware they are interacting with an AI system, except where this is obvious or it takes place in the context of the investigation of crimes (see the illustrative sketch after this list);
  • If the AI system involves emotion recognition or biometric categorization of individuals, the user must inform the individual that this is happening;
  • If the AI system generates so-called ‘deep fakes’, the user must disclose that the content has been artificially created or manipulated; and
  • Codes of Conduct are encouraged, to steer those providing and using lower-risk AI systems towards compliance with the letter and spirit of the rules applicable to high-risk AI systems.
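
By way of illustration of the first of these transparency duties, a provider might surface the required disclosure along the following lines. This is a hypothetical sketch of ours; the AI Regulation mandates the outcome (awareness), not any particular wording or implementation.

```python
def start_chat_session(bot_name: str) -> None:
    # Up-front disclosure so the individual knows they are interacting
    # with an AI system rather than a human agent.
    print(f"You are chatting with {bot_name}, an automated AI assistant, not a human.")

start_chat_session("SupportBot")
```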


GOVERNANCE, ENFORCEMENT AND SANCTIONS

  • European Artificial Intelligence Board

The AI Regulation provides for the establishment of a European Artificial Intelligence Board (“EAIB”) to advise and assist the EC in relation to matters covered by the AI Regulation, in particular to: (i) contribute to the effective cooperation of the national supervisory authorities and the Commission; (ii) coordinate and contribute to guidance and analysis by the Commission, the national supervisory authorities and other competent authorities on emerging issues; and (iii) assist the national supervisory authorities and the EC in ensuring the consistent application of the AI Regulation. The EAIB construct is clearly modelled on the tasks and responsibilities of the European Data Protection Board (EDPB) under the GDPR.

  • National competent authorities

Member States must designate national competent authorities and a national supervisory authority responsible for providing guidance and advice on implementation of the AI Regulation, including to small-scale providers.

  • Enforcement 

The AI Regulation requires Member State authorities to conduct market surveillance and control of AI systems in accordance with the product safety regime in Regulation (EU) 2019/1020. Providers are expected to cooperate, including by providing full access to training, validation and testing datasets.

If market surveillance gives an authority reason to believe that an AI system presents a risk to the health or safety of persons, or to the protection of their fundamental rights, the authority shall carry out an evaluation of the AI system and, where necessary, require corrective action.

  • Sanctions 

Infringement of the AI Regulation is subject to monetary sanctions of up to €10 million to €30 million (depending on the nature of the infringement) or, if higher, a turnover-based fine of between 2% and 6% of global annual turnover.
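
To illustrate the “if higher” mechanic: the most serious infringements (such as engaging in a prohibited AI practice) attract the top tier, so a company with a global annual turnover of €2 billion could face a fine of up to €120 million (6% of €2 billion), as that figure exceeds the €30 million fixed cap.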

The AI Regulation is enforced by supervisory authorities and does not provide for a complaint system or direct enforcement rights for individuals.

Footnotes: 

[1] Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021) 206 final).

[2] i.e. the Medical Devices Regulation (Regulation (EU) 2017/745) and the In Vitro Diagnostic Medical Devices Regulation (Regulation (EU) 2017/746).

[3] A provider is defined as a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge.
