The use of artificial intelligence (‘AI’) systems in the pharmaceutical and healthcare sectors is growing exponentially, with expectations that using such systems will bring significant benefits to individuals and the community. However, the significant opportunities connected to the use of AI should be considered jointly with the relevant risks, especially with reference to sectors as sensitive as those mentioned above.
For this reason, the Italian Data Protection Authority (‘IDPA’) published a ‘Decalogue for the implementation of national health services through Artificial Intelligence systems’ (‘Decalogue’) to draw attention to the main privacy requirements applicable to the use of AI systems in the context of national healthcare services.
First, the Decalogue highlights that AI systems affecting health - including those affecting the right to receive treatment and the use of healthcare services, medical care, and patient selection systems for emergency care - are classified as ‘high-risk systems’ under the proposed EU AI Act. The Decalogue therefore outlines the main privacy-related issues and obligations arising from the use of AI systems in the healthcare sector.
Although the Decalogue does not contain substantially novel ideas or principles, it clarifies the measures healthcare sector stakeholders must take to use AI in compliance with the GDPR. Moreover, many principles set out in the Decalogue apply to any type of high-risk AI system, regardless of its use in the healthcare sector.
Below is a brief summary of the principles laid down by the IDPA.
1. Processing legal ground
First of all, the IDPA stresses that the appropriate legal basis for using AI systems in national healthcare services is Article 9(2)(g) of the GDPR. Therefore, the processing of personal data must be necessary for reasons of substantial public interest (according to Italian laws or regulations). Such laws or regulations must indicate the categories of data to be processed, the types of operations that may be carried out, the reasons of substantial public interest pursued, and the appropriate measures to be implemented to protect the rights and freedoms of data subjects.
2. The principles of accountability and privacy by design and by default
The IDPA highlights the importance of the principles of accountability, privacy by design, and privacy by default. The use of AI systems in national healthcare services must be consistent with these principles. In particular, stakeholders must ensure the proportionality of data processing with regard to the public interest pursued and embed data protection measures from the design stage right through the lifecycle of the AI technologies concerned.
3. Privacy roles
The concepts of controller and processor are functional concepts, aiming to allocate responsibilities according to the actual stakeholders' roles. Their privacy roles must be allocated based on a factual rather than a formal analysis. To ensure GDPR compliance while using AI systems, it is essential to properly allocate the privacy roles of the concerned stakeholders to assign rights, obligations, and responsibilities.
Regarding national healthcare services, the stakeholders should identify their privacy roles by considering the national AI system in the healthcare sector as a whole, given that this system will be accessible by multiple entities for different purposes. Therefore, an overall view of the data governance framework is needed.
4. The principles of knowability, non-exclusivity and algorithmic non-discrimination
These principles are the three pillars that govern the use of AI systems in performing significant public interest tasks:
- The principle of knowability requires informing the data subject about the existence of a decision-making process based on automated processing operations and the logic on which these operations are based;
- The principle of non-exclusivity requires the decision-making process to include human intervention in order to ensure control over automatic decisions; and
- The principle of algorithmic non-discrimination requires data controllers to take appropriate measures to reduce opacity and errors to avoid possible discrimination deriving from processing either inaccurate health data or data based on incorrect statistical and mathematical procedures.
5. Data Protection Impact Assessment (‘DPIA’)
The IDPA states that a DPIA is indispensable to lawfully use AI systems in national healthcare services, since this involves the systematic and large-scale processing of sensitive data relating to vulnerable individuals.
The DPIA should be carried out at national level, to ensure a comprehensive assessment of all circumstances that may affect the processing of personal data (especially the risks arising from a database including health data of the entire population).
Although the IDPA’s statement refers to national healthcare services, the same conclusion applies to any AI system involving systematic and large-scale processing of patients’ health data.
6. Data quality
Article 5(1)(d) of the GDPR requires organizations to ensure that personal data is accurate and kept up to date. Compliance with this principle by healthcare operators is crucial to protect patients’ interests since the processing of inaccurate data may seriously damage their health and safety.
Hence, the stakeholders must implement appropriate measures to ensure data accuracy to address the risks associated with:
- Using systems with no rigorous scientific validation;
- The lack of control over the data processed; and
- Adopting decisions based on unfitting assumptions.
7. Data integrity and confidentiality
Article 5(1)(f) of the GDPR sets forth the integrity and confidentiality principle according to which organizations must process personal data in a manner that ensures appropriate security of said data. In this regard, the IDPA stresses that the main risks associated with using deterministic and stochastic analysis models based on machine learning techniques derive from possible biases that may cause harmful consequences to data subjects.
For this reason, the IDPA emphasizes that organizations will need to indicate in detail:
- The algorithmic logic used both to train the AI system and to generate its output;
- The checks performed to avoid biases;
- The corrective measures taken to remediate these biases; and
- The risks inherent in deterministic and stochastic analyses.
8. Fairness and transparency
Organizations using AI in the healthcare sector will need to adopt the following measures, in addition to those generally required, to ensure compliance with the fairness and transparency principle:
- Explaining the logic and data processing characteristics on which the AI system is based;
- Clarifying whether healthcare professionals have any liabilities arising from the use of AI;
- Highlighting the diagnostic and therapeutic benefits of using AI;
- Ensuring the intervention of healthcare professionals when using AI systems for treatment purposes; and
- Regulating healthcare professionals’ liability arising from the choice to use AI.
9. Human supervision
To avoid the severe risks associated with training the algorithm on inaccurate data, or with flawed assumptions underlying the system's operation, humans should maintain a central role in both the training phase and the decision-making process.
The IDPA recalls a case concerning the use of an AI system in the US aiming to estimate the health risk of more than 200 million citizens. This system assigned a lower risk to African-American patients with the same health conditions as other citizens since the metric for estimating the health risk was based on the average individual health expenditure.
As shown by this example, the distorted use of AI systems in the healthcare sector may result in serious risks. Therefore, human supervision is necessary to ensure that the functioning of AI systems and their outputs can be verified.
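The proxy-metric problem behind this case can be illustrated with a short, purely hypothetical Python sketch (the class and function names are invented for illustration and do not reflect the actual system or its data): when past healthcare expenditure is used as a stand-in for health need, patients with historically lower access to care receive lower risk scores even when their actual conditions are identical.

```python
# Hypothetical illustration of proxy bias: scoring "risk" by past spending
# rather than by actual health need. Not the real system or real data.
from dataclasses import dataclass


@dataclass
class Patient:
    chronic_conditions: int  # rough measure of actual health need
    past_spending: float     # historical healthcare expenditure


def risk_score_by_spending(p: Patient) -> float:
    """Proxy-based score: ranks patients by past expenditure."""
    return p.past_spending / 10_000


def risk_score_by_need(p: Patient) -> float:
    """Need-based score: ranks patients by their actual conditions."""
    return float(p.chronic_conditions)


# Two patients with identical health needs; one has historically had
# less access to care, and therefore lower past spending.
a = Patient(chronic_conditions=4, past_spending=50_000)
b = Patient(chronic_conditions=4, past_spending=20_000)

# Equal need, yet the spending proxy ranks patient b as lower-risk.
assert risk_score_by_need(a) == risk_score_by_need(b)
assert risk_score_by_spending(b) < risk_score_by_spending(a)
```

The defect is not in the arithmetic but in the choice of metric: the model faithfully ranks expenditure, which is exactly why human review of the assumptions behind the system, as the IDPA urges, is needed before its outputs are relied upon.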
10. Further data protection considerations concerning dignity and personal identity
The IDPA includes final reflections on the importance of using ethics as an interpretative criterion for regulating the use of AI. For instance, ethics should guide organizations in choosing suppliers and business partners that comply with the principles set out in the Decalogue.
In this regard, we expect that compliance by companies producing, distributing, or using AI with data protection legislation will be crucial for their business. These organizations should be able to demonstrate that they are trustworthy from a data protection standpoint. Therefore, implementing a ‘privacy-centric’ strategy may be highly relevant to ensuring business success.
For more information on this development and how DLA Piper can assist you in developing an effective legal compliance management strategy to harness the full potential of AI, please get in touch.