
Uprooting Algorithmic Bias: How life sciences and healthcare organisations can minimise biased outcomes in the use of AI

Artificial Intelligence (AI) is already altering the way we live and work, providing both opportunities to grow and challenges to overcome.  Yet, Australia remains hesitant in its attitudes towards AI.  A recent study surveying how individuals in 31 countries feel about AI revealed that Australians were the most worried, with 69% of respondents (compared to the global average of 52%) saying that they felt nervous about the technology,[1] and only 44% of Australians indicating that they trust AI systems not to discriminate or show bias.[2]  These results provide a useful indication of where the public consciousness sits on one of the biggest risks associated with the use of AI: the occurrence of algorithmic bias.[3]

We recently reported on how the circumstances surrounding the training of AI systems can lead to the occurrence and perpetuation of bias in the operation of AI algorithms (using the examples of sex and racial bias), which may become evident in an AI system’s outputs or outcomes, and how this presents issues in the application of AI in the life sciences and healthcare sectors.  However, the problem is not limited to these sectors: issues pertaining to bias are also relevant to the application of certain AI systems in the criminal justice system,[4] education,[5] recruitment,[6] and child protection,[7] to name just a few.

The existence of algorithmic bias is an issue not only for those making use of AI (such as developers and user organisations), but also for those who are impacted by the outcomes of AI-driven decision making. In this article, we take a deep dive into how algorithmic bias can arise, as well as the practical steps that organisations can take to resolve algorithmic bias, including where AI systems have been licensed from third party providers.

How Does Algorithmic Bias Arise?

Ideally, the goal of any AI system should be to produce useful output that is as accurate, fair, ethical and equitable as possible; but sometimes AI systems fall short of these expectations, producing results that may be inaccurate and/or unfair.  Where this occurs, it is generally suggestive of some sort of bias within the relevant algorithm.  That bias can be caused by a variety of factors, including poor data quality and issues with data collection, idiosyncrasies in the underlying algorithm and its operational parameters, or even human involvement in the process that the AI system supports (for example, selection bias in the procurement of training data sets, the choice of algorithms used to operationalise the relevant AI system, or issues with the interpretation of AI outputs).

The most common cause of bias in AI systems, however, is where the data used to train an AI system are in some way inadequate.  Generally, data used to train an AI system are procured from a raft of different sources, including, for example, being scraped indiscriminately from the internet.  The data are then fed into the AI system and a learning algorithm is implemented in order to teach the system to generate (with varying degrees of user guidance) the desired output, based on statistical connections and inferences that the system draws from its training data and the relevant user input.

However, the fatal flaw in this process is that, because an AI system and the quality of its output are generally only as good as the data the system was trained on, an insufficient quantity or breadth of training data will likely compromise that quality.  Further, if there is a lack of ‘diversity’ in the data (that is, the data do not incorporate demographics diverse enough for the AI system to make accurate or appropriate predictions in accordance with its overriding purpose), the AI system may generate output that is biased.  If the training data are reflective of the social and cultural biases of human society generally, those biases will likely be reproduced in the relevant AI system’s output.

Practically, the risk that this presents is not only legal in nature (possibly exposing organisations to liability where, for example, an AI system is deployed to make decisions that ultimately discriminate against or affect the rights of certain (classes of) persons), but is also reputational and existential.  The unaddressed perpetuation of bias in an AI system can lead to serious outcomes, such as reduced access to products and services and poorer outcomes for certain demographics, all of which will likely affect the willingness of people to engage with AI systems and the organisations that deploy or use them.  For example, an AI system designed to predict heart disease risk that was trained on majority male data (acknowledging that the substantial bulk of medical research to date is based on male anatomy and presentation)[8] could fail to correctly predict heart disease risk in women (either by overestimating or underestimating it), potentially leading to poorer health outcomes for women ‘overlooked’ by the AI system as a result of its bias.

How Can Organisations Resolve Algorithmic Bias?

The importance of identifying and resolving algorithmic bias is clear.  The Australian Federal Government’s recent discussion paper on the safe and responsible use of AI comments that those designing or implementing AI systems need to design, test and validate the systems to correct for bias and potential harms.[9]

However, before any mitigation strategies can be adopted, it is crucial to understand the relevant AI model and ensure that it is, to the greatest extent possible, explainable, meaning that human users can comprehend how the system works and why the system produces certain results or makes certain decisions, as well as the impact of potential biases on those results.[10]  This may be easier said than done in the case of complex AI systems, such as neural networks (AI systems that process data using interconnected groups of ‘nodes’ analogous to neurons in the human brain), where the internal workings and decision-making processes may not be easily understandable or may even be completely opaque.  Nonetheless, as explainable AI facilitates trust and confidence, it is important for organisations to understand any AI product they use or develop, especially how it learns, draws inferences and makes predictions based on its training data.  It is through this understanding that a user will also be able to identify any algorithmic bias and then seek to mitigate or resolve it.
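
While full explainability may be out of reach for complex models, even simple diagnostics can help a user see which inputs a model leans on.  The Python sketch below (the data set, column names and model choice are hypothetical assumptions) uses scikit-learn’s permutation importance to estimate how much each feature, including a sensitive attribute, drives a classifier’s predictions.

```python
# Minimal sketch: probing which input features most influence a trained model.
# The data set, column names and model choice are hypothetical assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("patients.csv")  # hypothetical data set (categoricals already encoded as numbers)
X = df[["age", "sex", "blood_pressure", "cholesterol"]]
y = df["high_risk"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy; a large drop
# for a sensitive attribute suggests the model leans heavily on it.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in zip(X.columns, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```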

Because every AI system is trained on its own unique data set, the appropriate strategies to effectively mitigate bias will likely differ depending on the system in question (its size, complexity, capability and explainability), and a ‘one size fits all’ approach may not deliver the best outcome.  That being said, although it may not be possible to completely eliminate algorithmic bias, where an organisation is concerned about algorithmic bias and its consequences, or indeed identifies bias in its AI system’s output, the following practical steps may be taken to investigate, mitigate and (hopefully) resolve that bias.

Define Fairness Criteria

Organisations should start by defining what ‘fairness’ (generally defined to refer to impartial and just treatment) means in the context of the overriding purpose of the relevant AI system and the output it produces.  Due to the potential for AI systems, particularly when used in a decision-making context, to affect lives, it is essential to have a management framework that quantitatively assesses the fairness of that AI system and how it operates.

As the notion of fairness is quite subjective (differing between legal systems, cultures, etc.) and broadly philosophical, fairness could be defined in various ways, including by reference to the legal precepts of discrimination, or the preferred outputs/outcomes of the AI system itself as they relate to certain individuals or groups, based on certain attributes (for example, legally protected attributes such as sex, race and disability status).  Ideally, an algorithm should treat every person equitably, and so observing whether or not an algorithm treats people differently based on those attributes is a helpful way to identify (and then seek to mitigate) unfairness.  However, the way fairness is defined will largely depend on the relevant use case and what the AI system is designed to achieve.  For example, ‘equality’ is not likely to be an appropriate fairness metric for an AI system that needs to understand and cater for the differences between the sexes in order to achieve equitable outcomes (e.g. where differences between the sexes are relevant to disease presentation and thus to achieving improved health outcomes for patients regardless of sex).
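
To make the idea of a quantitative fairness assessment concrete, the sketch below computes two illustrative measures: the demographic parity difference (the gap in positive-prediction rates between groups) and the equal opportunity difference (the gap in true-positive rates).  The data and group labels are hypothetical, and these metrics are examples only; as noted above, the right definition of fairness depends on the use case.

```python
# Minimal sketch: two illustrative fairness metrics over a protected attribute.
# `y_true` and `y_pred` are binary arrays; `group` holds a hypothetical
# protected attribute (e.g. "F"/"M").
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between groups."""
    groups = np.unique(group)
    rates = [y_pred[group == g].mean() for g in groups]
    return max(rates) - min(rates)

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates (recall) between groups."""
    groups = np.unique(group)
    tprs = []
    for g in groups:
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

# Hypothetical example: predictions for six patients, three per group.
y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 1, 1])
group  = np.array(["F", "F", "F", "M", "M", "M"])

print(demographic_parity_difference(y_pred, group))         # ~0.67
print(equal_opportunity_difference(y_true, y_pred, group))  # 0.5
```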

In addition, an important element of ensuring fairness is to have meaningful diversity in the teams developing, and assessing the fairness of, AI systems.  A team in charge of AI development, bias mitigation and fairness monitoring should be carefully curated: a culture where minority voices are heard and encouraged will facilitate the identification of fairness-related issues that may simply not have been considered by a team devoid of diversity.  This is also important because what a non-diverse group views as fair is likely to differ from what a diverse group, with multiple different perspectives, considers fair, such that a diverse group may better and more accurately define what fairness means in the context of the relevant AI system.

Data Review

Next, organisations should review all of the data with which the AI model has been, or is to be, trained (though this may be easier said than done).  It is pivotal that the data are representative and diverse, covering all relevant groups and demographics.  If any obvious under-representation of a demographic is identified in the data, the collection of additional data to address that gap should be considered.  Depending on the size and functionality of the AI tool, this may be a lengthy and difficult process; however, it is necessary in order to avoid an algorithm that perpetuates bias.
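
As a minimal sketch of what such a review might look like in practice (the file, column names and reference figures are hypothetical assumptions), the demographic mix of a training set can be compared against the population the system is intended to serve:

```python
# Minimal sketch: comparing the demographic mix of a training set against a
# reference population.  Column names and reference figures are hypothetical.
import pandas as pd

train = pd.read_csv("training_data.csv")        # hypothetical training data

# Observed share of each group in the training data
observed = train["sex"].value_counts(normalize=True)

# Expected share in the population the system will serve (assumed figures)
expected = pd.Series({"F": 0.51, "M": 0.49})

comparison = pd.DataFrame({"observed": observed, "expected": expected})
comparison["gap"] = comparison["observed"] - comparison["expected"]
print(comparison.round(3))

# Flag any group under-represented by more than, say, 10 percentage points
under_represented = comparison[comparison["gap"] < -0.10]
if not under_represented.empty:
    print("Consider collecting additional data for:", list(under_represented.index))
```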

Where it is not possible to collect additional data, the limitations of the training data should be made clear to all who use the system, and ongoing work should be done to try to rectify the deficit.  For instance, if the training data for an AI system used in life sciences or healthcare did not include data relating to pregnant women (and a study on pregnant women could not be ethically justified), then, to the extent the AI system is used in a real-world context for pregnant women, evidence should be gathered from those experiences to update the training data.

Attribute Selection

Organisations should then identify any attributes in the relevant data set that could be correlated with bias (for example, sex or race), as it is essential to be aware of the potential impact of these attributes on the relevant AI system’s outputs.  If an organisation acknowledges that its data set is limited in terms of its diversity and attribute representation, it needs to be conscious that the outputs/outcomes of the AI system may be biased, and it behoves that organisation to take the steps set out below in attempting to resolve that potential bias.
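
A simple, hedged starting point (the column names below are hypothetical) is to tabulate outcome rates against each candidate attribute, flagging for closer scrutiny any attribute whose groups see markedly different outcomes:

```python
# Minimal sketch: flagging attributes whose groups see markedly different
# outcome rates.  Column names and the threshold are hypothetical assumptions.
import pandas as pd

df = pd.read_csv("training_data.csv")           # hypothetical data set
candidate_attributes = ["sex", "ethnicity", "insurance_plan"]

for attribute in candidate_attributes:
    # Mean outcome (e.g. positive-label rate) per group for this attribute
    rates = df.groupby(attribute)["high_risk"].mean()
    spread = rates.max() - rates.min()
    flag = "REVIEW" if spread > 0.10 else "ok"  # illustrative threshold
    print(f"{attribute}: spread in outcome rate = {spread:.2f} ({flag})")
```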

Measure Bias

Once the AI system has been deployed, organisations should use the fairness metric developed previously to assess bias in the relevant system.  For example, biased outcomes may be identifiable from the volume of false positive results given in relation to a particular demographic, or from the manner in which the AI system ostensibly disregards certain individuals or classes of person based on certain attributes.  To build a control into this assessment, organisations should compare performance across different demographics and attribute combinations.
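
For instance, the false positive rate of a deployed classifier can be broken down by demographic group, and by combinations of attributes, along the lines of the sketch below (the logged file and column names are hypothetical assumptions):

```python
# Minimal sketch: false positive rate per demographic group and per
# combination of attributes.  Column names are hypothetical assumptions.
import pandas as pd

results = pd.read_csv("deployment_log.csv")  # hypothetical log with y_true, y_pred, sex, ethnicity

def false_positive_rate(group: pd.DataFrame) -> float:
    """Share of true negatives that the system incorrectly flagged as positive."""
    negatives = group[group["y_true"] == 0]
    if negatives.empty:
        return float("nan")
    return (negatives["y_pred"] == 1).mean()

# FPR per single attribute, then per combination of attributes
print(results.groupby("sex").apply(false_positive_rate))
print(results.groupby(["sex", "ethnicity"]).apply(false_positive_rate))
```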

Bias Mitigation And Ongoing Monitoring 

If algorithmic bias is detected in an AI system, then the organisation should take measures to mitigate that bias by implementing strategies that may include: 

  • augmenting or modifying the training data to ensure that all demographics are adequately represented, or otherwise, that the data is, as far as possible, balanced and diverse, taking into account the purpose of the AI system – this could involve:
    • procuring more data from various sources, which could be an expensive exercise depending on the AI system and the work required to ‘diversify’ the training data set.  Organisations should consider any additional risks that may arise from such an exercise (for example, collecting extensive personal information for this purpose will give rise to privacy risks); or
    • ‘cleaning’ the existing data to remove certain data points correlated with bias (for example, certain attributes like sex or race).  In doing this, organisations should be conscious that, in some cases, simply hiding certain attributes from a data set may not be sufficient to mitigate bias, if other pieces of information in the data set act as proxy variables for the protected attribute.  As an obvious example, in medical data, references to male reproductive organs would act as a proxy variable for words that otherwise denote the male sex.  Less obvious examples may include whether a person is on a particular insurance or medical aid plan, or whether they are employed, as these may be linked to socio-economic status or membership of a particular minority group.  Regardless of the specific application, an AI system will often draw connections that are not recognised by humans, such that a person ‘cleaning’ the data will not always identify the issue.  As a result, it may be necessary to identify (to the extent possible) the connections the AI system is making (see the proxy-variable sketch after this list);
  • adjusting the algorithm itself to enforce fairness constraints during training, including by adjusting the design or parameters of the model, or by changing its complexity; or
  • exercising greater human oversight of the system’s outputs, scrutinising them after the fact so that the final decisions are fairer.
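
On the proxy-variable point above, one hedged way to test whether a ‘cleaned’ data set still leaks a protected attribute is to check how well that attribute can be predicted from the remaining features; the sketch below (with hypothetical file and column names) uses a simple classifier for that purpose.

```python
# Minimal sketch: testing whether the remaining features act as proxies for a
# removed protected attribute.  File and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("cleaned_training_data.csv")   # hypothetical 'cleaned' data
protected = df["sex"]                           # retained separately for this test
features = df.drop(columns=["sex", "high_risk"])

# If a simple model can recover the protected attribute from the other
# features, those features are acting as proxies and bias may persist.
scores = cross_val_score(LogisticRegression(max_iter=1000), features, protected, cv=5)
print(f"Mean accuracy predicting the protected attribute: {scores.mean():.2f}")
# Accuracy well above the majority-class baseline suggests proxy variables remain.
```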

After implementing bias mitigation techniques, testing and validation of the relevant system's performance are imperative to check that these techniques have in fact reduced bias.  It is also important to continue to monitor the system’s performance, and feedback from users, to identify areas for improvement.  Finally, regular audits and reviews should be performed to identify any new instances of bias or changes of behaviour in the system itself, which processes should be carefully documented. 
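
As a final hypothetical sketch, ongoing monitoring can be as simple as recomputing an agreed disparity measure over each new batch of outputs and raising an alert when it exceeds a documented threshold (the threshold, metric and data below are illustrative assumptions only):

```python
# Minimal sketch: periodic re-check of a simple disparity measure on a new
# batch of predictions, with an alert when an agreed threshold is exceeded.
# The threshold, metric and data are illustrative assumptions.
import logging
import numpy as np

FAIRNESS_THRESHOLD = 0.10   # agreed maximum disparity; illustrative only

def audit_batch(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Return the gap in positive-prediction rates between groups, logging a warning if too large."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    disparity = max(rates) - min(rates)
    if disparity > FAIRNESS_THRESHOLD:
        logging.warning("Fairness threshold exceeded: disparity = %.2f", disparity)
    return disparity

# Hypothetical batch of recent outputs
y_pred = np.array([1, 0, 1, 1, 1, 0])
group  = np.array(["F", "F", "F", "M", "M", "M"])
print(audit_batch(y_pred, group))   # 0.0 (both groups at a 2/3 positive rate)
```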

While the complete elimination of algorithmic bias may not always be achievable, continuous efforts to address this issue will lead to more ethical and equitable applications of AI over time.

Outside The AI System 

In addition to the above steps, which focus on the AI system itself, organisations should also ensure that the parts of the organisation that surround the relevant AI system are able to assist in reducing the presence of algorithmic bias and the likelihood of discriminatory outcomes.  For instance, organisations should ensure that they assign specific individuals to monitor the relevant AI system and its outputs, in order that biased and inaccurate outcomes can be detected and mitigated proactively.  In addition, those parts of an organisation that interact with the relevant AI system should be trained in the concepts of bias and discrimination in order to be able to identify such issues in the system’s outputs.  Having the right personnel involved in AI development and deployment is an important aspect of minimising bias.

Further, AI systems that are customer-facing also raise questions around bias and discrimination, not necessarily in respect of the AI system itself, but in terms of the ease with which it may be accessed by a diverse range of users.[11]  Making AI systems accessible, non-discriminatory and user-friendly is a key imperative for any organisation that implements AI in its customer-facing operations, and speaks directly to the global focus on ensuring diversity and inclusion in industry.

Third Party Purchased / Licensed AI

The issues pertaining to bias in AI systems apply to any sort of system that learns from training data, whether that system was developed by an organisation, or purchased or licensed from a third party (Provider).  Even where an organisation is purchasing or licensing an AI system from a Provider, that organisation will remain ultimately responsible for it, and will also be accountable in most instances for the effect that an AI system has on the lives of the people that interact with it.  For example, in the life sciences and healthcare sectors, an AI-driven diagnostics tool could misdiagnose a patient, causing them to suffer loss.  In that situation, the aggrieved individual is likely to seek a remedy from the organisation that misdiagnosed them using the AI tool (e.g. a hospital), rather than the Provider. 

Acknowledging that, in this context, the user organisation will generally lack the technical permissions or capabilities to assess and mitigate bias in the Provider’s system, those user organisations should ensure that their contractual arrangements with Providers assist, as far as is reasonable, in the mitigation of issues pertaining to bias, including by:

  • ensuring that, under the relevant contract, the Provider commits to obligations around the testing of the system (including algorithmic bias testing) and monitoring of outputs, coupled with obligations to remediate any issues identified;
  • requiring that, under the relevant contract, the Provider gives an appropriate level of transparency in relation to the workings of the relevant system, as well as the results of any algorithmic bias testing undertaken and any steps that will be taken to mitigate or resolve any bias that is identified.  Further, prior to contracting, organisations should require the Provider to supply detailed information about these matters.  This puts the organisation in the best position to understand and accept the inherent risks of the AI system in question or, alternatively (particularly if this information cannot be provided or is unsatisfactory), to refrain from using the relevant system;[12] and
  • ensuring the inclusion of appropriate warranties and indemnities in the relevant contract with the Provider, particularly in relation to third party claims arising in connection with the use or operation of the relevant AI system.  Although this will not do anything to prevent the occurrence of bias in the system, it will mitigate an organisation’s potential liability in connection with the operation of the system and may also act as a powerful incentive for the Provider to implement measures to mitigate its own liability.

Looking Forward

Despite a majority of Australians remaining hesitant towards AI and lacking trust in its ability to remain unbiased, over 60% of Australians surveyed agreed that products and services using AI will profoundly change their daily life within the next three to five years.[13]  As a result, strategies to mitigate algorithmic bias should be front-of-mind for developers, creators, providers and user organisations.  By implementing the above practical steps, involving both the AI system itself as well as the people surrounding it, it is possible to reduce the occurrence and consequences of algorithmic bias.

How can DLA Piper Help?

DLA Piper’s global, cross-functional team of 100+ lawyers, data scientists, programmers, coders and policymakers deliver technical solutions to our clients all over the world, on AI adoption, procurement, deployment, risk mitigation, monitoring and testing, and legal and regulatory compliance.  We also offer a unique-to-market forensic data science capability, enabling us to help clients monetise and productise data, develop AI systems and algorithmic models in a legal and ethical manner, and conduct verification of AI systems to detect and mitigate algorithmic bias.

Because AI is not always industry-agnostic, our team also adopts a sector focus and has extensive experience in a range of sectors; we’re helping industry leaders and global brand names stay ahead of the AI curve.

Resources:

[1] https://www.ipsos.com/sites/default/files/ct/news/documents/2023-07/Ipsos%20Global%20AI%202023%20Report.pdf 

[2] Ibid.

[3] Australian Government, Safe and responsible AI in Australia, Discussion paper (2023) p 7.

[4] https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/ 

[5] https://blogs.lse.ac.uk/impactofsocialsciences/2020/08/26/fk-the-algorithm-what-the-world-can-learn-from-the-uks-a-level-grading-fiasco/

[6] https://www.abc.net.au/news/2020-12-02/job-recruitment-algorithms-can-have-bias-against-women/12938870 

[7] https://hbr.org/2020/11/a-simple-tactic-that-could-help-reduce-bias-in-ai 

[8] Caroline Criado Perez, Invisible Women (Vintage, 2020), Part IV: Going to the doctor.

[9] Australian Government, Safe and responsible AI in Australia, Discussion paper (2023) p 7.

[10] https://www.ibm.com/topics/explainable-ai                                               

[11] Lauren Solomon and Nicholas Davis, The State of AI Governance in Australia, Human Technology Institute, The University of Technology Sydney (2023), p 46.

[12] https://www.statnews.com/2023/05/23/chatgpt-questions-health-care-patients-artificial-intelligence/

[13] https://www.ipsos.com/sites/default/files/ct/news/documents/2023-07/Ipsos%20Global%20AI%202023%20Report.pdf
