Cortex - Life Sciences Insights


Regulation of Life Sciences AI in Australia

In an important development for life sciences and healthcare companies, on 17 January 2024 the Australian Government released its interim response to its 2023 consultation paper “Safe and Responsible AI in Australia”, having received over 500 submissions from an array of stakeholders. The interim response confirms that the Government will consider regulating “high-risk” AI use cases via a series of mandatory requirements, to be developed in further consultation with industry and the wider community, and in conjunction with current regulatory reviews and legislative reforms.

Ed Husic, Minister for Industry and Science, has provided an initial view that high-risk AI will encompass “anything that affects the safety of people’s lives, or someone’s future prospects in work or with the law”, and it is anticipated that AI use cases in healthcare could be among the first areas addressed.

While it is not yet clear which specific AI use cases will be considered high-risk (and regulated accordingly), many of the ways in which AI is used in the life sciences and healthcare sectors (for example, AI-enabled robots in surgical procedures, or AI technologies designed to predict, detect, diagnose and treat disease) would seem to satisfy the threshold test for high-risk AI. Accordingly, life sciences and healthcare companies should expect the introduction of mandatory requirements for their AI use cases (to the extent these are, or could be, considered high-risk), requirements that, according to the Government, are likely to centre on the principles of “testing, transparency and accountability”. However, these companies should not expect to be overburdened: the Government has made clear that AI use cases in low-risk settings should be allowed to continue to flourish largely unimpeded.

For life sciences and healthcare organisations that provide, procure or use AI technologies that could be considered high-risk (or that plan to do so in future), it is essential to follow developments in this space and take a proactive approach to compliance with incoming laws and regulations. Central to this is developing an in-depth understanding of their current (or anticipated) AI use cases, as well as of the inherent risk profile of each.

DLA Piper is here to help: our global, cross-functional team of over 100 lawyers, data scientists, programmers and policymakers delivers technical solutions to clients all over the world, covering AI adoption, procurement, deployment, risk mitigation, monitoring and testing, as well as legal and regulatory compliance. We also offer a unique-to-market forensic data science capability, enabling us to help clients monetise and productise data, develop AI systems in a legal and ethical manner, and verify AI systems to detect and mitigate algorithmic bias.

To stay up to date on the latest developments in this area, visit Cortex, our dedicated life sciences blog.

Tags

healthtech