Digital health tools have proliferated in recent years, driven by the rapid adoption of telehealth and remote care during the COVID-19 pandemic as well as the swift evolution of artificial intelligence (AI) technologies. Such tools are recognized for their potential to expand access to care, improve efficiency in the healthcare system, and drive patient engagement in their own healthcare journeys, holding promise for improved health outcomes and reduced costs, particularly for chronic conditions.
However, the accelerated development and adoption of digital health tools – many without regulatory oversight – have not necessarily included consensus or standards on how to adequately test, deploy, or govern these technologies. Additionally, regulators are increasingly discussing and proposing regulatory guardrails for AI tools. This leaves many in the healthcare ecosystem, from digital health technology developers to hospital networks, struggling to negotiate the real-world practicalities of smoothly and successfully developing, assessing, and using digital health tools.
Various groups have formed to help organizations in the healthcare space make more informed decisions around digital health and AI-based solutions. Two such groups released practical guidance in May 2023:
- The Health AI Partnership (HAIP), a collaboration of healthcare organizations and ecosystem partners, including DLA Piper, released a series of practical guides setting out best practices for health systems adopting AI solutions, and
- A group of healthcare experts organized by the Digital Medicine Society (DiMe) released a framework for evaluating the quality of digital health tools based on their clinical evidence.
In this alert, we take a closer look at these frameworks and their impact on the industry.
Health AI Partnership best practice guides
HAIP is a multi-stakeholder collaborative, led by a team of clinicians, engineers, lawyers, and social scientists from DLA Piper, Duke Health, Mayo Clinic, and UC Berkeley, formed to establish best practices for healthcare organizations to use AI safely, effectively, and equitably.
Between April 2022 and January 2023, HAIP conducted in-depth interviews with over 90 professionals in healthcare and related fields, including those with clinical, technical, operational, and regulatory roles. Interviewees were collectively knowledgeable in bias, ethics, community engagement, organizational behavior, regulation, and credentialing.
In February 2023, HAIP held a case-based workshop attended by 75 industry leaders addressing a contemporary challenge: “Our health care delivery setting is considering adopting a new solution that uses AI. How do we assess the potential future impact on health inequities?” The interviews and workshop insights were then synthesized to surface themes and practical learnings.
This research culminated in mid-May with the release of a collection of best practices guides for healthcare professionals and organizations seeking to implement AI tools. These guides address the AI product life cycle, from identifying a problem that AI might solve, through development, deployment, and decommissioning of AI tools in the healthcare setting. The guides are organized around eight decision points and map out the dependencies and flows for implementing an AI solution within health systems:
Procurement
- Identify and prioritize a problem. Healthcare delivery organizations should identify and prioritize the problems affecting their organizations and stakeholders. The guides help inform the assessment of AI tools as a technical approach to addressing the identified problems.
- Define AI product specification. Featured guides offer instruction for assessing the feasibility and viability of adopting AI to solve problems and for conducting assessments of AI products.
Development
- Develop success measures. The guides provide instruction to organizations for defining the scope of use, constraints, and dependencies of AI products as well as technical performance targets for AI products and measures of success.
- Design AI solution workflows. These guides include instruction for adapting pre-existing operational structures, workflows, and technologies to enable successful integration and optimal clinician support for AI products.
- Generate evidence of safety, efficacy, and equity. These guides provide validation tools for assessing AI products prior to clinical use and identifying potential risks from AI use in clinical care.
Integration
- Execute AI solution rollout. These guides address the dissemination of information about AI products to affected clinicians, the management of workflow changes, and the prevention of inappropriate use of AI products beyond their intended scope.
Lifecycle management
- Monitor the AI solution. These guides cover the monitoring of AI solutions over time, including audits of AI products and proactive risk identification in changing environments.
- Update or decommission the AI solution. These guides cover expanding the use of AI products to new settings, updating existing AI products, and decommissioning AI products, including minimizing the disruptions and harms that decommissioning could cause.
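For readers who think in code, the eight decision points above can be represented as an ordered lifecycle grouped by phase. This is purely an illustrative sketch: the phase and decision-point names come from the guides, but the data structure itself is our assumption, not something HAIP publishes.

```python
# Illustrative sketch: the eight HAIP decision points as an ordered
# lifecycle, grouped by phase. The dictionary layout is an assumption
# for illustration; only the names come from the HAIP guides.

HAIP_LIFECYCLE = {
    "Procurement": [
        "Identify and prioritize a problem",
        "Define AI product specification",
    ],
    "Development": [
        "Develop success measures",
        "Design AI solution workflows",
        "Generate evidence of safety, efficacy, and equity",
    ],
    "Integration": [
        "Execute AI solution rollout",
    ],
    "Lifecycle management": [
        "Monitor the AI solution",
        "Update or decommission the AI solution",
    ],
}

# Walk the decision points in order, e.g. to track governance progress
# for a given AI product within a health system.
for phase, points in HAIP_LIFECYCLE.items():
    for point in points:
        print(f"{phase}: {point}")
```

An organization might extend such a structure with completion status or responsible owners per decision point, but any such fields would be the adopter's design choice, not HAIP's.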
This content is designed to offer healthcare stakeholders a comprehensive, pragmatic source of guidance to help navigate decisions regarding the adoption, implementation, and use of AI. HAIP's goal in publishing these guides is to better define the minimum elements of organizational governance for AI systems in healthcare settings and to support health system leaders in making more informed decisions around AI adoption.
HAIP invites healthcare delivery professionals and the wider community to engage with the guides and best practices on its website as well as provide feedback. The organization noted that it welcomes suggestions on additional case studies or topics for inclusion in the guides.
DiMe Evidence DEFINED Framework
At the end of May, a group of healthcare experts organized by DiMe proposed a framework to evaluate digital health tools. The framework, titled Evidence in Digital Health for EFfectiveness of INterventions with Evaluative Depth (Evidence DEFINED – as used herein, the Framework), seeks to create a standardized process for evaluating digital health interventions (DHIs) on the basis of their clinical evidence. The move highlights the desire among payers and providers to efficiently identify the most effective DHIs in the increasingly crowded digital health space and to select DHIs that are equitable, effective, and safe.
The researchers behind the Framework found that existing assessment frameworks were not designed for the assessment of digital tools and, of the 78 frameworks reviewed, none met all the needs and aims of adopters, such as health systems, payers, pharmacy benefit managers, and pharmaceutical companies. In response, the Framework was established to emphasize clinical evaluation and validation of DHIs while, at the same time, encouraging a streamlined and efficient review process to keep pace with technological advancements in healthcare.
The Framework includes four components: data privacy; clinical assurance and safety; usability and accessibility; and technical security and stability. The primary goals of the Framework are to facilitate standardized, rapid, rigorous DHI evidence assessment in organizations and guide digital health solutions providers who wish to generate evidence that drives DHI adoption.
The Framework sets out four steps to help ensure stakeholders make DHI selection decisions based on clinical evidence. The authors note that this process should be performed by evaluators with appropriate expertise, such as physicians, researchers, and clinical trialists:
- Screen for failure to meet absolute requirements. Stakeholders should identify their own threshold requirements for potential DHIs – such as HIPAA compliance or, where the DHI is subject to FDA regulation, the proper clearances or approvals – and screen out any DHIs that fail to meet those requirements.
- Apply an established evidence assessment framework. Stakeholder organizations should apply existing evidence assessment frameworks developed for non-digital interventions, such as GRADE.
- Apply the Evidence DEFINED supplementary checklist. Stakeholders should apply the Evidence DEFINED supplementary checklist to supplement existing frameworks with evidence quality concerns that are particularly important to DHIs due to either their digital nature or the regulatory landscape.
- Make actionable, defensible recommendations. Stakeholders should use the evidence-to-recommendation guidelines to provide a recommendation regarding appropriate adoption levels based on clinical evidence.
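The four steps above amount to a screen-then-grade pipeline, which can be sketched in a few lines of code. Everything in this sketch is a hypothetical illustration: the field names, thresholds, and recommendation tiers are our assumptions, not part of the published Framework, which leaves the specific criteria to each adopting organization.

```python
# Hypothetical sketch of the Evidence DEFINED selection process.
# The DHI fields, score threshold, and recommendation tiers below are
# illustrative assumptions, not the Framework's actual criteria.

from dataclasses import dataclass


@dataclass
class DHI:
    """A candidate digital health intervention under evaluation."""
    name: str
    hipaa_compliant: bool            # example absolute requirement
    fda_cleared_if_required: bool    # example absolute requirement
    evidence_grade: str              # from an established framework, e.g. GRADE
    checklist_score: int             # illustrative supplementary-checklist tally


def screen(dhi: DHI) -> bool:
    """Step 1: screen out DHIs failing the organization's absolute requirements."""
    return dhi.hipaa_compliant and dhi.fda_cleared_if_required


def recommend(dhi: DHI) -> str:
    """Steps 2-4: combine an established evidence grade with the
    supplementary checklist to reach an adoption recommendation."""
    if not screen(dhi):
        return "reject"
    if dhi.evidence_grade == "high" and dhi.checklist_score >= 8:
        return "adopt"
    if dhi.evidence_grade in ("high", "moderate"):
        return "pilot"
    return "monitor"


candidates = [
    DHI("App A", True, True, "high", 9),
    DHI("App B", True, False, "high", 9),   # fails the FDA screen in step 1
    DHI("App C", True, True, "low", 4),
]
for c in candidates:
    print(c.name, recommend(c))   # App A adopt / App B reject / App C monitor
```

In practice, each organization would substitute its own threshold requirements in step 1 and calibrate the grade and checklist cutoffs in steps 2 through 4 to its own risk tolerance; the value of the Framework lies in making those criteria explicit and defensible rather than in any particular cutoff.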
The Framework explicitly excludes feature checklists for evaluating DHIs and suggests that they may have more drawbacks than benefits. Specifically, feature checklists increase the time required to evaluate DHIs and may yield misleading assessments of clinical value as “checking boxes” does not necessarily equate to clinical efficacy.
Importantly, the Framework is designed to keep pace with the rapidly evolving field of digital health innovation: its authors have created a public website for industry collaboration and feedback and established a reassessment process to update the Framework every 6 to 12 months. While the Framework is designed to assess clinical evidence, other domains should also be considered when evaluating DHIs, specifically health equity, patient experience, cost-effectiveness, health literacy, interoperability, data governance, and overall product design.
Impacts
The release of HAIP’s guides for assessing and implementing AI and DiMe’s Evidence DEFINED Framework reflects the growing appetite for evidence-based, practical guidance and standardization in the space to help deliver the clinical value digital health solutions promise. Health systems, physician groups, payers, pharmaceutical and device manufacturers, patients, and other adopters of digital health tools could all benefit from the insights shared by these resources. Both highlight the importance of including diverse stakeholders in digital health projects, including those with deep regulatory experience.
The HAIP guides, the Framework, and others like them are useful, contemporary resources for organizations looking to build, bolster, or stay current on the latest concepts for responsibly assessing and governing digital health solutions. Of course, readers are encouraged to drill down on the directions proposed in these publications and seek counsel that is able to apply the concepts in the real world. DLA Piper’s Digital Health and AI practices spend significant time advising and strategizing with clients at a granular level on digital health adoption issues, including reimbursement, clinical trial setup, and AI bias testing, helping to operationalize industry best practices.
For more information about HAIP and its new guides, the Evidence DEFINED Framework, or DLA Piper’s Digital Health and AI capabilities, please contact your DLA Piper relationship partner, the authors of this alert, or any member of our Healthcare industry group.