On January 12, 2021, the U.S. Food and Drug Administration (“FDA”) released its Artificial Intelligence/Machine Learning (“AI/ML”)-Based Software as a Medical Device (“SaMD”) Action Plan (“Action Plan”).1 The Action Plan, which outlines a series of steps the Agency will take to further its regulatory oversight of AI/ML-based SaMD, is the latest effort in the Agency’s continuing push to modernize and clarify the regulation of digital health technologies.
FDA has taken a number of important steps in recent years to advance digital health—including AI/ML technologies—by clarifying its regulatory framework, a topic explored in two articles written by the co-leads of Ropes & Gray’s Digital Health Initiative and published in Law360 and Bloomberg Law at the end of 2020.2 FDA efforts have included issuing draft and final guidance documents addressing clinical decision support software and multifunctional devices, among other topics,3 and issuing a Proposed Regulatory Framework for Modifications to AI/ML-Based SaMD (the “Proposed Framework”) for stakeholder feedback in April 2019.4 More recently, in September 2020, FDA launched the Digital Health Center of Excellence within the Center for Devices and Radiological Health (“CDRH”), a centralized resource intended to help the Agency and external stakeholders build partnerships to accelerate digital health development and regulation.
AI/ML-based SaMD is a rapidly progressing field, with new technologies emerging that have the potential to transform medical product development and patient care. As described below, the publication of FDA’s Action Plan builds on FDA’s Proposed Framework and provides additional clarity regarding the Agency’s approach to AI/ML-based SaMD.
FDA’s Proposed Regulatory Framework for Modifications to AI/ML-Based SaMD
In April 2019, FDA issued the Proposed Framework as a discussion paper, outlining for public comment the Agency’s proposed regulatory approach for medical devices using AI and ML. FDA recognized in the Proposed Framework that the traditional paradigm of medical device regulation is not well suited to AI/ML-based SaMD and that a new regulatory approach is warranted to provide “regulatory oversight to embrace the iterative improvement power of AI/ML SaMD while assuring that patient safety is maintained.”5 The Proposed Framework was limited in scope: it applied only to commercial SaMD, as opposed to SaMD used for research purposes, and addressed only modifications to pre-existing SaMD rather than original clearances and approvals of such products.
In recognition of the challenges inherent in regulating devices that are continuously learning and evolving, the Proposed Framework expressly noted the importance of undertaking a total product lifecycle (“TPLC”) regulatory approach for AI/ML-based SaMD that leverages risk-based principles for SaMD set out by the International Medical Device Regulators Forum (“IMDRF”),6 FDA’s benefit-risk framework for premarket review of medical devices,7 risk management principles in FDA guidance for modifications to 510(k)-cleared software devices,8 and the firm-based TPLC approach as envisioned in the Digital Health Software Precertification (Pre-Cert) Program. The Proposed Framework described a four-pronged approach to the regulation of AI/ML-based SaMD:
- Quality Systems and Good Machine Learning Practices (“GMLP”): Manufacturers of AI/ML-based SaMD will be expected to establish cultures of quality and demonstrate compliance with GMLPs;
- Initial Premarket Assurance of Safety and Effectiveness: Manufacturers will be expected to submit plans for modifications for premarket review, and these submissions will be expected to include a “Predetermined Change Control Plan” that details information about both the types of anticipated modifications to the software (the “SaMD Pre-Specifications” or “SPS”) and the methodology underlying algorithm changes to ensure that the device remains safe and effective after the modification (the “Algorithm Change Protocol” or “ACP”);
- Approach for Modifications After Initial Review with an Established SPS and ACP: The type of modification being made to an AI/ML-based SaMD product will dictate the type of notification required to be submitted to FDA. Certain types of modifications, as described in the framework, will require new premarket submissions, while others will merely require the submission of records related to the modification in the change history and other documents;
- Transparency and Real-World Performance Monitoring of AI/ML-based SaMD: Manufacturers will be expected to implement mechanisms to ensure transparency about the functions of and modifications to medical devices; the Proposed Framework suggests that manufacturers may need to provide periodic updates to FDA, as well as notices to health care professionals and users of the products, explaining software updates in detail. The Proposed Framework also encourages manufacturers to leverage real-world performance data and monitoring to understand how their products are being used and to respond proactively to safety and risk issues that may emerge.
FDA’s AI/ML-Based SaMD Action Plan
In the Action Plan, FDA responds to stakeholder feedback it received on the Proposed Framework.9 The Action Plan outlines five actions that, in light of the comments received, the Agency believes will advance practical oversight of AI/ML-based SaMD:
- Updating the Proposed Framework for AI/ML-based SaMD, including through issuance of draft guidance on the Predetermined Change Control Plan;
- Encouraging harmonization of Good Machine Learning Practice development;
- Promoting user transparency and a patient-centered regulatory approach, including by holding a public workshop on how device labeling supports transparency to users and enhances trust in AI/ML-based devices;
- Supporting regulatory science efforts to develop methodology for the evaluation and improvement of machine learning algorithms, including for the identification and elimination of bias, and for the promotion and evaluation of algorithm robustness; and
- Working with stakeholders who are piloting real-world performance processes for AI/ML-based SaMD.
Tailored Regulatory Framework for AI/ML-Based SaMD
FDA received extensive comments on the Proposed Framework, including in particular on the “Predetermined Change Control Plan” concept. In the Action Plan, FDA explains that it plans to publish draft guidance addressing Predetermined Change Control Plans, which will speak to the information manufacturers should include in the SPS and ACP to support the safety and effectiveness of AI/ML-based SaMD algorithms. The guidance will draw on both stakeholder comments on the Proposed Framework and FDA’s experience reviewing submissions for AI/ML-based algorithms, such as Caption Health Inc.’s AI/ML-based cardiac ultrasound software, which FDA authorized in February 2020 through the de novo pathway. FDA’s goal is to issue the draft guidance on Predetermined Change Control Plans in 2021. The Agency also intends to continue working with stakeholders to refine the Proposed Framework, including by clarifying the types of modifications appropriate under the framework and the specifics of its focused review of AI/ML-based submissions.
Good Machine Learning Practice
FDA’s Proposed Framework advanced the concept of “Good Machine Learning Practices,” or GMLP, as a set of AI/ML best practices in areas such as data management, interpretability, evaluation, and documentation. FDA has been actively engaged in efforts to harmonize GMLP, working with the IMDRF and other international standards organizations. As part of the Action Plan, FDA commits to deepening its harmonization efforts with these organizations and continuing to leverage this work to achieve clear and harmonized GMLP standards. In addition, given the importance of cybersecurity considerations to the safety of AI/ML-based technologies, FDA notes that its GMLP efforts will be pursued in close collaboration with the Agency’s Medical Device Cybersecurity Program.
Patient-Centered Approach Incorporating Transparency to Users
FDA believes that AI/ML-based devices present unique considerations that necessitate a proactive, patient-centered approach to their development and use, one that accounts for issues such as usability, equity, trust, and accountability. A key way FDA intends to address these issues is through transparency. However, the Action Plan acknowledges that, as expressed by numerous stakeholders, AI/ML-based devices pose particular challenges with regard to labeling and ensuring that users understand their outputs, benefits, risks, and limitations. FDA plans to solicit feedback, including through a public workshop, to help the Agency develop recommendations on the types of information manufacturers should include in the labeling of AI/ML-based medical devices to support transparency.
Regulatory Science Methods Related to Algorithm Bias and Robustness
FDA recognizes that AI/ML-based SaMD technologies are not immune to bias. In particular, because such systems are developed and trained on historical datasets, they are prone to reflecting the biases present in those datasets. Biases in the current health care system, such as variations in health care delivery by race, ethnicity, and socio-economic status, could be inadvertently introduced into AI/ML-based algorithms trained and developed using data that reflect those biases. To address these challenges, FDA plans to continue to support and collaborate on regulatory science research focused on AI/ML algorithm development and evaluation, including methods for identifying and eliminating bias and methods for ensuring that AI/ML-based algorithms are robust and resilient to changing clinical inputs and conditions.
Real World Performance
As part of a TPLC approach to AI/ML-based SaMD development, FDA believes that real-world data collection and monitoring will be an important mechanism for manufacturers to understand how their products are being used, identify opportunities for improvement, and respond proactively to safety or usability concerns. In response to the Proposed Framework, FDA received many questions from stakeholders about the use of real-world data, underscoring the need for greater clarity and direction from FDA. FDA plans to support the piloting of real-world performance (“RWP”) monitoring by working with industry stakeholders on a voluntary basis. These efforts will be undertaken in coordination with other ongoing FDA initiatives focused on promoting the use of real-world evidence in product development and post-market evaluation, which Ropes & Gray’s Life Sciences Regulatory & Compliance attorneys previously discussed in the podcast series “Non-Binding Guidance” in August 2019 and September 2020. Ultimately, FDA hopes that lessons from the pilot programs and further engagement with the public will assist the Agency in creating an RWP monitoring framework for AI/ML-based SaMD.
Limitations of FDA’s Proposed Framework and Action Plan
While FDA’s Action Plan is an important next step in advancing the regulatory framework for AI/ML-based SaMD, there remain a number of substantive issues related to the regulation of these products that are either entirely absent from consideration in the Action Plan or are otherwise left open. For example:
- The Proposed Framework is limited to outlining the potential regulation of devices for commercial distribution and does not discuss the use of AI/ML-based devices in research. However, AI/ML innovations in research, including research related to drug discovery, are becoming increasingly common. As FDA has done for other digital health technologies, such as remote patient monitoring, the Agency may ultimately have to address the unique considerations that arise in the research context when relying on AI/ML-based SaMD.
- The Proposed Framework identified three broad types of modifications to AI/ML-based SaMD, each of which may raise different regulatory issues: (1) modifications related to performance (e.g., increased sensitivity based on further training of the algorithm); (2) modifications to inputs (e.g., to support compatibility with CT scanners from additional manufacturers); and (3) modifications to intended use (e.g., changing from providing an informative “confidence score” to providing a definitive diagnosis). FDA notes in the Action Plan that this was a significant area of stakeholder feedback and identifies it as an area for further work by the Agency.
Certainly, these questions do not exhaust the additional policy development that is needed to ensure a comprehensive regulatory approach for AI/ML-based SaMDs, and there is significant work ahead for both FDA and industry. FDA’s Action Plan makes clear, however, that the Agency is prepared to continue to work with industry stakeholders to create a comprehensive and effective framework for regulating this novel technology, and we expect these efforts will help spur continued development of AI/ML-based medical devices in the years to come.
1. U.S. Food and Drug Administration, Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan (2021), available at https://www.fda.gov/media/145022/download.
2. See Megan Baca et al., Digital Health 2021: Trends in Big Data, AI, Telehealth, and Beyond, Bloomberg Law, available at https://news.bloomberglaw.com/health-law-and-business/digital-health-2021-trends-in-big-data-ai-telehealth-and-beyond; Megan Baca et al., Expert Analysis: 2 Major Digital Health Trends Driven by COVID-19, Law360, available at https://www.law360.com/articles/1339567.
3. See also Gregory Levine and Abram Barth, Expert Analysis: Lessons From FDA Draft Guidance On Multifunctional Devices, Law360, available at https://www.law360.com/articles/1046613.
4. FDA, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) (2019), available at https://www.fda.gov/media/122535/download.
5. Proposed Framework at 4.
6. See IMDRF, Software as a Medical Device: Possible Framework for Risk Categorization and Corresponding Considerations (2014), available at http://www.imdrf.org/docs/imdrf/final/technical/imdrf-tech-140918-samd-framework-risk-categorization-141013.pdf.
7. See FDA, Guidance for Industry and FDA Staff: Factors to Consider When Making Benefit-Risk Determinations in Medical Device Premarket Approval and De Novo Classifications (2019), available at https://www.fda.gov/media/99769/download.
8. See FDA, Guidance for Industry and FDA Staff: Deciding When to Submit a 510(k) for a Software Change to an Existing Device (2017), available at https://www.fda.gov/media/99785/download.
9. See FDA, News Release: FDA Releases Artificial Intelligence/Machine Learning Action Plan (Jan. 12, 2021), available at https://www.fda.gov/news-events/press-announcements/fda-releases-artificial-intelligencemachine-learning-action-plan.