The integration of artificial intelligence (AI) and machine learning (ML) into medical devices has revolutionized health care, enabling advancements in patient diagnosis, treatment personalization and management.
This technological evolution, however, introduces challenges in regulatory compliance and product liability that necessitate a nuanced approach to ensure patient safety without hindering innovation. In April 2023, the U.S. Food and Drug Administration (FDA) responded to these challenges by issuing a draft guidance titled “Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions.” The draft guidance aims to facilitate the safe and effective iterative improvement of AI/ML technologies, termed machine learning-enabled device software functions (ML-DSFs), within medical devices through a Predetermined Change Control Plan (PCCP), which outlines planned modifications and the methods for assessing their impact so that device performance can be improved responsively.
The FDA’s effort to establish guiding principles for PCCPs, in collaboration with international regulatory bodies such as Health Canada and the U.K.’s Medicines & Healthcare products Regulatory Agency (MHRA), underscores a commitment to global regulatory harmonization and informs the development of Good Machine Learning Practice (GMLP). This initiative aims to ensure that AI/ML-enabled devices are developed with an emphasis on diversity and health equity, addressing the needs of varied patient demographics. Aligning with broader governmental ethical considerations, the FDA’s guidance resonates with the White House’s “Blueprint for an AI Bill of Rights,” which advocates for safe, transparent and equitable AI use and emphasizes protection from algorithmic discrimination, data privacy and the availability of human alternatives.
However, the autonomous decision-making capability of AI/ML technologies complicates traditional product liability frameworks. The potential for continuous learning and adaptation post-deployment, coupled with the opaque “black box” nature of some AI algorithms, challenges the attribution of liability in adverse events. Determining whether liability lies with the manufacturer, the health care provider or the algorithm developers becomes increasingly complex.
To navigate this evolving landscape, manufacturers and stakeholders are advised to engage actively with the FDA’s guidance, participate in public discussions, and prioritize the transparency and explainability of their AI/ML systems. Ensuring that health care providers and patients understand how these devices reach their decisions is crucial. Implementing robust data governance and ethical AI practices from the outset of device development, in line with the principles advocated in the AI Bill of Rights, is essential.
Manufacturers must also consider the total product lifecycle (TPLC) of their AI/ML-enabled devices, emphasizing comprehensive risk management and quality systems to maintain safety and efficacy as the devices evolve. This includes rigorous testing and validation that accommodate the dynamic nature of AI/ML technologies, consideration of the foreseeability of potential harms, and clear warnings and communication strategies regarding device updates and modifications.
In conclusion, while the integration of AI/ML in medical devices presents significant opportunities for enhancing patient care, it also introduces notable regulatory and liability challenges. The FDA’s proactive steps towards creating a flexible yet robust regulatory framework highlight a path forward for manufacturers. By adhering to these guidelines, engaging in ongoing dialogue with regulatory bodies, and prioritizing ethical considerations, the medical device industry can effectively navigate the complexities of AI/ML integration, ensuring these technologies realize their potential in improving health care outcomes while mitigating associated risks and liabilities.
Eric M. Kraus is a partner at Phillips Lytle LLP, a member of the firm’s Litigation Practice Team and co-chair of the firm’s Life Sciences and Health Effects Team. He can be reached at ekraus@phillipslytle.com or (212) 508-0408.
George Hajduczok is counsel at Phillips Lytle LLP and advises drug and medical device companies on digital technologies and regulatory compliance as a member of the firm’s Life Sciences and Health Effects Team. He can be reached at ghajduczok@phillipslytle.com or (716) 504-5772.