Rochester Business Journal
Artificial intelligence (AI) has been transforming the health care industry for decades, providing solutions that enhance the efficiency, accuracy and accessibility of medical services.
Use cases include robotic surgical technology, AI algorithms that assist with diagnostics and personalized treatment plans, and large language models that can draft a medical note from the transcript of a patient visit. AI’s applications are vast and continually expanding, and the speed of innovation promises a future where health care is more precise and patient-centered. This article discusses examples of innovative applications of AI within the health care industry and introduces the patchwork of health care laws and regulations that surround and impact the deployment of such technology.
One of the most significant AI contributions to health care is improvement in diagnostic accuracy. Machine learning algorithms, trained on vast datasets of medical images and records, detect anomalies and patterns that may be missed by human eyes. For example, AI systems have shown remarkable proficiency in identifying early stages of diseases such as cancer, even before symptoms manifest.
AI is also at the forefront of personalized medicine, tailoring treatment to an individual’s genetic makeup, lifestyle and other factors. The expansion of electronic health records (EHRs), nationally and globally, has produced enormous amounts of readily accessible stored data. By analyzing data from various sources, including genomic data and EHRs, AI can predict how a patient might respond to different treatments. This personalized approach improves patient access to effective therapies while lessening harmful side effects, thus optimizing the overall treatment process.
Beyond direct patient care, AI is streamlining administrative and operational aspects of health care. Natural language processing (NLP) tools automate the documentation process, reducing the burden on health care providers and allowing them to focus on patient care. Predictive analytics forecast patient admissions, optimize staffing and help hospitals meet patient needs without unnecessary delays or wasted resources. Further, clinical decision support (CDS) software assists clinicians with computerized physician order entry and electronic prescribing. AI-powered virtual health assistants are transforming patient interactions with health care services, providing 24/7 support, answering questions, scheduling appointments and even monitoring patients’ health through remote monitoring devices. Lastly, as we have seen throughout the COVID-19 pandemic, AI-augmented telemedicine enables remote diagnosis and monitoring, making health care more accessible and providing a lifeline for underserved areas.
While there is no one comprehensive federal regulation addressing health care and AI, there are several regulatory agencies which have enacted regulations that govern the use of certain AI technologies. First, the U.S. Food and Drug Administration (FDA) regulates the production and sale of medical devices in the United States through the Federal Food, Drug, and Cosmetic Act (FD&C Act) and related rules and regulations.
Because AI presents itself differently depending on its application (as an accessory or component, as a stand-alone solution, or as part of the manufacturing process), it is regulated according to that application. For example, the 21st Century Cures Act amended the FD&C Act to exclude certain clinical decision support software functions from the definition of a device under the law based on their purpose, e.g., functions that provide notifications to help prevent duplicative testing or duplicate prescription orders.
Second, the Federal Trade Commission (FTC) Act gives the FTC investigative and certain law enforcement authority over unfair or deceptive acts or practices affecting interstate commerce; the FTC therefore has jurisdiction over a wide range of industries, including health care. The FTC has identified AI as a technology with the potential to harm consumers, such as through unfair or discriminatory outcomes or the perpetuation of existing socioeconomic disparities. The FTC’s business blog includes informal guidance on how the FTC applies its principles to AI and advises that AI tools should be “transparent, explainable, fair, and empirically sound, while fostering accountability.”
The third example is the Centers for Medicare and Medicaid Services (CMS), which issued regulations addressing Medicare Advantage plans’ use of algorithms, software or AI to perform utilization review and make medical necessity determinations. Under the federal rule, insurance companies must ensure that medical necessity determinations are based on the specific individual’s circumstances.
Because AI thrives on large datasets to inform and train its algorithms, health care entities that deploy or wish to deploy AI software must be cognizant of the relevant federal, state and international laws on data privacy, including the Health Insurance Portability and Accountability Act of 1996 (HIPAA), the General Data Protection Regulation (GDPR), and comprehensive state laws such as the California Consumer Privacy Act (CCPA) as amended by the California Privacy Rights Act (CPRA).
The Biden administration has also made it a priority to address the risks of AI. On October 30, 2023, President Biden issued Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The executive order will impact many business sectors and is a significant development in the regulation of AI in the United States. Regarding the health care industry, the U.S. Department of Health and Human Services (HHS) is required to create the HHS AI Task Force, charged with developing a strategic plan that includes policies, frameworks and regulatory action on the responsible deployment and use of AI and AI-enabled technologies in health care, including in the drug development process. HHS is also responsible for prioritizing grantmaking and other awards, including grants aimed at increasing the participation and representation of researchers and communities currently underrepresented in the development of AI and machine learning models.
Health care organizations can implement and use AI technologies efficiently and compliantly by treating the software like any other deployed technology, i.e., understanding where it will be used, the types of data it will rely on, and the important rights and obligations within its terms of use. A beneficial initial step is drafting an AI acceptable use policy that outlines the guidelines, rules and procedures governing the development, deployment and utilization of AI systems, and aligning that policy with the organization’s risk tolerance.
Internal AI policies can serve as a roadmap for employees, outlining permissible use cases, data handling practices, transparency requirements, and accountability measures concerning AI technologies. For risk management to be effective, organizations may need to establish and maintain new accountability mechanisms, including new roles and responsibilities for overseeing AI technologies, and perhaps even changes to culture and incentive structures. Accomplishing this may require commitment at senior levels and continuous employee training.
Despite its potential, the integration of AI in health care comes with challenges. Data privacy and cybersecurity safeguards are vital, as the sensitive nature of health data necessitates stringent protection measures. Additionally, ethical considerations regarding the use of AI in decision-making, including transparency, accountability and the avoidance of bias in AI algorithms, must be addressed to maintain trust in these technologies.
The integration of AI into health care is ushering in a new era of medical innovation. By enhancing diagnostic accuracy, personalizing treatment, improving operational efficiency and expanding access to health services, AI holds the promise of significantly improving health outcomes. As technology continues to advance, the collaboration between AI and health care professionals will be necessary to navigate the challenges and maximize the benefits of this transformative technology.
Richard J. Marinaccio, Partner at Phillips Lytle LLP and Leader of the firm’s Artificial Intelligence Team, can be reached at rmarinaccio@phillipslytle.com or (716) 504-5760.
Dorothy E. Shuldman is an attorney at Phillips Lytle LLP and a member of the firm’s Artificial Intelligence Team and Health Care and Life Sciences Team. She can be reached at dshuldman@phillipslytle.com or (716) 504-5778.