Rochester Business Journal
On October 30, 2023, President Biden issued the landmark Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The executive order will impact many business sectors and is a significant development in the regulation of AI in the United States. While the U.S. presently has no overarching federal regulation on AI, the executive order creates a comprehensive initiative across the federal government with the goal of overseeing the responsible development and implementation of AI. This article aims to shed light on key aspects of the executive order and its implications for businesses, providing insights into the evolving federal regulatory landscape.
AI has become a transformative force across various sectors, prompting the need for comprehensive regulation to ensure ethical, responsible and secure development and deployment. In response, some states have enacted AI-specific regulation or incorporated AI-related requirements into their data privacy and cybersecurity laws. While comprehensive federal AI legislation is still in progress, several bills have been introduced to address specific aspects of AI regulation. For instance, the National Defense Authorization Act for Fiscal Year 2023 contains provisions related to AI in defense applications, including intelligence collection and analysis and cybersecurity considerations, as well as investment in training for government employees. Further, in September, the U.S. Senate made clear its intention to pass sweeping federal AI legislation by holding simultaneous hearings in the Senate Judiciary Committee’s Subcommittee on Privacy, Technology and the Law and the Senate Committee on Commerce, Science, and Transportation. The bipartisan proposals include regulating various aspects of AI development, addressing transparency and establishing a framework to mitigate the risks of AI.
The executive order builds upon prior commitments from the Biden administration. In October 2022, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights, setting out five principles to guide the design, use and deployment of automated systems to protect Americans amidst AI’s rapid growth. Those principles are: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback.
Following this, the National Institute of Standards and Technology (NIST), part of the U.S. Department of Commerce, published the AI Risk Management Framework.
To bring uniformity, the executive order includes explicit definitions of AI industry-specific terms. For instance, it adopts the definition of AI from 15 U.S.C. § 9401(3): “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to (A) perceive real and virtual environments; (B) abstract such perceptions into models through analysis in an automated manner; and (C) use model inference to formulate options for information or action.” As another example, an “AI system” is “any data system, software, hardware, application, tool, or utility that operates in whole or in part using AI.” Based on this broad definition, it appears an AI system could include many, if not all, of the AI and AI-enabled tools that are increasingly available and prevalent across a multitude of industries, including in the delivery of health care, the development of pharmaceuticals and manufacturing operations.
The executive order directs over 50 federal entities to undertake more than 100 specific actions across eight overarching policy areas. However, many of the executive order’s provisions do not carry the force of law, given the limits of the executive branch’s power. Despite limitations on the enforceability of certain provisions, the order emphasizes the federal government’s commitment to protecting consumers from fraud, unintended bias, discrimination, privacy infringements and other AI-related harms.
The executive order addresses recent concerns surrounding generative AI (as noted in our previous RBJ AI Demystified article) by tasking NIST with developing companion resources to the AI Risk Management Framework and the Secure Software Development Framework devoted to generative AI. Further, it invokes the Defense Production Act, requiring companies to provide the Department of Commerce with information on the development of certain AI models — defined as dual-use foundation models — that pose a serious risk to security, national economic security, or national public health or safety.
Cybersecurity emerges as a focal point of the executive order. The Department of Homeland Security, collaborating with other Sector Risk Management Agencies, is set to perform a risk evaluation concerning potential threats arising from the utilization and implementation of AI. This assessment will specifically address vulnerabilities in critical infrastructure systems, encompassing the potential for critical failures, physical attacks and cyberattacks, with an emphasis on identifying and proposing effective measures to mitigate these vulnerabilities. The secretary of the Treasury is tasked with issuing a public report on best practices for financial institutions to manage AI-specific cybersecurity risks.
The executive order contains various policies impacting the health care industry. For example, the Department of Health and Human Services (HHS) is required to create the HHS AI Task Force with the purpose of developing a strategic plan that includes policies, frameworks and regulatory action on the responsible deployment and use of AI and AI-enabled technologies in health care, including in the drug development process. HHS is also responsible for prioritizing grantmaking and other awards, including grants awarded with the goal of increasing the participation and representation of researchers and communities currently underrepresented in the development of AI and machine learning models.
The executive order addresses concerns within the labor market about job displacement and competition in the AI marketplace, and attempts to mitigate harms to consumers and workers that may be exacerbated by the use of AI. To provide the Biden administration with a more thorough analysis, the executive order requires:
The Biden administration’s executive order on AI marks a significant stride toward harnessing the potential of this transformative technology while also addressing critical ethical and societal considerations. By prioritizing principles of safety, fairness, accountability and transparency, the administration aims to ensure that AI advancements align with American values and serve the collective well-being. The emphasis on collaboration between government agencies and other partners underscores a commitment to fostering innovation and global leadership in AI. As we navigate the evolving landscape of artificial intelligence, the White House’s proactive approach lays the groundwork for a future where technology not only drives economic growth but also upholds fundamental principles of equity, safety and ethical use. It is now incumbent upon stakeholders at all levels to actively engage in the implementation of these policies, fostering an environment that propels AI development responsibly and ethically for the benefit of society at large.
We will be committed to keeping you informed of all of the latest updates as the regulatory landscape surrounding AI continues to evolve. Stay tuned for future articles in this series, where we will delve deeper into the developments and implications surrounding the executive order and other laws and regulations. We are dedicated to being a resource to you as you navigate the world of AI.
Richard J. Marinaccio is a partner at Phillips Lytle LLP and leader of the firm’s Technology Industry Team, Intellectual Property Team and chair of the firm’s Technology and Innovation Committee. He can be reached at rmarinaccio@phillipslytle.com or (716) 504-5760.
Dorothy E. Shuldman is an attorney at Phillips Lytle LLP and a member of the firm’s Corporate and Business Law Practice and Intellectual Property Team, focusing on trademark and copyright law. She can be reached at dshuldman@phillipslytle.com or (716) 504-5778.