Mar 27, 2024

AI Demystified: State Legislative Trends

Rochester Business Journal


The integration of artificial intelligence (AI) into various facets of our lives is becoming increasingly prevalent. From autonomous vehicles to personalized recommendation systems, AI has the potential to revolutionize industries and take efficiency to new levels. Nevertheless, with this transformative power comes a myriad of legal and ethical considerations that must be addressed to allow for responsible development and deployment.

In response to AI’s rapid growth, calls for increased regulation have emerged, noting that only a handful of states have AI-related laws or resolutions and that there is no dedicated federal AI law. State governments are taking proactive measures to regulate this rapidly evolving technology within their jurisdictions. These state-level AI laws vary in their objectives, and many are integrated into broader consumer privacy laws. For example, North Dakota passed HB 1361, effective April 12, 2023, clarifying that AI is not a legal entity akin to a person. Indiana’s Consumer Data Protection Act, which takes effect in 2026, empowers consumers to opt out of profiling that informs automated decisions and requires data protection assessments for activities that pose a heightened risk of harm.

In this article, we aim to provide insight into this patchwork of state laws and regulations by touching on common themes in enacted and proposed legislation. Because the legal framework surrounding AI is continuously evolving, understanding its implications for businesses, policymakers and society at large will be key.

Task Forces

Establishing task forces to study the use of AI in government is crucial for states seeking to harness the technology’s potential benefits while mitigating associated risks. While AI technologies offer opportunities to enhance government efficiency, improve service delivery and optimize resource allocation, the adoption of AI in government also raises important considerations regarding accountability, transparency and equity. Examples of such task forces include:

  • Illinois: Enacted HB 3563 in 2023 to establish the Generative AI and Natural Language Processing Task Force.
  • Texas: Enacted HB 2060 in 2023 to create the Artificial Intelligence Advisory Council, which studies and monitors artificial intelligence systems developed, employed or procured by state agencies.
  • Vermont: An early adopter of the task force initiative, Vermont established its Artificial Intelligence Task Force in 2018 with the passage of H.378.
  • Washington: Enacted SB 5092 in 2021, which establishes a work group, convened by the office of the chief information officer, to conduct analysis and develop recommendations for state law and policy regarding public agencies’ development, procurement and use of automated decision systems.

Task forces provide a dedicated forum for policymakers, experts and stakeholders to explore these complexities and develop guidelines and best practices to ensure responsible deployment. Moreover, by studying AI use in government, task forces can facilitate knowledge-sharing among states, enabling them to learn from each other’s experiences, successes and challenges, and ultimately drive collective progress in leveraging AI to better serve citizens and advance public interests. By fostering collaboration and informed decision-making, these task forces can help shape responsible AI policies that maximize benefits while minimizing risks for citizens and communities.

Transparency

Requiring transparency of AI use through legislation helps address the growing concerns surrounding accountability, fairness and trust in AI systems across industries. Transparency allows individuals, businesses and communities to understand how AI technologies are being deployed and the potential impacts they may have on various aspects of society. By incorporating transparency into legislation, lawmakers can promote accountability among developers and users of AI systems and encourage them to adhere to ethical standards and best practices.

  • California: The newly introduced Senate Bill 896, the Artificial Intelligence Accountability Act, would mandate a report detailing the benefits and risks of generative AI and would require certain entities to conduct a risk analysis of potential threats posed by the use of generative AI to California’s critical energy infrastructure. It would also require a state agency or department that uses generative AI to communicate directly with a person to disclose that the interaction is being conducted through artificial intelligence.
  • Connecticut: On June 7, 2023, Connecticut enacted Senate Bill No. 1103, “An Act Concerning Artificial Intelligence, Automated Decision-Making and Personal Data Privacy,” with the aim of increasing transparency and accountability in the state’s use of AI. Among other things, the law requires assessments of AI systems used by state agencies and the creation of policies and procedures governing the development, procurement, implementation and utilization of AI systems.
  • Colorado: The Colorado Division of Insurance promulgated Final Regulation 3 CCR 702-10, which establishes requirements for Colorado-licensed life insurers on the use of external consumer data and information sources (ECDIS), as well as algorithms and predictive models that use ECDIS. Such insurers must adopt a governance and risk management framework for their use of ECDIS.
  • Illinois: Under Illinois’s HB 0053, which amends the Artificial Intelligence Video Interview Act, employers that rely solely upon AI to determine whether an applicant qualifies for an in-person interview must gather and report certain demographic information to the Department of Commerce and Economic Opportunity. The department must analyze this data and report to the governor and general assembly on whether the data discloses a racial bias in the use of artificial intelligence.

Importantly, transparency fosters public trust by enabling stakeholders to assess the reliability, accuracy and potential biases of AI algorithms and decision-making processes. This not only helps mitigate the risks of unintended consequences or discriminatory outcomes, but also empowers individuals to make informed choices and hold accountable those responsible for AI deployment. Ultimately, legislation requiring transparency of AI use serves to promote responsible innovation, protect fundamental rights and uphold public trust.

Anti-Discrimination

AI systems have the potential to perpetuate and even exacerbate existing biases and disparities if not properly regulated. Therefore, several states have enacted or proposed legislation aimed at preventing discrimination within AI algorithms and decision-making processes, so that such systems are designed and implemented in a manner that is unbiased and equitable.

  • California: Introduced in February 2024, AB 2930 would prohibit discrimination by AI systems, particularly algorithmic discrimination. The bill provides the state attorney general and public attorneys with the ability to sue businesses for discrimination; a previous version of the bill allowed for a private right of action, but that provision was removed from the current version. The bill focuses on AI systems used to make “consequential decisions,” those with serious implications for individuals, such as decisions involving education, housing, employment, essential utilities, adoption services, health care or health insurance.
  • Maine: In 2022, Maine enacted “An Act To Promote Equity in Policy Making by Enhancing the State’s Ability To Collect, Analyze and Apply Data,” establishing a data governance program. The act requires the secretary of state, or their designee, and the chief information officer to consult quarterly with the Permanent Commission on the Status of Racial, Indigenous and Tribal Populations and the state archivist on methods for building racial equity considerations into the program, including data algorithms and statistical tools. Requirements of the program include the promotion of consistent collection of racial and ethnic demographic data and the development of policies aimed at reducing disparities and increasing equity.
  • New York City: Local Law 144 (the “AI Law”) went into effect on July 5, 2023. It makes it unlawful for employers to use automated employment decision tools (AEDTs) to screen candidates and employees residing in New York City unless certain bias audit and notice requirements are met.
  • New Jersey: Assembly Bill 537, which is currently pending, would require automobile insurers to provide annual documentation to policyholders demonstrating that the insurer’s automated underwriting system does not produce discriminatory outcomes.

By mandating fairness and nondiscrimination, such legislation promotes equal opportunities and treatment for all individuals. Robust legislative measures can mitigate the risks of discriminatory outcomes and increase public trust in AI technologies by demonstrating a commitment to ethical and responsible AI development and deployment.

The landscape of state laws on artificial intelligence reflects both the opportunities and challenges presented by rapidly advancing AI technology. While some states have taken proactive measures to address AI’s ethical, legal and societal implications, others are still gathering information and navigating the complexities and nuances of regulation in this domain. As AI continues to permeate various sectors of society, it is crucial for lawmakers, industry stakeholders and the public to collaborate in developing robust frameworks that promote innovation while safeguarding against potential risks.

Richard J. Marinaccio is a partner at Phillips Lytle LLP and leader of the firm’s Artificial Intelligence Team. He can be reached at rmarinaccio@phillipslytle.com or (716) 504-5760.

Dorothy E. Shuldman is an attorney at Phillips Lytle LLP and a member of the firm’s Corporate and Business Law Practice and Intellectual Property Team, focusing on trademark and copyright law. She can be reached at dshuldman@phillipslytle.com or (716) 504-5778.
