The integration of artificial intelligence (AI) into various facets of our lives is becoming increasingly prevalent. From autonomous vehicles to personalized recommendation systems, AI has the potential to revolutionize industries and raise efficiency to new levels. With this transformative power, however, comes a myriad of legal and ethical considerations that must be addressed to ensure responsible development and deployment.
In response to AI’s rapid growth, calls for increased regulation have emerged, highlighting that only a handful of states have AI-related laws or resolutions and that there is no dedicated federal AI law. State governments are taking proactive measures to regulate this rapidly evolving technology within their jurisdictions. These state-level AI laws vary in their objectives, and many are integrated into broader consumer privacy laws. For example, North Dakota’s HB 1361, effective April 12, 2023, clarifies that AI does not qualify as a legal person. Indiana’s Consumer Data Protection Act, which takes effect in 2026, empowers consumers to opt out of profiling that informs automated decisions and requires data protection assessments for activities posing a heightened risk of harm.
In this article, we aim to provide insight into this patchwork of state laws and regulations by touching on common themes in enacted and proposed legislation. As the legal framework surrounding AI continues to evolve, understanding its implications for businesses, policymakers and society at large will be key.
Establishing task forces to study the use of AI in government is crucial for states seeking to harness the potential benefits of AI while mitigating the associated risks. AI technologies offer opportunities to enhance government efficiency, improve service delivery and optimize resource allocation, but their adoption in government also raises important considerations regarding accountability, transparency and equity. A number of states have established task forces for this purpose.
Task forces provide a dedicated forum for policymakers, experts and stakeholders to explore these complexities and to develop guidelines and best practices for responsible deployment. By studying AI use in government, task forces also facilitate knowledge-sharing among states, enabling them to learn from one another’s experiences, successes and challenges, and ultimately driving collective progress in leveraging AI to better serve citizens and advance public interests. By fostering collaboration and informed decision-making, these task forces can help shape responsible AI policies that maximize benefits while minimizing risks.
Requiring transparency of AI use through legislation helps address the growing concerns surrounding accountability, fairness and trust in AI systems across industries. Transparency allows individuals, businesses and communities to understand how AI technologies are being deployed and the potential impacts they may have on various aspects of society. By incorporating transparency into legislation, lawmakers can promote accountability among developers and users of AI systems and encourage them to adhere to ethical standards and best practices.
Importantly, transparency fosters public trust by enabling stakeholders to assess the reliability, accuracy and potential biases of AI algorithms and decision-making processes. This not only helps mitigate the risks of unintended consequences or discriminatory outcomes, but also empowers individuals to make informed choices and hold accountable those responsible for AI deployment. Ultimately, legislation requiring transparency of AI use serves to promote responsible innovation, protect fundamental rights and uphold public trust.
AI systems have the potential to perpetuate and even exacerbate existing biases and disparities if not properly regulated. Therefore, several states have enacted or proposed legislation aimed at preventing discrimination within AI algorithms and decision-making processes, so that such systems are designed and implemented in a manner that is unbiased and equitable.
By mandating fairness and nondiscrimination, such legislation promotes equal opportunities and treatment for all individuals. Robust legislative measures can mitigate the risks of discriminatory outcomes and increase public trust in AI technologies by demonstrating a commitment to ethical and responsible AI development and deployment.
The landscape of state laws on artificial intelligence reflects both the opportunities and the challenges presented by rapidly advancing AI technology. While some states have taken proactive measures to address AI’s ethical, legal and societal implications, others are still gathering information and navigating the complexities and nuances of regulation in this domain. As AI continues to permeate various sectors of society, it is crucial for lawmakers, industry stakeholders and the public to collaborate in developing robust frameworks that promote innovation while safeguarding against potential risks.
Richard J. Marinaccio is a partner at Phillips Lytle LLP and leader of the firm’s Artificial Intelligence Team. He can be reached at rmarinaccio@phillipslytle.com or (716) 504-5760.
Dorothy E. Shuldman is an attorney at Phillips Lytle LLP and a member of the firm’s Corporate and Business Law Practice and Intellectual Property Team, focusing on trademark and copyright law. She can be reached at dshuldman@phillipslytle.com or (716) 504-5778.