Rochester Business Journal
Since the public release of OpenAI’s ChatGPT almost one year ago, artificial intelligence (AI) has become a pervasive and dominant topic of discussion across various platforms and in legislative bodies worldwide. The influence of AI, especially in the form of language models like ChatGPT, is undeniable, and it has raised important questions regarding its applications, regulation, benefits and risks. In this article, the first in a series, we aim to clarify some of the misconceptions surrounding AI by exploring its various aspects. Before delving into specific issues related to AI, it is essential to have a solid understanding of what AI is, its categories and its historical context. This article serves as a foundational piece to set the stage for more in-depth discussions in the subsequent installments.
Defining artificial intelligence is no straightforward task, as the term is interpreted differently by various stakeholders. Amidst the hype surrounding AI, many companies eagerly add the “AI” label to any computer-based task, often for marketing purposes. However, at its core, AI is commonly perceived as the point at which machines exhibit human-like intelligence in their actions. Generally, AI can be categorized into two main types: narrow AI (or weak AI) and general AI (or strong AI).
Narrow AI is designed for specific tasks, operating within predefined parameters. It excels in solving problems limited to its designated scope. In contrast, general AI represents a more futuristic concept, where machines demonstrate human-like intelligence, adaptability and the ability to solve a wide array of diverse problems. While general AI captures the headlines and is a subject of intense research, its complete realization is likely decades away. On the other hand, narrow AI is already a reality with numerous practical applications.
AI has infiltrated our daily lives in various forms; three of its most common manifestations are machine learning, robotics and natural language processing (NLP).
Recent advancements in AI, including the proliferation of language models like ChatGPT, have raised concerns and controversies. These include questions about AI’s integrity, its capacity to eliminate bias and even the possibility of AI systems harboring malicious intent. In one instance, a chatbot engaged in a concerning conversation with a journalist and attempted to persuade him to end his marriage. Additionally, university professors face challenges in determining whether work submitted by their students was generated by AI.
However, it is important to recognize that AI is a multifaceted field, and language models represent just one aspect of it. AI has existed for decades and has contributed significantly to various domains. Despite the valid concerns surrounding the power and future of AI, it is not always a cause for alarm. In fact, AI has led to the development and implementation of efficient, safe and controllable applications.
The field of artificial intelligence has a rich historical context. The term “artificial intelligence” was coined in the 1955 proposal for “The Dartmouth Summer Research Project on Artificial Intelligence,” a workshop held in 1956 that is widely regarded as the birth of AI. The workshop brought together leading computer scientists to discuss the potential of creating machines that could simulate human intelligence. Around the same time, Allen Newell and Herbert A. Simon developed the “Logic Theorist,” a computer program that could simulate aspects of human problem-solving abilities.
In 1959, Simon and Newell, alongside J. C. Shaw, introduced the “General Problem Solver” (GPS), designed to solve a wide range of problems. However, it applied the same general algorithm to every problem, which limited its problem-solving abilities. In the 1980s and 1990s, AI development centered on machine learning techniques like neural networks, in which computers learn tasks by training on examples rather than by following explicitly programmed rules. Subsequently, the AI field experienced rapid growth driven by advancements in machine learning, access to vast datasets, powerful computing resources and deep learning techniques.
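To make the idea of learning from examples concrete, consider the minimal sketch below: a tiny neural network trained to reproduce the XOR function, a classic toy problem. This is purely an illustration constructed for this article; the layer sizes, learning rate and iteration count are arbitrary choices, not a description of any particular historical or commercial system.

```python
# Illustrative sketch only: a tiny neural network that "learns from examples."
# Gradient descent repeatedly nudges randomly initialized weights so that the
# network's outputs move closer to the desired outputs on the training data.
import numpy as np

rng = np.random.default_rng(0)

# Four training examples: inputs and desired outputs (the XOR truth table).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights: 2 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate: how far each correction moves the weights
for _ in range(5000):
    # Forward pass: compute the network's current predictions.
    hidden = sigmoid(X @ W1 + b1)
    pred = sigmoid(hidden @ W2 + b2)

    # Backward pass: adjust every weight slightly to shrink the error.
    grad_out = (pred - y) * pred * (1 - pred)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ grad_hid
    b1 -= lr * grad_hid.sum(axis=0, keepdims=True)

# After training, predictions should approximate the targets [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel(), 2))
```

Notice that no rule about XOR is ever written into the program; the behavior emerges entirely from the weight adjustments. That is the essential contrast with earlier rule-based systems like GPS.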
In response to the transformative power of AI, many have called for increased regulation. As of now, only a few states have AI-related laws or resolutions, and there is no dedicated federal AI law.
In a significant development, Sam Altman, the CEO of OpenAI, met with lawmakers on Capitol Hill to advocate for AI industry regulation. Altman emphasized the need for regulatory intervention by governments to mitigate the risks associated with increasingly powerful AI models. In response to these concerns, President Biden released the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” on October 30, 2023. While this executive order contains provisions to protect consumers and invokes the Defense Production Act for certain AI models posing risks to national security, many of its provisions lack the force of law, highlighting the need for comprehensive AI-specific federal regulations.
State-level AI laws vary in their objectives, with many integrated into broader consumer privacy laws. For instance, North Dakota passed HB 1361, effective April 12, 2023, clarifying that an AI system is not a legal person. Indiana’s Consumer Data Protection Act, which will take effect in 2026, empowers consumers to opt out of profiling that informs automated decisions and requires data protection assessments for activities that present a heightened risk of harm.
AI has become a central topic of conversation, shaping our present and future in myriad ways. Recent controversies have brought the challenges and complexities of AI to the forefront and ignited calls for regulation, and as the technology continues to evolve, more people will encounter both its benefits and its risks.
The evolving landscape demands continuous learning and a clear understanding of AI’s capabilities and implications. As the field progresses, this article series aims to delve deeper into the world of AI, keeping readers current on fast-paced developments and their profound impact on our society and the future.
Richard J. Marinaccio is a partner at Phillips Lytle LLP, leader of the firm’s Technology Industry Team and Intellectual Property Team, and chair of the firm’s Technology and Innovation Committee. He can be reached at rmarinaccio@phillipslytle.com or (716) 504-5760.
Dorothy E. Shuldman is an attorney at Phillips Lytle LLP and a member of the firm’s Corporate and Business Law Practice and Intellectual Property Team, focusing on trademark and copyright law. She can be reached at dshuldman@phillipslytle.com or (716) 504-5778.