By now, you may have heard the story of the lawyers who filed a legal brief containing AI-generated work product. The problem for the lawyers, of course, was that the AI software had completely made up the legal precedent cited in the brief — that is, it engaged in “AI hallucination.” When the opposing party’s lawyers could not locate the precedent in traditional legal databases (because it did not exist), the episode caused quite a ruckus in the legal community.
That’s the bad news.
The good news is that, with reliable input, proper supervision and quality-control measures, AI can handle large volumes of data and repetitive tasks across an organization, freeing employees to focus on creative solutions, complex problem-solving and impactful work. Used properly, it can increase efficiency, streamline workflows and make life a lot easier.
When we say AI, we are referring to artificial intelligence. The type of AI at issue recently is generative AI — particularly generative AI built on machine learning and natural language processing, such as large language models trained on vast amounts of data. At a high level, generative AI uses algorithms to generate content based on the data set on which the model was trained. It takes input and instructions from a user and produces data as output. That output reflects the data and natural language patterns the model has learned; it is not the result of research performed in real time by the AI platform. Rather, it is based on data the platform was previously fed (sometimes years old and therefore potentially outdated). Because large language models are trained on a specific data set, the output is only as reliable as that underlying data set. If the underlying data is biased, inaccurate or incomplete, the output will reflect this. AI hallucinations are especially problematic because the output is delivered in an authoritative and convincing manner and can be blindly accepted as truth by an uninformed user.
Not surprisingly, AI raises many legal issues in the workplace—some more obvious than others. This article outlines just some potential legal pitfalls particular to the labor and employment context and suggests some ways to avoid them.
First, AI can be a great tool for finding the right candidate to join your team. It can create job descriptions, screen resumes for relevant skills and experience, administer pre-employment assessments such as skills tests and personality tests, and even analyze facial expressions and other nonverbal cues during a video interview to assess a candidate’s suitability for the position.
Because the content produced by generative AI is determined by the underlying training data (which is produced by humans), the output is susceptible to biases and flaws. According to recent EEOC guidance, employers are responsible under Title VII of the Civil Rights Act of 1964 (Title VII) for use of “algorithmic decision-making tools even if the tools are designed or administered by another entity, such as a software vendor[.]” For example, even if AI software is used to select the candidate pool, the employer may still be liable for disparate impact discrimination if the result is that persons in protected groups (e.g., race, sex or age) are hired at disproportionately lower rates compared to non-protected groups.
The “four-fifths rule” is a guidepost that can help identify possible disparate impact discrimination. Under the rule, disparate impact may be present when the selection rate for a protected group is less than 80% of the rate for non-protected groups. The EEOC guidance gives the following example: if an algorithm used for a personality test selects Black applicants at a rate of 30% and White applicants at a rate of 60%, the ratio of the two selection rates is 50% (30/60 = 50%). Because 50% is lower than four-fifths (80%) of the rate at which White applicants were selected, the result suggests disparate impact discrimination.
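The four-fifths comparison above is simple arithmetic, and an internal audit could automate it. As an illustrative sketch only — the function name and threshold handling are our own, and this is not legal advice or the EEOC’s methodology — a screening check might look like this:

```python
def four_fifths_check(protected_rate: float, comparison_rate: float,
                      threshold: float = 0.8) -> tuple[float, bool]:
    """Return the ratio of selection rates and whether it falls below the
    four-fifths (80%) rule of thumb.  Rates are fractions, e.g. 0.30 = 30%."""
    if comparison_rate == 0:
        raise ValueError("comparison group selection rate must be nonzero")
    ratio = protected_rate / comparison_rate
    # A ratio below the 80% threshold suggests possible disparate impact
    # and warrants further review -- it is not itself proof of discrimination.
    return ratio, ratio < threshold

# The EEOC's example: 30% vs. 60% selection rates
ratio, flagged = four_fifths_check(0.30, 0.60)
print(f"ratio = {ratio:.0%}, flagged = {flagged}")  # ratio = 50%, flagged = True
```

As the next paragraph notes, passing this check does not eliminate liability; a flag here should trigger a closer human review, not a mechanical conclusion.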
Importantly, compliance with the four-fifths rule does not eliminate potential liability; it is only a “rule of thumb.” Employers should also consider instituting regular internal audits of the candidate pools chosen by AI, and investigating any shift in interviewees’ demographics. For employers located in New York City, or with candidates or employees who reside there, such audits must be conducted before AI is used to substantially assist or replace discretionary decision-making in hiring or other employment decisions. New York City Local Law 144 requires that an independent bias audit be conducted before using such automated employment decision tools (AEDTs).
Next, there is the issue of data privacy, which can be complex. Numerous data privacy laws may be implicated by the use of generative AI in the workplace, including the Health Insurance Portability and Accountability Act (HIPAA), the Fair Credit Reporting Act (FCRA), the Uniform Trade Secrets Act and the Defend Trade Secrets Act, New York State’s SHIELD Act, the California Privacy Rights Act (CPRA) and the General Data Protection Regulation (GDPR), as well as biometric privacy laws, among others.
Most privacy laws require that notice be given before data is disclosed, while others require affirmative consent before data is collected, processed or shared. They may also confer various rights on candidates or employees, such as the right to delete, or to opt out of, the collection, processing or sharing of their data. For example, the GDPR gives data subjects the “right not to be subject to a decision based solely on automated processing, including profiling.” In New York City, Local Law 144 makes it unlawful for an employer to use AI for making certain employment decisions unless notice has been provided to “each such employee or candidate who resides in the city.”
Employers should be aware of and account for these laws in their compliance programs. Consider instituting an AI policy to safeguard personal, confidential or proprietary data and to protect against data leaks. Employers may also want to limit employee access to AI platforms so that protections over this data are not lost, the data is not inadvertently shared with unauthorized parties, and the integrity of employee work product is maintained.
To summarize, AI can be a useful workplace tool, and the technology is certainly exciting to witness. But its use can expose your business to myriad legal issues, including employment, privacy and intellectual property concerns, among others.
Richard J. Marinaccio is a partner at Phillips Lytle LLP and a leader of the firm’s Technology Industry Team and member of the Technology and Internet Law Practice Team. He can be reached at firstname.lastname@example.org or (716) 504-5760.
Anna Mercado Clark is a partner at Phillips Lytle LLP and leader of the firm’s Data Privacy and Cybersecurity Industry Team. She can be reached at email@example.com or (585) 238-2000 ext. 6466.
James R. O’Connor is an attorney at Phillips Lytle LLP and a member of the firm’s Labor and Employment Law and Litigation Practice Teams. He can be reached at firstname.lastname@example.org or (716) 504-5723.
1. Larry Neumeister, Lawyers Submitted Bogus Case Law Created by ChatGPT. A Judge Fined Them $5,000, AP News (June 22, 2023), https://apnews.com/article/artificial-intelligence-chatgpt-fake-case-lawyers-d6ae9fa79d0542db9e1455397aef381c.
2. See Neumeister, supra note 1.
3. U.S. Equal Emp. Opportunity Comm’n, Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964, EEOC, https://www.eeoc.gov/select-issues-assessing-adverse-impact-software-algorithms-and-artificial-intelligence-used (last visited July 28, 2023) (emphasis added).
4. See 29 C.F.R. § 1607.4(D) (2023).
5. U.S. Equal Emp. Opportunity Comm’n, supra note 3.
6. See Uniform Guidelines on Employee Selection Procedures, 43 Fed. Reg. 38,290, 38,291 (Aug. 25, 1978) (referring to the four-fifths rule as a “rule of thumb”); id. at 38,291 (explaining why the four-fifths rule was adopted as a “rule of thumb”).
7. N.Y.C. Local Law No. 144 (2021); N.Y.C. Admin. Code §§ 20-870 – 20-874 (2023). It also requires that notice be given to all City candidates and employees. Id.
8. GDPR, art. 22, 2016 O.J. L119/1.
9. N.Y.C. Local Law No. 144 (2021); N.Y.C. Admin. Code § 20-871 (2023).