Artificial Intelligence

The US just catapulted into being the world leader on regulating AI. Bypassing Congress, the White House issued an Executive Order focusing on safe, secure, and trustworthy AI and laying out a national policy on AI. In stark contrast to the EU, which through the soon-to-be-enacted AI Act is focused primarily on regulating uses of AI that are unacceptable or high risk, the Executive Order focuses primarily on the developers, the data they use, and the tools they create. The goal is to ensure that AI systems are safe, secure, and trustworthy before companies make them public. It also focuses on the protection of various groups, including consumers, patients, students, workers, and kids. Continue Reading White House Executive Order Ramps Up US Regulation of and Policy Toward AI

The expanded use of artificial intelligence (AI) in the delivery of health care continues to receive increased attention from lawmakers across the country. Although AI regulation is still in its early developmental stages, there are various efforts underway to address the unintended negative consequences caused by AI technology, particularly in health care and other key sectors.[1] Of particular interest are regulatory efforts to restrict discrimination through AI and related technologies. Continue Reading At a Glance: Legal Efforts to Limit Discrimination Through AI

Employers’ burgeoning use of and reliance upon artificial intelligence has paved the way for an increasing number of states to implement legislation governing its use in employment decisions. Illinois enacted first-of-its-kind legislation regulating the use of artificial intelligence in 2020, and as previously discussed, New York City just recently enacted its own law. In 2023 alone, Massachusetts, Vermont and Washington, D.C. also have proposed legislation on this topic. These legislative guardrails are emblematic of our collective growing use of artificial intelligence, underscore the importance of understanding the legal issues this proliferating technology implicates, and highlight the need to keep abreast of the rapidly evolving legislative landscape. Below is a high-level summary of AI-related state legislation and proposals of which employers should be aware. Continue Reading States’ Increased Policing of Artificial Intelligence in the Workplace Serves as Important Reminder to Employers

The U.S. Congress has introduced a bipartisan bill that would create a National AI Commission (“Commission”). A focus of the Commission will be to ensure that, through regulation, the United States is mitigating the risks and possible harms of AI, protecting its leadership in AI innovation, and taking a leading role in establishing necessary, long-term guardrails. Additionally, it will review the Federal Government’s current approach to artificial intelligence oversight and regulation, how that approach is distributed across agencies, and the capacity and alignment of agencies to address such oversight and regulation. Continue Reading Congress Proposes National Commission to Create AI Guardrails

Many companies are sitting on a trove of customer data and are realizing that this data can be valuable to train AI models. However, what some companies have not thought through is whether they can actually use that data for this purpose. Often this data was collected over many years, long before a company thought to use it for training AI. The potential problem is that the privacy policies in effect when the data was collected may not have contemplated this use. Using customer data in a manner that exceeds or otherwise is not permitted by the privacy policy in effect at the time the data was collected can be problematic. This has led to class action lawsuits and enforcement by the FTC. In some cases, the FTC has imposed a penalty known as “algorithmic disgorgement” on companies that use data to train AI models without proper authorization. This penalty is severe, as it requires deletion of the data, the models, and the algorithms built with it. This can be an incredibly costly result. Continue Reading Training AI Models – Just Because It’s Your Data Doesn’t Mean You Can Use It

As we previously reported, the Equal Employment Opportunity Commission (“EEOC”) has had on its radar potential harms that may result from the use of artificial intelligence technology (“AI”) in the workplace. While some jurisdictions have already enacted requirements and restrictions on the use of AI decision-making tools in employee selection methods,[1] on May 18, 2023, the EEOC updated its guidance on the use of AI for employment-related decisions, issuing a technical assistance document titled “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964” (“Updated Guidance”). The Updated Guidance comes almost a year after the EEOC published related guidance explaining how employers’ use of algorithmic decision-making tools may violate the Americans with Disabilities Act (“ADA”). The Updated Guidance instead focuses on how the use of AI may implicate Title VII of the Civil Rights Act of 1964, which prohibits employment discrimination based on race, color, religion, sex, and national origin. In particular, the EEOC focuses on the disparate impact AI may have on “selection procedures” for hiring, firing, and promoting. Continue Reading The Use of Artificial Intelligence in Employee Selection Procedures: Updated Guidance From the EEOC

The National Telecommunications and Information Administration (NTIA) has issued a Request for Comments (RFC) on Artificial Intelligence (“AI”) system accountability measures and policies to advance its efforts to ensure AI systems work as claimed and without causing harm. The RFC is targeting self-regulatory, regulatory, and other measures and policies to provide reliable evidence that AI systems are legal, effective, ethical, safe, and otherwise trustworthy. It is also seeking policies that can support the development of AI audits, assessments, certifications, and other mechanisms that create earned trust in AI systems by verifying that they work as claimed (similar to how financial audits create trust in financial statements). Continue Reading Another Federal Agency Issues Request for Comments on AI

The rapid rise of AI in advertising, marketing, and other consumer-facing applications has caused the FTC to continue taking notice and issuing guidance. For example, the FTC is concerned about false or unsubstantiated claims about an AI product’s efficacy, and it has issued AI-related guidance in the past. The following is some recent FTC guidance to consider when referencing AI in your advertising. This guidance is not necessarily new, but the fact that it is being reiterated should be a signal that the FTC continues to focus on this area and that enforcement actions may be forthcoming. In fact, the recent guidance states: “AI is important, and so are the claims you make about it. You don’t need a machine to predict what the FTC might do when those claims are unsupported.” Continue Reading You Don’t Need a Machine to Predict What the FTC Might Do About Unsupported AI Claims

Use of AI technology can impact your rights and liabilities in ways that may not even occur to you. And whether you are aware of it or not, your employees and vendors may be using generative AI tools in the performance of their duties in ways that can significantly impact you. The FTC has made clear that ignorance is not bliss when it comes to your liability associated with use of these tools. That is why it is more important now than ever to factor these AI technology legal implications into your company’s governance and risk management, including by updating your employee policies and third-party agreements. This article provides a non-exhaustive list of examples of legal issues that are implicated by the use of this powerful technology and practice tips for risk management. Continue Reading AI Technology – Governance and Risk Management: Why Your Employee Policies and Third-Party Contracts Should be Updated