The White House’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“EO”) addresses many equity and civil rights issues raised by AI and mandates certain actions to ensure that AI advances equity and civil rights. The Fact Sheet accompanying the EO summarizes some of these issues and the actions directing various agencies to: Continue Reading Equity and Civil Rights Issues in the White House Executive Order on AI

On October 30, 2023, the White House issued an Executive Order focusing on safe, secure and trustworthy AI and laying out a national policy on AI. In stark contrast to the EU, which through the soon-to-be-enacted AI Act is focused primarily on regulating uses of AI that are unacceptable or high risk, the Executive Order focuses on responsible use of AI as well as on developers, the data they use and the tools they create. The goal is to ensure that AI systems used by government and the private sector are safe, secure, and trustworthy. The Executive Order seeks to enhance federal government use and deployment of AI, including to improve cybersecurity and U.S. defenses, and to promote innovation and competition so that the U.S. can maintain its position as a global leader on AI issues. It also emphasizes the importance of protections for various groups including consumers, patients, students, workers and kids. Continue Reading Flash Briefing on White House Executive Order on AI Regulation and Policy

Consistent with the White House’s Executive Order issued this week laying out a national policy on AI, the Federal Communications Commission (“FCC”) released a draft Notice of Inquiry (“NOI”) that would examine the implications of emerging Artificial Intelligence (“AI”) technologies for the Commission’s efforts to prevent unwanted and illegal calls and texts under the Telephone Consumer Protection Act (“TCPA”). Continue Reading FCC Launches Inquiry into the Risks of AI on Unwanted Robocalls and Texts

The White House Executive Order on AI (“EO”) is comprehensive and covers a wide range of topics. We provided a summary here. It addresses many of the risks and problems that can arise with AI. One topic that raises many legal issues, particularly with generative AI (“genAI”), is intellectual property. Some of the IP issues include: i) whether training AI models on copyrighted content constitutes infringement; ii) whether the output of genAI that is based on copyright-protected training material constitutes infringement; iii) what level of human authorship/inventorship is required for copyright/patent protection of genAI-assisted works; iv) whether genAI tools that create art “in the style of” particular artists constitute copyright infringement and/or violate the right of publicity; v) whether genAI tools that are trained on copyright-protected materials must maintain copyright management information; and vi) whether genAI tools, such as AI code generators, that are trained on open source software must comply with the terms of the open source licenses. Continue Reading White House Executive Order on AI Punts on IP Issues

The US just catapulted into being the world leader on regulating AI. Bypassing Congress, the White House issued an Executive Order focusing on safe, secure and trustworthy AI and laying out a national policy on AI. In stark contrast to the EU, which through the soon-to-be-enacted AI Act is focused primarily on regulating uses of AI that are unacceptable or high risk, the Executive Order focuses primarily on the developers, the data they use and the tools they create. The goal is to ensure that AI systems are safe, secure, and trustworthy before companies make them public. It also focuses on the protection of various groups including consumers, patients, students, workers and kids. Continue Reading White House Executive Order Ramps Up US Regulation of and Policy Toward AI

The expanded use of artificial intelligence (AI) in the delivery of health care continues to receive increased attention from lawmakers across the country. Although AI regulation is still in its early developmental stages, various efforts are underway to address the unintended negative consequences of AI technology, particularly in health care and other key sectors.[1] Of particular interest are regulatory efforts to restrict discrimination through AI and related technologies. Continue Reading At a Glance: Legal Efforts to Limit Discrimination Through AI

The growth of artificial intelligence (“AI”) and generative AI is moving copyright law into unprecedented territory. While US copyright law continues to develop around AI, one boundary has been set: the bedrock requirement of copyright is human authorship. Given this, it is clear that in the US, AI alone cannot be an author. This bedrock principle was reinforced in two recent copyright decisions. But unanswered questions abound. For example, how will the Copyright Office address collaborative or joint works between a human and AI? And will this bedrock principle be limited to generative AI, or may it lead to revisiting copyright protection for other technologies where creative decisions are left to machines? Continue Reading Generative AI and Copyright – Some Recent Denials and Unanswered Questions

The rapid growth of generative AI (GAI) has taken the world by storm. The uses of GAI are many, as are the legal issues. If your employees are using GAI, they may be subjecting your company to many unwanted and potentially unnecessary legal risks. Some companies are just saying no to employee use of AI. That is reminiscent of how some companies “managed” open source software use by employees years ago. Banning use of valuable technology is a “safer” approach, but it prevents a company from obtaining the many benefits of that technology. For many of the GAI-related legal issues, there are ways to manage the legal risks by developing a thoughtful policy on employee use of GAI. Continue Reading Microsoft to Indemnify Users of Copilot AI Software – Leveraging Indemnity to Help Manage Generative AI Legal Risk

Employers’ burgeoning use of and reliance upon artificial intelligence has paved the way for an increasing number of states to implement legislation governing its use in employment decisions. Illinois enacted first-of-its-kind legislation regulating the use of artificial intelligence in 2020, and, as previously discussed, New York City just recently enacted its own law. In 2023 alone, Massachusetts, Vermont and Washington, D.C. also have proposed legislation on this topic. These legislative guardrails are emblematic of our collective growing use of artificial intelligence, and they underscore the importance of understanding the legal issues this proliferating technology implicates and the need to keep abreast of the rapidly evolving legislative landscape. Below is a high-level summary of AI-related state legislation and proposals of which employers should be aware. Continue Reading States’ Increased Policing of Artificial Intelligence in the Workplace Serves as Important Reminder to Employers

As generative AI becomes an increasingly integral part of the modern economy, antitrust and consumer protection agencies continue to raise concerns about the technology’s potential to promote unfair methods of competition. Federal Trade Commission (“FTC”) Chair Lina Khan recently warned on national news that “AI could be used to turbocharge fraud and scams” and that the FTC is watching to ensure large companies do not use AI to “squash competition.”[1] The FTC has recently written numerous blogs on the subject,[2] signaling its intent to “use [the FTC’s] full range of tools to identify and address unfair methods of competition” that generative AI may create.[3] Similarly, Jonathan Kanter, head of the Antitrust Division at the Department of Justice (“DOJ”), said that the current model of AI “is inherently dependent on scale” and may “present a greater risk of having deep moats and barriers to entry.”[4] Kanter recently added that “there are all sorts of different ways to deploy machine learning technologies, and how it’s deployed can be different in the healthcare space, the energy space, the consumer tech space, the enterprise tech space,” and that antitrust enforcers shouldn’t be so intimidated by artificial intelligence and machine learning technology that they stop enforcing the laws.[5] Continue Reading AI Under the Antitrust Microscope: Competition Enforcers Focusing on Generative AI from All Angles