By now, most lawyers have heard of judges sanctioning lawyers for misuse of generative AI, typically for failing to fact-check its outputs. Other judges have issued local rules governing, or outright prohibiting, the use of AI. These actions have become prevalent. What has not been prevalent is judges encouraging the possible use of AI in interpreting contracts. Perhaps that will change because of a very thoughtful concurring opinion in an appeal to the 11th Circuit Court of Appeals in an insurance coverage dispute. In that opinion, Judge Newsom penned a 31-page concurrence focused on whether and how AI-powered large language models should be “considered” to inform the interpretive analysis used in determining the “ordinary meaning” of contract terms.
Continue Reading: Appellate Judge Proposes Possible Use of GenAI for Contract Interpretation – Recognizes That AI Hallucinates but Flesh-And-Blood Lawyers Do Too!

The NY State Bar Association (NYSBA) Task Force on Artificial Intelligence has issued a nearly 80-page report (the “Report”) and recommendations on the legal, social and ethical impact of artificial intelligence (AI) and generative AI on the legal profession. The detailed Report reviews AI-based software, generative AI technology and other machine learning tools that may enhance the profession, but that also pose risks stemming from individual attorneys’ unfamiliarity with new technology, as well as courts’ concerns about the integrity of the judicial process. It also makes recommendations for NYSBA adoption, including proposed guidelines for responsible AI use. The Report is perhaps the most comprehensive issued to date by a state bar association, and it is likely to stimulate much discussion.
Continue Reading: NY State Bar Association Joins Florida and California on AI Ethics Guidance – Suggests Some Surprising Implications

The Florida State Bar recently adopted an advisory opinion meant to provide attorneys with guidance on how to use generative artificial intelligence (“GenAI”) without running afoul of ethics rules. In doing so, Florida becomes one of the first state bars to issue formal guidance on this topic – second only to California.
Continue Reading: Florida Joins California in Adopting Ethical Guidelines for Attorney’s Use of Generative AI

The launch of ChatGPT, powered by GPT-3.5, in November 2022 set up 2023 as a year of rapid growth and early adoption of this transformative technology. ChatGPT reached 100 million users within two months of launch, setting a record for the fastest-growing user base of any technology tool. As of November 2023, the platform boasted an estimated 100 million weekly active users and roughly 1.7 billion users overall. Notably, ChatGPT is just one of a growing number of generative AI tools on the market. The pace of technical development and user adoption is unprecedented.
Continue Reading: Artificial Intelligence Legal Issues – 2023 Year in Review and Areas to Watch in 2024

A UK court has ruled that Getty Images’ lawsuit against Stability AI for copyright infringement over generative AI technology can proceed. Stability had sought to have the case dismissed, arguing, in part, that the AI models were trained in the US. However, the court relied on seemingly contradictory public statements by Stability’s CEO, including that Stability helped “fast track” UK residency applications of Russian and Ukrainian developers working on Stable Diffusion, suggesting that at least some development occurred in the UK. A similar case between the same parties is pending in the US. One significance of where the case is heard is that in the US, fair use can be a defense to copyright infringement, whereas in the UK it cannot. This is just one example of how disparate national laws relating to AI may cause AI developers to forum shop and develop AI where the laws are most favorable. For example, Japan has announced that it will not enforce copyrights on data used in AI training. If such activity is found to be infringing in the UK, US or elsewhere, it is conceivable that some companies will move their AI training activities to Japan.
Continue Reading: Getty Image’s AI Model Training Lawsuit in UK Against Stability to Proceed

President Joe Biden recently issued an executive order designed to establish minimum risk practices for the use of generative artificial intelligence (“AI”), with a focus on people’s rights and safety and with many consequences for employers. Businesses should be aware of these directives to agencies, especially as they may result in new regulations, agency guidance and enforcement actions that apply to their workers.
Continue Reading: What Employers Need to Know about the White House’s Executive Order on AI

The White House Executive Order on AI (“EO”) is comprehensive and covers a wide range of topics. We provided a summary here. It addresses many of the risks and problems that can arise with AI. One of the topics that raises many legal issues, particularly with generative AI (“genAI”), is intellectual property. Some of the IP issues include: i) whether training AI models on copyrighted content constitutes infringement; ii) whether the output of genAI that is based on copyright-protected training material constitutes infringement; iii) what level of human authorship/inventorship is required for copyright/patent protection of genAI-assisted works; iv) whether genAI tools that create art “in the style of” particular artists constitute copyright infringement and/or violate the right of publicity; v) whether genAI tools that are trained on copyright-protected materials must maintain copyright management information; and vi) whether genAI tools, such as AI code generators, that are trained on open source software must comply with the terms of the open source licenses.
Continue Reading: White House Executive Order on AI Punts on IP Issues

The growth of artificial intelligence (“AI”) and generative AI is moving copyright law into unprecedented territory. While US copyright law continues to develop around AI, one boundary has been set: the bedrock requirement of copyright is human authorship. Given this, it is clear that, in the US, AI alone cannot be an author. This bedrock principle was reinforced in two recent copyright decisions. But unanswered questions abound. For example, how will the Copyright Office address collaborative or joint works between a human and AI? And will this bedrock principle be limited to generative AI, or may it lead to revisiting copyright protection for other technologies where creative decisions are left to machines?
Continue Reading: Generative AI and Copyright – Some Recent Denials and Unanswered Questions

The rapid growth of generative AI (GAI) has taken the world by storm. The uses of GAI are many, as are the legal issues. If your employees are using GAI, they may be subjecting your company to many unwanted and potentially unnecessary legal risks. Some companies are simply saying no to employee use of AI. That is reminiscent of how some companies “managed” open source software use by employees years ago. Banning use of valuable technology is a “safer” approach, but it prevents a company from obtaining the many benefits of that technology. For many of the GAI-related legal issues, there are ways to manage the risks by developing a thoughtful policy on employee use of GAI.
Continue Reading: Microsoft to Indemnify Users of Copilot AI Software – Leveraging Indemnity to Help Manage Generative AI Legal Risk

As generative AI becomes an increasingly integral part of the modern economy, antitrust and consumer protection agencies continue to raise concerns about the technology’s potential to promote unfair methods of competition. Federal Trade Commission (“FTC”) Chair Lina Khan recently warned on national news that “AI could be used to turbocharge fraud and scams” and that the FTC is watching to ensure large companies do not use AI to “squash competition.”[1] The FTC has recently written numerous blogs on the subject,[2] signaling its intent to “use [the FTC’s] full range of tools to identify and address unfair methods of competition” that generative AI may create.[3] Similarly, Jonathan Kanter, head of the Antitrust Division at the Department of Justice (“DOJ”), said that the current model of AI “is inherently dependent on scale” and may “present a greater risk of having deep moats and barriers to entry.”[4] Kanter recently added that “there are all sorts of different ways to deploy machine learning technologies, and how it’s deployed can be different in the healthcare space, the energy space, the consumer tech space, the enterprise tech space,” and that antitrust enforcers should not be so intimidated by artificial intelligence and machine learning technology that they stop enforcing the laws.[5]
Continue Reading: AI Under the Antitrust Microscope: Competition Enforcers Focusing on Generative AI from All Angles

Generative AI (GAI) applications have raised numerous copyright issues. These include whether the training of GAI models constitutes infringement or is permitted under fair use, who is liable if the output infringes (the tool provider or the user) and whether the output is copyrightable. These are not the only legal issues that can arise. Another GAI issue that has arisen with various applications involves the right of publicity. A recently filed class action provides one example.
Continue Reading: Celebrity “Faces Off” Against Deep Fake AI App Over Right of Publicity