By now, most lawyers have heard of judges sanctioning lawyers for misuse of generative AI, typically for failing to fact-check its outputs. Other judges have issued local rules governing, or outright prohibiting, the use of AI. These actions have become prevalent. What has not been prevalent is judges encouraging the possible use of AI in interpreting contracts. Perhaps that will change because of a very thoughtful concurring opinion in an insurance coverage dispute on appeal to the 11th Circuit Court of Appeals. In that case, Judge Newsom penned a 31-page concurrence focused on whether and how AI-powered large language models should be “considered” to inform the interpretive analysis used in determining the “ordinary meaning” of contract terms. Continue Reading Appellate Judge Proposes Possible Use of GenAI for Contract Interpretation – Recognizes That AI Hallucinates but Flesh-And-Blood Lawyers Do Too!

Plaintiffs’ attorneys have filed a wave of lawsuits against various AI tools under a variety of legal theories. Most have had no success so far. Many of the asserted claims have been dismissed for failure to plead sufficient facts to state a claim or for being legally untenable. Some of these dismissals have been without prejudice, meaning the plaintiffs get another chance to properly plead a viable claim. In one of the most recent decisions, claims in a nearly 200-page amended complaint were dismissed without prejudice. In a terse decision, the Court excoriated the plaintiffs’ rambling complaint for its unnecessary length and distracting allegations, which, according to the Court, made “it nearly impossible to determine the adequacy of the plaintiffs’ legal claims.” The Court called out rhetoric and policy grievances that are not suitable for resolution by federal courts, including comparisons of AI’s risks to humanity to the risks posed by the development of nuclear weapons. Continue Reading Lawsuits Against Web Scraping to Train AI

We recently posted about the Jobiak case, which raises the interesting question of whether scraping an AI-generated database of job listings constitutes copyright infringement (among other claims). Plaintiff has submitted its opposition to the motion to dismiss, in which it raises the substantive arguments on the copyright claim set forth below. Continue Reading Jobiak’s Opposition to Motion to Dismiss Copyright Infringement Claims on AI-Created Database

A pending lawsuit raises an interesting copyright infringement question – does scraping an AI-generated database of job listings constitute copyright infringement?

In Jobiak v. Botmakers, Jobiak, an AI-based recruitment platform, offers a service for quickly and directly publishing job postings online, leveraging machine learning technology to optimize third-party job descriptions in real time and to generate an automated database of its job postings. Jobiak alleges copyright infringement (among other claims) because Botmakers scraped Jobiak’s proprietary database and incorporated its contents directly into its own job listings. Continue Reading Court to Decide Whether AI-scraped Job Database Is Subject to Copyright Protection and Is Infringed?

Is your M&A target a manufacturing company with automated production, a consumer products business with online sales and marketing, or an education company that creates content for students? The increasing use and development of artificial intelligence (“AI”) systems and products, particularly generative AI, has created risks for businesses using such tools. AI plays a role in many industries and businesses whose products and services are not themselves AI. In the context of an M&A transaction, it is important to identify and allocate responsibility for these risks. Risks of AI may include: infringement (including through use of training data as well as outputs), confidentiality, IP ownership and protection (including limits on protection of IP generated by AI), regulatory risks (e.g., privacy, recent AI-related legislation), and other risks arising from use, such as indemnity obligations or managing contractor use of AI. Continue Reading M&A Transactions: Drafting AI Representations and Warranties for Non-AI Companies

According to published reports, George Carlin’s estate has settled right of publicity and copyright claims relating to an AI-scripted comedy special that used a “sound-alike” of George Carlin to perform the generated script. The special – “I’m Glad I’m Dead” – sought to reflect how Carlin would have commented on current events since his death in 2008. While most of the settlement terms are confidential, the settlement is significant as one of the first resolutions of a case involving these issues. According to plaintiff’s lawyer, the defendants agreed to permanently remove the comedy special and never repost it on any platform. They also agreed not to use Mr. Carlin’s image, voice or likeness on any platform without approval from the estate. There is no indication of whether the settlement included monetary damages. Continue Reading George Carlin Was Funny – Copying His Likeness AIn’t – Estate Settles AI-based Right of Publicity and Copyright Claims

The Organisation for Economic Co-operation and Development (OECD), which establishes evidence-based international standards and develops advice on public policies, has issued an updated Recommendation on responsible AI (the “Recommendation”) to reflect technological and policy developments, including with respect to generative AI, and to further facilitate its implementation. Continue Reading OECD Updates Guidance on Responsible AI

The development of AI continues to advance at a blistering pace, increasing the need for companies to employ AI governance and adopt policies for the responsible development and deployment of AI. While the term “responsible AI” is frequently used, it is rarely well understood, and putting it into practice is often complex. Fortunately, a growing body of resources is becoming available to help companies understand and implement responsible AI. Two of the more recent resources are publications by NIST (the National Institute of Standards and Technology) and Microsoft. These publications provide examples of efforts by these institutions to develop best practices for responsible AI development. Continue Reading Responsible AI – Everyone is Talking About it But What Is It?

On March 28, 2024, the Office of Management and Budget (“OMB”) issued Memorandum M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (the “Memo”). This is the final version of a draft memorandum OMB released for public comment on November 1, 2023. The Memo primarily focuses on agency use of AI and outlines minimum practices for managing risks associated with the use of AI in the federal government. The Memo also provides recommendations for managing AI risks in federal procurement of AI that industry, specifically entities developing AI tools to sell to the federal government, should keep in mind. Continue Reading Better Safe Than Sorry: OMB Releases Memorandum on Managing AI Risks in the Federal Government