As generative AI becomes an increasingly integral part of the modern economy, antitrust and consumer protection agencies continue to raise concerns about the technology’s potential to promote unfair methods of competition. Federal Trade Commission (“the FTC”) Chair Lina Khan recently warned on national news that “AI could be used to turbocharge fraud and scams” and that the FTC is watching to ensure large companies do not use AI to “squash competition.”[1] The FTC has recently published numerous blog posts on the subject,[2] signaling its intent to “use [the FTC’s] full range of tools to identify and address unfair methods of competition” that generative AI may create.[3] Similarly, Jonathan Kanter, head of the Antitrust Division at the Department of Justice (“the DOJ”), said that the current model of AI “is inherently dependent on scale” and may “present a greater risk of having deep moats and barriers to entry.”[4] Kanter recently added that “there are all sorts of different ways to deploy machine learning technologies, and how it’s deployed can be different in the healthcare space, the energy space, the consumer tech space, the enterprise tech space,” and that antitrust enforcers should not be so intimidated by artificial intelligence and machine learning technology that they stop enforcing the laws.[5]
Continue Reading AI Under the Antitrust Microscope: Competition Enforcers Focusing on Generative AI from All Angles

Generative AI (GAI) applications have raised numerous copyright issues. These issues include whether the training of GAI models constitutes infringement or is permitted under fair use, who is liable if the output infringes (the tool provider or the user), and whether the output is copyrightable. These are not the only legal issues that can arise. Another issue that has arisen with various GAI applications involves the right of publicity. A recently filed class action provides one example.
Continue Reading Celebrity “Faces Off” Against Deep Fake AI App Over Right of Publicity

Roblox recently announced that it is working on generative artificial intelligence (AI) tools that will help developers who build experiences on Roblox more easily create games and assets. The first two test tools generate content from a text prompt and use generative AI to complete computer code. This is just the tip of the iceberg for how generative AI will be used in games and a variety of other creative industries, including music, film, art, comic books, and literary works. AI tools are powerful, and their use will no doubt be far reaching, as will the associated legal issues in the near term. Some of the legal issues include:
Continue Reading How Generative AI Generates Legal Issues in the Games Industry
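As a rough illustration of the text-prompt pattern described above, the Python sketch below shows how a prompt might be turned into a structured asset description. The `AssetGenerator` class, its output format, and the example prompt are hypothetical stand-ins offered only to make the workflow concrete; they do not reflect Roblox’s actual tools or APIs.

```python
# Hypothetical sketch of prompt-driven asset generation.
# `AssetGenerator` is an illustrative stand-in, not any vendor's real API.

import json


class AssetGenerator:
    """Placeholder for a generative model that turns text prompts into assets."""

    def generate(self, prompt: str) -> dict:
        # A real tool would invoke a trained model here; this stub returns a
        # fixed structure so the example runs without any dependencies.
        return {
            "prompt": prompt,
            "asset_type": "material",
            "parameters": {"color": "rust", "roughness": 0.8},
        }


if __name__ == "__main__":
    generator = AssetGenerator()
    asset = generator.generate("weathered metal texture for a sci-fi corridor")
    print(json.dumps(asset, indent=2))
```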

The use of artificial intelligence (AI) is booming. Investors and companies are pouring cash into the space, and particularly into generative AI (GAI), to seize their share of a market that McKinsey reports could add up to $4.4 trillion annually to the global economy. Some companies are investing tens or hundreds of millions of dollars, or more, into GAI. Whether companies are building their own AI technology and training their own AI models or leveraging third-party tools, there are significant legal issues and business risks that directors need to consider as part of their fiduciary obligations and corporate governance. This article addresses five of the top issues to understand and consider; many other issues can arise. A wave of litigation and enforcement actions has swelled. Boards should get educated on these issues and ensure that appropriate policies and corporate governance are implemented to manage the business and legal risks.
Continue Reading 5 Things Corporate Boards Need to Know About Generative AI Risk Management

The Federal Trade Commission (FTC) has been active in enforcement actions involving various AI-related issues. For examples, see Training AI Models – Just Because It’s “Your” Data Doesn’t Mean You Can Use It and You Don’t Need a Machine to Predict What the FTC Might Do About Unsupported AI Claims. The FTC has also issued a report to Congress (Report) warning about various AI issues. The Report outlines significant concerns that AI tools can be inaccurate, biased, and discriminatory by design, and can incentivize reliance on increasingly invasive forms of commercial surveillance. Most recently, the FTC instituted an investigation into the generative AI (GAI) practices of OpenAI through a 20-page investigative demand letter (Letter).
Continue Reading The Need for Generative AI Development Policies and the FTC’s Investigative Demand to OpenAI

AI-based code generators are a powerful application of generative AI. These tools assist developers by using AI models to auto-complete or suggest code based on developer inputs or tests, as illustrated in the sketch below. They raise at least three types of potential legal issues:
Continue Reading Solving Open Source Problems with AI Code Generators – Legal Issues and Solutions
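For readers unfamiliar with how these tools fit into a developer’s workflow, the minimal Python sketch below illustrates the basic auto-complete pattern. The `CodeModel` class and `suggest_completion` helper are hypothetical stand-ins, not any particular vendor’s API; the point is simply that the developer supplies partial code or a test, and the model returns candidate code whose provenance, including any open source material it was trained on, may not be visible to the user.

```python
# Hypothetical sketch of an AI code-completion workflow.
# `CodeModel` stands in for whatever model or vendor API a team actually uses;
# it is not a real library.

from dataclasses import dataclass


@dataclass
class Suggestion:
    code: str          # the generated code snippet
    confidence: float  # the model's own score, if the tool exposes one


class CodeModel:
    """Placeholder for a generative code model."""

    def complete(self, prompt: str) -> Suggestion:
        # A real tool would call a trained model here. This stub returns a
        # canned implementation so the example runs end to end.
        return Suggestion(
            code=(
                "def slugify(title: str) -> str:\n"
                "    return '-'.join(title.lower().split())\n"
            ),
            confidence=0.87,
        )


def suggest_completion(model: CodeModel, partial_code: str) -> Suggestion:
    """Ask the model to finish the code the developer has started."""
    prompt = f"Complete the following Python function:\n{partial_code}"
    return model.complete(prompt)


if __name__ == "__main__":
    developer_input = "def slugify(title: str) -> str:\n    ..."
    suggestion = suggest_completion(CodeModel(), developer_input)
    # Note: the returned snippet arrives without license or attribution
    # metadata, which is where the legal questions discussed above begin.
    print(suggestion.code)
```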