President Joe Biden recently issued an executive order designed to establish minimum risk practices for the use of generative artificial intelligence (“AI”), with a focus on people’s rights and safety and with many consequences for employers. Businesses should be aware of these directives to agencies, especially as they may result in new regulations, agency guidance, and enforcement actions that apply to their workers.
Continue Reading What Employers Need to Know about the White House’s Executive Order on AI

The White House Executive Order on AI (“EO”) is comprehensive and covers a wide range of topics. We provided a summary here. It addresses many of the risks and problems that can arise with AI. One topic that raises many legal issues, particularly with generative AI (“genAI”), is intellectual property. Some of the IP issues include: i) whether training AI models on copyrighted content constitutes infringement; ii) whether the output of genAI that is based on copyright-protected training material constitutes infringement; iii) what level of human authorship/inventorship is required for copyright/patent protection of genAI-assisted works; iv) whether genAI tools that create art “in the style of” particular artists constitute copyright infringement and/or violate the right of publicity; v) whether genAI tools that are trained on copyright-protected materials must maintain copyright management information; and vi) whether genAI tools, such as AI code generators, that are trained on open source software must comply with the terms of the open source licenses.
Continue Reading White House Executive Order on AI Punts on IP Issues

The growth of artificial intelligence (“AI”) and generative AI is moving copyright law into unprecedented territory. While US copyright law continues to develop around AI, one boundary has been set: the bedrock requirement of copyright is human authorship. Given this, it is clear that, in the US, AI alone cannot be an author. This bedrock principle was reinforced in two recent copyright decisions. But unanswered questions abound. For example, how will the Copyright Office address collaborative or joint works between a human and AI? And will this bedrock principle be limited to generative AI, or may it lead to revisiting copyright protection for other technologies where creative decisions are left to machines?
Continue Reading Generative AI and Copyright – Some Recent Denials and Unanswered Questions

The rapid growth of generative AI (GAI) has taken the world by storm. The uses of GAI are many, as are the legal issues. If your employees are using GAI, they may be subjecting your company to many unwanted and potentially unnecessary legal risks. Some companies are simply saying no to employee use of GAI. That is reminiscent of how some companies “managed” open source software use by employees years ago. Banning use of valuable technology is a “safer” approach, but it prevents a company from obtaining the many benefits of that technology. For many of the GAI-related legal issues, there are ways to manage the legal risks by developing a thoughtful policy on employee use of GAI.
Continue Reading Microsoft to Indemnify Users of Copilot AI Software – Leveraging Indemnity to Help Manage Generative AI Legal Risk

As generative AI becomes an increasingly integral part of the modern economy, antitrust and consumer protection agencies continue to raise concerns about the technology’s potential to promote unfair methods of competition. Federal Trade Commission (“FTC”) Chair Lina Khan recently warned on national news that “AI could be used to turbocharge fraud and scams” and that the FTC is watching to ensure large companies do not use AI to “squash competition.”[1] The FTC has recently written numerous blogs on the subject,[2] signaling its intent to “use [the FTC’s] full range of tools to identify and address unfair methods of competition” that generative AI may create.[3] Similarly, Jonathan Kanter, head of the Antitrust Division at the Department of Justice (“DOJ”), said that the current model of AI “is inherently dependent on scale” and may “present a greater risk of having deep moats and barriers to entry.”[4] Kanter recently added that “there are all sorts of different ways to deploy machine learning technologies, and how it’s deployed can be different in the healthcare space, the energy space, the consumer tech space, the enterprise tech space,” and that antitrust enforcers should not be so intimidated by artificial intelligence and machine learning technology that they stop enforcing the laws.[5]
Continue Reading AI Under the Antitrust Microscope: Competition Enforcers Focusing on Generative AI from All Angles

Generative AI (GAI) applications have raised numerous copyright issues. These issues include whether the training of GAI models constitutes infringement or is permitted under fair use, who is liable if the output infringes (the tool provider or the user), and whether the output is copyrightable. These are not the only legal issues that can arise. Another GAI issue that has arisen with various applications involves the right of publicity. A recently filed class action provides one example.
Continue Reading Celebrity “Faces Off” Against Deep Fake AI App Over Right of Publicity

Roblox recently announced that it is working on generative artificial intelligence (AI) tools that will help developers who build experiences on Roblox more easily create games and assets. The first two test tools create generative AI content from a text prompt and enable generative AI to complete computer code. This is just the tip of the iceberg of how generative AI will be used in games and a variety of other creative industries, including music, film, art, comic books, and literary works. AI tools are powerful, and their use will no doubt be far reaching. In the near term, so too will the associated legal issues. Some of the legal issues include:
Continue Reading How Generative AI Generates Legal Issues in the Games Industry

The use of artificial intelligence (AI) is booming. Investors and companies are pouring cash into the space, and particularly into generative AI (GAI), to seize their share of a market that McKinsey reports could add up to $4.4 trillion annually to the global economy. Some companies are investing tens or hundreds of millions of dollars or more into GAI. Whether companies are building their own AI technology and training their own AI models or leveraging third-party tools, there are significant legal issues and business risks that directors need to consider as part of their fiduciary obligations and corporate governance. Five of the top issues to understand and consider are addressed in this article; many other issues can arise. A wave of litigation and enforcement actions has swelled. Boards should get educated on these issues and ensure appropriate policies and corporate governance are implemented to manage the business and legal risks.
Continue Reading 5 Things Corporate Boards Need to Know About Generative AI Risk Management

The Federal Trade Commission (FTC) has been active in enforcement actions involving various AI-related issues. For examples, see Training AI Models – Just Because It’s “Your” Data Doesn’t Mean You Can Use It and You Don’t Need a Machine to Predict What the FTC Might Do About Unsupported AI Claims. The FTC has also issued a report to Congress (Report) warning about various AI issues. The Report outlines significant concerns that AI tools can be inaccurate, biased, and discriminatory by design, and can incentivize reliance on increasingly invasive forms of commercial surveillance. Most recently, the FTC instituted an investigation into the generative AI (GAI) practices of OpenAI through a 20-page investigative demand letter (Letter).
Continue Reading The Need for Generative AI Development Policies and the FTC’s Investigative Demand to OpenAI