The launch of ChatGPT, built on GPT-3.5, in November 2022 set the stage for 2023 to be a year of rapid growth and early adoption of this transformative technology. ChatGPT reached 100 million users within two months of launch – setting a record for the fastest-growing user base of any technology tool. As of November 2023, the platform boasted an estimated 100 million weekly active users and roughly 1.7 billion total users. Notably, ChatGPT is just one of a growing number of generative AI tools on the market. The pace of technical development and user adoption is unprecedented.
Continue Reading Artificial Intelligence Legal Issues – 2023 Year in Review and Areas to Watch in 2024

Recent developments in Artificial Intelligence (AI) have been transforming several sectors, and the healthcare industry is no exception. In the second episode of Sheppard Mullin’s Health-e Law Podcast, Jim Gatto, a partner at Sheppard Mullin and the co-leader of its AI Team, explores the significant implications and challenges of incorporating AI into the healthcare industry with Sheppard Mullin’s Digital Health Team co-chairs, Sara Shanti and Phil Kim.
Continue Reading AI as an Aid – Emerging Uses in Healthcare: A Discussion with Jim Gatto

On November 21, 2023, the Federal Trade Commission (“the FTC”) announced its approval of an omnibus resolution authorizing the use of compulsory process for nonpublic investigations concerning products or services that use artificial intelligence (“AI”). Compulsory process refers to information or document requests, such as subpoenas or civil investigative demands, for which compliance is enforceable by courts. Recipients who fail to comply with compulsory process may face contempt charges.
Continue Reading AI Enforcement Update: FTC Authorizes Compulsory Process for AI Investigations

A UK court has ruled that Getty Images’ lawsuit against Stability AI for copyright infringement over generative AI technology can proceed. Stability had sought to have the case dismissed, arguing in part that the AI models were trained in the US. However, the court relied on seemingly contradictory public statements by Stability’s CEO, including that Stability helped “fast track” UK residency applications for Russian and Ukrainian developers working on Stable Diffusion, which suggests that at least some development occurred in the UK. A similar case between the parties is pending in the US. One significance of where the case is heard is that fair use can be a defense to copyright infringement in the US, but not in the UK. This is just one example of how disparate national laws relating to AI may cause AI developers to forum shop and develop AI where the laws are most favorable. For example, Japan has announced that it will not enforce copyrights on data used in AI training. If such activity is found to be infringing in the UK, the US or elsewhere, it is conceivable that some companies will move their AI training activities to Japan.
Continue Reading Getty Images’ AI Model Training Lawsuit in UK Against Stability to Proceed

In a decision issued November 27, 2023,[1] a Chinese court ruled that AI-generated content can enjoy protection under copyright law. The finding, the first of its kind in China, is in direct conflict with the human authorship requirement under U.S. copyright law and may have far-reaching implications.
Continue Reading Computer Love: Beijing Court Finds AI-Generated Image is Copyrightable in Split with United States

Just as Napster triggered a global, technological shift in the way music is consumed and distributed, we are now on the precipice of another major revolution certain to disrupt the music industry. Artificial intelligence, or “AI” as it is more commonly known, has quickly emerged as a game changer across a myriad of industries, and music is no exception. AI offers the promise of innovative opportunities and avenues for music creation, publishing, recording, synchronization, distribution, consumption and revenue generation. However, these opportunities also present significant, novel challenges for music rights holders and users alike—and the legal challenges have just begun.
Continue Reading Rise of the Machines: How AI is Shaking Up the Music Industry

President Joe Biden recently issued an executive order designed to establish minimum risk practices for the use of generative artificial intelligence (“AI”), with a focus on the rights and safety of people and with many consequences for employers. Businesses should be aware of these directives to agencies, especially as they may result in new regulations, agency guidance and enforcement actions that apply to their workers.
Continue Reading What Employers Need to Know about the White House’s Executive Order on AI

The White House’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“EO”) addresses many equity and civil rights issues with AI and mandates certain actions to ensure that AI advances equity and civil rights. The Fact Sheet accompanying the EO summarizes some of these issues and the actions it mandates, directing various agencies to:
Continue Reading Equity and Civil Rights Issues in the White House Executive Order on AI

On October 30, 2023, the White House issued an Executive Order focusing on safe, secure and trustworthy AI and laying out a national policy on AI. In stark contrast to the EU, which, through the soon-to-be-enacted AI Act, is focused primarily on regulating uses of AI that are unacceptable or high risk, the Executive Order focuses on responsible use of AI as well as on developers, the data they use and the tools they create. The goal is to ensure that AI systems used by the government and the private sector are safe, secure and trustworthy. The Executive Order seeks to enhance federal government use and deployment of AI, including to improve cybersecurity and U.S. defenses, and to promote innovation and competition so that the U.S. can maintain its position as a global leader on AI issues. It also emphasizes the importance of protections for various groups, including consumers, patients, students, workers and kids.
Continue Reading Flash Briefing on White House Executive Order on AI Regulation and Policy

Consistent with the White House’s Executive Order issued this week laying out a national policy on AI, the Federal Communications Commission (“FCC”) released a draft Notice of Inquiry (“NOI”) that would look into the implications of emerging Artificial Intelligence (“AI”) technologies for the Commission’s efforts to prevent unwanted and illegal calls and texts under the Telephone Consumer Protection Act (“TCPA”).
Continue Reading FCC Launches Inquiry into the Risks of AI on Unwanted Robocalls and Texts