In a strategic move to preserve their right to seek reconsideration of previously dismissed DMCA § 1202(b) claims, the plaintiffs in Andersen v. Stability AI have voluntarily dismissed all DMCA claims with prejudice. The opportunity to seek reconsideration of the dismissed claims will arise if the interlocutory appeal in Doe 1 v. GitHub results in a reversal. We covered the decisions in the GitHub case in more detail in a prior post.
Continue Reading: Andersen Plaintiffs Strategically Dismiss § 1202(b) Claims Pending Interlocutory Appeal in Github Case

On September 28, 2024, Governor Gavin Newsom signed into law California Assembly Bill 3030 (“AB 3030”), known as the Artificial Intelligence in Health Care Services Bill. Effective January 1, 2025, AB 3030 is part of a broader effort to mitigate the potential harms of generative artificial intelligence (“GenAI”) in California and introduces new requirements for healthcare providers using the technology.
Continue Reading: California Passes Law Regulating Generative AI Use in Healthcare

We have previously reported on the Jobiak case, which raises the interesting issue of whether an AI-scraped job database is entitled to copyright protection and can be infringed. We were hoping that the court would make substantive rulings on some of the AI issues. Instead, the court granted Defendant’s motion to dismiss for lack of personal jurisdiction, but granted Plaintiff leave to amend. So perhaps we will get some substantive rulings after an amended complaint is filed.
Continue Reading: Court Dismisses AI Scraping Claim, But Grants Leave to Amend

In two recent rules, the Department of Commerce, Bureau of Industry and Security (BIS) has begun to take significant steps to monitor, and potentially control access to, U.S. artificial intelligence (AI) technology. AI continues to pose a unique challenge for regulators due to its rapid expansion as a consumer product and its potential defense applications.
Continue Reading: Commerce Takes on AI: Recent Developments from BIS on AI

On October 24, 2024, the CFPB issued Circular 2024-06, which warns companies using third-party consumer reports, particularly surveillance-based “black box” or AI algorithmic scores, that they must follow the Fair Credit Reporting Act with respect to the personal data of their workers. This guidance adds to the growing body of law that protects employees from potentially harmful use of AI.
Continue Reading: CFPB Warns Employers Regarding FCRA Rules for AI-Driven Worker Surveillance

*prepared with the assistance of artificial intelligence

In the rapidly evolving landscape of intellectual property law, artificial intelligence (AI) has emerged as a powerful tool for attorneys and inventors alike. AI drafting software, with its promise of efficiency and innovation, has been increasingly adopted for drafting patent applications and aiding in patent prosecution. However, this technological advancement is not without its pitfalls. Below, we explore both the risks and benefits of leveraging AI in a patent prosecution practice, providing an overview for practitioners considering its adoption.
Continue Reading: The Double-Edged Sword of AI in Patent Drafting and Prosecution

YouTube has announced a slate of new AI detection tools to enhance its ContentID system. The tools are designed to address the challenges posed by AI-generated content, which is becoming increasingly prevalent and sophisticated. The announcement coincides with industry-wide calls for more robust detection mechanisms as the lines between AI-generated and human-produced content continue to blur.
Continue Reading: YouTube Unveils AI Detection Tools: Advancing ContentID for the AI Era

In a historic turn of events, California is poised to become the first state to enact comprehensive AI safety legislation with the introduction of SB 1047. This bill, designed to address the potential risks associated with advanced AI technologies, has ignited intense debate within the tech community and among policymakers.
Continue Reading: California’s AI Safety Bill: A Groundbreaking Move Amidst Industry Controversy and What AI Developers Can Do to Prepare

In an era where artificial intelligence (AI) is reshaping the healthcare industry and beyond, understanding the governance of AI technologies is paramount for organizations seeking to use AI systems and tools. AI governance encompasses the policies, practices, and frameworks that guide the responsible development, deployment, and operation of AI systems and tools within an organization. By adhering to established governance principles and frameworks, organizations can ensure their AI initiatives align with ethical standards and applicable law, respect human rights, and contribute positively to society. Various international organizations have set forth AI governance principles that give organizations a solid foundation for developing organizational AI governance based on widely shared values and goals.
Continue Reading: Navigating the Complex Landscape of AI Governance: Principles and Frameworks for Responsible Innovation

On August 21, 2024, Sheppard Mullin’s Healthy AI team conducted a CLE webinar on what hospitals, health systems, and provider organizations should consider in building an artificial intelligence (“AI”) governance program. As they discussed, the key elements of an AI governance program include: (1) an AI governance committee, (2) AI policies and procedures, (3) AI training, and (4) AI auditing and monitoring. These components will help healthcare organizations navigate the complexities of AI use in healthcare by establishing appropriate guardrails and systematic practices that encourage its safe, ethical, and effective use. This post reviews each of these key elements.
Continue Reading: Key Elements of an AI Governance Program in Healthcare