On August 21, 2024, Sheppard Mullin’s Healthy AI team conducted a CLE webinar on what hospitals, health systems and provider organizations should consider in building an artificial intelligence (“AI”) governance program. As they discussed, key elements of an AI governance program include: (1) an AI governance committee, (2) AI policies and procedures, (3) AI training, and (4) AI auditing and monitoring. These components will help healthcare organizations navigate the complexities of AI use in healthcare by establishing appropriate guardrails and systematic practices that encourage its safe, ethical, and effective use. This post reviews each of the key elements.
Continue Reading: Key Elements of an AI Governance Program in Healthcare

At last week’s America’s Physician Groups Spring Conference in San Diego, California, our team heard firsthand how physicians are leading efforts to integrate Artificial Intelligence (AI) applications in ambulatory and inpatient settings at major healthcare systems across the nation. Physician and IT leaders described in detail their organizations’ efforts to identify safe, cost-effective, and desirable ways to leverage AI to enhance the efficiency and quality of patient care and reduce physicians’ administrative workload. Here, we highlight key approaches that have generated early success for various health systems and physician groups, as well as key pitfalls that organizations looking to adopt these technologies should account for in their planning.
Continue Reading: How Physicians are Pioneering Use of AI Applications in Ambulatory and Inpatient Care

If your organization has not updated its policies to comply with Utah’s Artificial Intelligence Policy Act (the “Act”), now is the time. As we noted in a prior blog post, this law took effect on May 1st. While it imposes certain AI-related disclosure obligations on businesses and individuals generally, the obligations for regulated occupations (those licensed by the Utah Division of Professional Licensing, such as physicians and nurses providing clinical services) are stricter.
Continue Reading: Utah Providers – Are You Complying with the AI Policy Act?

This is the second post in a two-part series on PrivacyCon’s key takeaways for healthcare organizations. The first post focused on healthcare privacy issues.[1] This post focuses on insights and considerations relating to the use of Artificial Intelligence (“AI”) in healthcare. In the AI segment of the event, the Federal Trade Commission (“FTC”) covered: (1) privacy themes; (2) considerations for Large Language Models (“LLMs”); and (3) AI functionality.
Continue Reading: Artificial Intelligence Highlights from FTC’s 2024 PrivacyCon

Recent developments in Artificial Intelligence (AI) have been transforming several sectors, and the healthcare industry is no exception. In the second episode of Sheppard Mullin’s Health-e Law Podcast, Jim Gatto, a partner at Sheppard Mullin and co-leader of its AI Team, explores the significant implications and challenges of incorporating AI into the healthcare industry with Sheppard Mullin’s Digital Health Team co-chairs, Sara Shanti and Phil Kim.
Continue Reading: AI as an Aid – Emerging Uses in Healthcare: A Discussion with Jim Gatto

The expanded use of artificial intelligence (AI) in the delivery of health care continues to receive increased attention from lawmakers across the country. Although AI regulation is still in its early stages, various efforts are underway to address the unintended negative consequences posed by AI technology, particularly in health care and other key sectors.[1] Of particular interest are regulatory efforts to restrict discrimination through AI and related technologies.
Continue Reading: At a Glance: Legal Efforts to Limit Discrimination Through AI

Since its launch in November 2022, ChatGPT (“GPT” stands for Generative Pre-trained Transformer), a type of artificial intelligence model, has gained over a million users and is used by entities across a wide variety of industries. On March 1, 2023, OpenAI, the developer of ChatGPT, updated its data usage policies,[1] noting that (i) OpenAI will not use data submitted by customers to train or improve its models unless customers expressly opt in to share such data, and (ii) OpenAI will enter into business associate agreements to support applicable customers’ compliance with the Health Insurance Portability and Accountability Act (“HIPAA”).
Continue Reading: ChatGPT And Healthcare Privacy Risks