In an era where artificial intelligence (AI) is reshaping the healthcare industry and beyond, understanding the governance of AI technologies is paramount for organizations seeking to use AI systems and tools. AI governance encompasses the policies, practices, and frameworks that guide the responsible development, deployment, and operation of AI systems and tools within an organization. By adhering to established governance principles and frameworks, organizations can ensure their AI initiatives align with ethical standards and applicable law, respect human rights, and contribute positively to society. Various international organizations have set forth AI governance principles that give organizations a solid foundation for developing organizational AI governance based on widely shared values and goals.
Privacy and Cybersecurity
Training AI Models – Just Because It’s Your Data Doesn’t Mean You Can Use It
Many companies are sitting on a trove of customer data and are realizing that this data can be valuable for training AI models. However, some companies have not thought through whether they can actually use that data for this purpose. Often the data was collected over many years, long before the company considered using it to train AI. The potential problem is that the privacy policies in effect when the data was collected may not have contemplated this use. Using customer data in a manner that exceeds, or otherwise is not permitted by, the privacy policy in effect at the time the data was collected can be problematic. It has led to class action lawsuits and enforcement by the FTC. In some cases, the FTC has imposed a penalty known as “algorithmic disgorgement” on companies that use data to train AI models without proper authorization. This penalty is severe: it requires deletion of the data, the models, and the algorithms built with them, which can be an incredibly costly result.
ChatGPT And Healthcare Privacy Risks
Since its launch in November 2022, ChatGPT (“GPT” stands for Generative Pre-trained Transformer), a type of artificial intelligence model, has gained over a million users. ChatGPT is used by entities in a wide variety of industries. On March 1, 2023, OpenAI, the developer of ChatGPT, updated its data usage policies[1], noting that (i) OpenAI will not use data submitted by customers to train or improve its models unless customers expressly opt in to share such data, and (ii) OpenAI will enter into business associate agreements in support of applicable customers’ compliance with the Health Insurance Portability and Accountability Act (“HIPAA”).