Artificial Intelligence Highlights from FTC's 2024 PrivacyCon
This is the second post in a two-part series on PrivacyCon's key takeaways for healthcare organizations. The first post focused on healthcare privacy issues.[1] This post focuses on insights and considerations relating to the use of Artificial Intelligence ("AI") in healthcare. In the AI segment of the event, the Federal Trade Commission ("FTC") covered: (1) privacy themes; (2) considerations for Large Language Models ("LLMs"); and (3) AI functionality.
Massachusetts AG Says Consumer Protection, Civil Rights, and Data Privacy Laws Apply to Artificial Intelligence
Massachusetts Attorney General Andrea Campbell issued an advisory ("Advisory") warning developers, suppliers, and users of artificial intelligence and algorithmic decision-making systems (collectively, "AI") about their respective obligations under the Massachusetts Consumer Protection Act, Anti-Discrimination Law, Data Security Law, and related regulations. Little here is surprising, as the Advisory addresses many of the same issues raised in the White House Executive Order and Federal Trade Commission (FTC) guidance. It is helpful, however, in clarifying for consumers, developers, suppliers, and users of AI systems which aspects of existing state laws and regulations apply to AI, and that these laws and regulations apply to the same extent as they apply to any other product or application in the stream of commerce.
FTC Warns About Changing Terms of Service or Privacy Policy to Train AI on Previously Collected Data
In a prior article, Training AI Models – Just Because It's "Your" Data Doesn't Mean You Can Use It, we addressed how many companies are sitting on a trove of customer data and are realizing that this data can be valuable for training AI models. We noted, however, that using customer data in a manner that exceeds or otherwise is not permitted by the privacy policy in effect at the time the data was collected could be problematic. As companies think through these issues, some have updated (or will update) their Terms of Service (TOS) and/or privacy policy to address this. Before doing so, it is critical that companies make sure they do not jump out of the frying pan and into the fire.
AI Enforcement Update: FTC Authorizes Compulsory Process for AI Investigations
On November 21, 2023, the Federal Trade Commission ("the FTC") announced its approval of an omnibus resolution authorizing the use of compulsory process for nonpublic investigations concerning products or services that use artificial intelligence ("AI"). Compulsory process refers to information or document requests, such as subpoenas or civil investigative demands, for which compliance is enforceable by courts. Recipients who fail to comply with compulsory process may face contempt charges.
AI Under the Antitrust Microscope: Competition Enforcers Focusing on Generative AI from All Angles
As generative AI becomes an increasingly integral part of the modern economy, antitrust and consumer protection agencies continue to raise concerns about the technology's potential to promote unfair methods of competition. Federal Trade Commission ("the FTC") Chair Lina Khan recently warned on national news that "AI could be used to turbocharge fraud and scams" and that the FTC is watching to ensure large companies do not use AI to "squash competition."[1] The FTC has recently written numerous blogs on the subject,[2] signaling its intent to "use [the FTC's] full range of tools to identify and address unfair methods of competition" that generative AI may create.[3] Similarly, Jonathan Kanter, head of the Antitrust Division at the Department of Justice ("the DOJ"), said that the current model of AI "is inherently dependent on scale" and may "present a greater risk of having deep moats and barriers to entry."[4] Kanter recently added that "there are all sorts of different ways to deploy machine learning technologies, and how it's deployed can be different in the healthcare space, the energy space, the consumer tech space, the enterprise tech space," and that antitrust enforcers shouldn't be so intimidated by artificial intelligence and machine learning technology that they stop enforcing the laws.[5]
You Don’t Need a Machine to Predict What the FTC Might Do About Unsupported AI Claims
The rapid rise of AI in advertising, marketing, and other consumer-facing applications has caused the FTC to continue to take notice and issue guidance. For example, the FTC is concerned about false or unsubstantiated claims about an AI product's efficacy, and it has issued AI-related guidance in the past. The following is some recent FTC guidance to consider when referencing AI in your advertising. This guidance is not necessarily new, but the fact that it is being reiterated should signal that the FTC continues to focus on this area and that enforcement actions may be forthcoming. In fact, the recent guidance states: "AI is important, and so are the claims you make about it. You don't need a machine to predict what the FTC might do when those claims are unsupported."
The Need for Generative AI Development Policies and the FTC’s Investigative Demand to OpenAI
The Federal Trade Commission (FTC) has been active in enforcement actions involving various AI-related issues. For examples, see Training AI Models – Just Because It's "Your" Data Doesn't Mean You Can Use It and You Don't Need a Machine to Predict What the FTC Might Do About Unsupported AI Claims. The FTC has also issued a report to Congress (Report) warning about various AI issues. The Report outlines significant concerns that AI tools can be inaccurate, biased, and discriminatory by design, and can incentivize reliance on increasingly invasive forms of commercial surveillance. Most recently, the FTC opened an investigation into the generative AI (GAI) practices of OpenAI through a 20-page investigative demand letter (Letter).
AI Technology – Governance and Risk Management: Why Your Employee Policies and Third-Party Contracts Should be Updated
Use of AI technology can impact your rights and liabilities in ways that may not even occur to you. And whether you are aware of it or not, your employees and vendors may be using generative AI tools in the performance of their duties in ways that can significantly impact you. The FTC has made clear that ignorance is not bliss when it comes to your liability associated with the use of these tools. That is why it is more important now than ever to factor the legal implications of AI technology into your company's governance and risk management, including by updating your employee policies and third-party agreements. This article provides a non-exhaustive list of examples of legal issues implicated by the use of this powerful technology, along with practice tips for risk management.