
Jim Gatto is a partner in the Intellectual Property Practice Group in the firm’s Washington, D.C. office. He is Co-Leader of the Artificial Intelligence Team and the Blockchain & Fintech Team, and Leader of the Open Source Team.

Colorado is the latest state to introduce a bill focused on consumer protection issues that arise when companies develop AI tools. The bill imposes obligations on developers and deployers of AI systems. It also provides an affirmative defense for a developer or deployer of a high-risk or generative AI system involved in a potential violation if the developer or deployer: i) has implemented and maintained a program that complies with a nationally or internationally recognized risk management framework for artificial intelligence systems that the bill or the attorney general designates; and ii) takes specified measures to discover and correct violations of the bill. The obligations track responsible AI principles, including adopting and documenting policies to avoid algorithmic discrimination, requiring transparency and documentation of the design, data, and testing used to build AI tools, avoiding copyright infringement, and marking and disclosing to consumers that synthetic content output was generated by AI tools. The bill also requires disclosure of risks, notification when a tool makes a consequential decision concerning a consumer, and other disclosures.

Continue Reading Colorado Introduces an AI Consumer Protection Bill

The USPTO issued guidance on February 6, 2024 that clarified existing rules and policies and discussed how to apply them when AI is used in drafting submissions to the Patent Trial and Appeal Board (PTAB) and Trademark Trial and Appeal Board (TTAB). As a follow-up, the USPTO has now published additional guidance in the Federal Register on some important issues that patent and trademark professionals, innovators, and entrepreneurs must navigate while using artificial intelligence (AI) in matters before the USPTO. The guidance recognizes that practitioners use AI to prepare and prosecute patent and trademark applications. It reminds individuals involved in proceedings before the USPTO of the pertinent rules and policies, identifies some risks associated with the use of AI, and provides suggestions to mitigate those risks. It states that while the USPTO is committed to maximizing AI’s benefits, it recognizes the need, through technical mitigations and human governance, to cabin the risks arising from the use of AI in practice before the USPTO. The USPTO has determined that existing rules protect its ecosystem against such potential perils, so no new rules are currently being proposed.

Continue Reading USPTO Issues Additional Guidance on Use of AI Tools in Connection with USPTO Matters

The NY State Bar Association (NYSBA) Task Force on Artificial Intelligence has issued a nearly 80-page report (Report) and recommendations on the legal, social, and ethical impact of artificial intelligence (AI) and generative AI on the legal profession. This detailed Report reviews AI-based software, generative AI technology, and other machine learning tools that may enhance the profession but that also pose risks, both for individual attorneys navigating new, unfamiliar technology and for courts concerned about the integrity of the judicial process. It also makes recommendations for NYSBA adoption, including proposed guidelines for responsible AI use. This Report is perhaps the most comprehensive report to date by a state bar association, and it is likely to stimulate much discussion.

Continue Reading NY State Bar Association Joins Florida and California on AI Ethics Guidance – Suggests Some Surprising Implications

The AI landscape is rapidly changing. To keep you up to date on the fast-breaking legal developments in the AI space, we will be providing weekly updates summarizing significant news and legal developments, ranging from AI lawsuits and enforcement actions to legislation and regulations. Below are some highlights of key developments and articles you can view to learn more.

Continue Reading AI Legal Updates

The SEC has charged and settled claims against two investment advisers for making false and misleading statements about their use of artificial intelligence (AI). The SEC found that Delphia (USA) Inc. and Global Predictions Inc. marketed to their clients and prospective clients that they were using AI in certain ways when, in fact, they were not. SEC Chair Gensler noted that when new technologies come along, they create buzz from investors and false claims by those purporting to use those new technologies. He admonished investment advisers not to mislead the public by saying they are using an AI model when they are not, noting that such “AI washing” hurts investors. The companies paid a combined $400,000 in civil penalties.

Continue Reading SEC Cracks Down on Over-Hyped AI Claims – Director Says This is Just the Beginning

In a prior article, Training AI Models – Just Because It’s “Your” Data Doesn’t Mean You Can Use It, we addressed how many companies are sitting on a trove of customer data and are realizing that this data can be valuable for training AI models. We noted, however, that using customer data in a manner that exceeds or otherwise is not permitted by the privacy policy in effect at the time the data was collected can be problematic. As companies think through these issues, some have updated (or will update) their Terms of Service (TOS) and/or privacy policy to address this. Before they do, it is critical to make sure they do not jump out of the frying pan and into the fire.

Continue Reading FTC Warns About Changing Terms of Service or Privacy Policy to Train AI on Previously Collected Data

The Florida State Bar recently adopted an advisory opinion meant to provide attorneys with guidance on how to use generative artificial intelligence (“GenAI”) without running afoul of ethics rules. In doing so, Florida becomes one of the first state bars to issue formal guidance on this topic, second only to California.

Continue Reading Florida Joins California in Adopting Ethical Guidelines for Attorney’s Use of Generative AI

The launch of ChatGPT 3.5 in November 2022 set up 2023 as a year of rapid growth and early adoption of this transformative technology. It reached 100 million users within two months of launch, setting a record for the fastest-growing user base of any technology tool. As of November 2023, the platform boasted an estimated 100 million weekly active users and roughly 1.7 billion users. Notably, ChatGPT is just one of a growing number of generative AI tools on the market. The pace of technical development and user adoption is unprecedented.

Continue Reading Artificial Intelligence Legal Issues – 2023 Year in Review and Areas to Watch in 2024

On November 21, 2023, the Federal Trade Commission (“the FTC”) announced its approval of an omnibus resolution authorizing the use of compulsory process for nonpublic investigations concerning products or services that use artificial intelligence (“AI”). Compulsory process refers to information or document requests, such as subpoenas or civil investigative demands, for which compliance is enforceable by courts. Recipients who fail to comply with compulsory process may face contempt charges.

Continue Reading AI Enforcement Update: FTC Authorizes Compulsory Process for AI Investigations

A UK court has ruled that Getty Images’ lawsuit against Stability AI for copyright infringement over generative AI technology can proceed. Stability had sought to have the case dismissed, arguing in part that the AI models were trained in the US. However, the court relied on seemingly contradictory public statements by Stability’s CEO, including that Stability helped “fast track” UK residency applications of Russian and Ukrainian developers working on Stable Diffusion, which suggests that at least some development occurred in the UK. A similar case involving the parties is pending in the US. One significance of where the case is heard is that fair use can be a defense to copyright infringement in the US but not in the UK. This is just one example of how disparate country laws relating to AI may cause AI developers to forum shop and develop AI where the laws are most favorable. For example, Japan has announced that it will not enforce copyrights on data used in AI training. If such activity is found to be infringing in the UK, US, or elsewhere, it is conceivable that some companies will move their AI training activities to Japan.

Continue Reading Getty Images’ AI Model Training Lawsuit in UK Against Stability to Proceed

The White House’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“EO”) addresses many equity and civil rights issues with AI and mandates certain actions to ensure that AI advances equity and civil rights. The Fact Sheet accompanying the EO summarizes some of these issues and the actions it directs various agencies to take.

Continue Reading Equity and Civil Rights Issues in the White House Executive Order on AI