Plaintiffs’ attorneys have filed a wave of lawsuits against various AI tools under a variety of legal theories. Most have had little success so far. Many of the asserted claims have been dismissed for failure to plead sufficient facts to state a claim or for being legally untenable. Some of these dismissals have been without prejudice, meaning the plaintiffs get another chance to properly plead a viable claim. In one of the most recent decisions, claims in a nearly 200-page amended complaint were dismissed without prejudice. In a terse decision, the Court excoriated the plaintiffs’ rambling complaint for its unnecessary length and distracting allegations, which according to the Court made “it nearly impossible to determine the adequacy of the plaintiffs’ legal claims.” The Court called out rhetoric and policy grievances that are not suitable for resolution by federal courts, including comparisons of AI’s risks to humanity with the risks posed by the development of nuclear weapons.
Continue Reading: Lawsuits Against Web Scraping to Train AI

The NY State Bar Association (NYSBA) Task Force on Artificial Intelligence has issued a nearly 80-page report (Report) and recommendations on the legal, social and ethical impact of artificial intelligence (AI) and generative AI on the legal profession. This detailed Report also reviews AI-based software, generative AI technology and other machine learning tools that may enhance the profession, but that also pose risks, both for individual attorneys grappling with new, unfamiliar technology and for courts concerned about the integrity of the judicial process. It also makes recommendations for NYSBA adoption, including proposed guidelines for responsible AI use. This Report is perhaps the most comprehensive issued to date by a state bar association, and it is likely to stimulate much discussion.
Continue Reading: NY State Bar Association Joins Florida and California on AI Ethics Guidance – Suggests Some Surprising Implications

In a prior article, Training AI Models – Just Because It’s “Your” Data Doesn’t Mean You Can Use It, we addressed how many companies are sitting on a trove of customer data and are realizing that this data can be valuable for training AI models. We noted, however, that using customer data in a manner that exceeds or otherwise is not permitted by the privacy policy in effect at the time the data was collected could be problematic. As companies think through these issues, some have updated (or will update) their Terms of Service (TOS) and/or privacy policy to address this. Before doing so, it is critical that they make sure they do not jump out of the frying pan and into the fire.
Continue Reading: FTC Warns About Changing Terms of Service or Privacy Policy to Train AI on Previously Collected Data

A UK court has ruled that Getty Images’ lawsuit against Stability AI for copyright infringement over generative AI technology can proceed. Stability had sought to have the case dismissed, arguing, in part, that the AI models were trained in the US. However, the court relied on seemingly contradictory public statements by Stability’s CEO, including that Stability helped “fast track” UK residency applications of Russian and Ukrainian developers working on Stable Diffusion, which suggests that at least some development occurred in the UK. A similar case between the parties is pending in the US. The venue matters in part because fair use can be a defense to copyright infringement in the US, but not in the UK. This is just one example of how disparate country laws relating to AI may cause AI developers to forum shop and develop AI where the laws are most favorable. For example, Japan has announced that it will not enforce copyrights on data used in AI training. If such activity is found to be infringing in the UK, US or elsewhere, it is conceivable that some companies will move their AI training activities to Japan.
Continue Reading: Getty Image’s AI Model Training Lawsuit in UK Against Stability to Proceed