The US just catapulted into being the world leader on regulating AI. Bypassing Congress, the White House issued an Executive Order focusing on safe, secure, and trustworthy AI and laying out a national policy on AI. In stark contrast to the EU, whose soon-to-be-enacted AI Act focuses primarily on regulating uses of AI that are unacceptable or high risk, the Executive Order focuses primarily on developers, the data they use, and the tools they create. The goal is to ensure that AI systems are safe, secure, and trustworthy before companies make them public. It also focuses on protecting various groups, including consumers, patients, students, workers, and kids.
The Executive Order covers several important issues but fails to address other significant ones. The issues addressed include safety testing and standards for such tests; content authentication and privacy; cybersecurity and national security; equity and civil rights; consumer and worker protection; advancing US leadership in AI innovation and competition while collaborating with governments worldwide; and promoting the responsible use of AI by the US government. Some of the most important issues not addressed by the Executive Order are the numerous AI-related intellectual property issues. These include the propriety of using copyright-protected materials to train AI models and the patentability and copyrightability of AI output. The US Patent Office and the US Copyright Office have both undertaken initiatives to address some of these issues; see the Patent Office's AI Initiative and the Copyright Office's Copyright and Artificial Intelligence initiative. Perhaps the White House is waiting on these offices' recommendations.
The Executive Order addresses:
- Sharing Safety Test Results – Developers of the most powerful AI systems (foundation models) must share their safety test results and other critical information with the U.S. government. This applies to any foundation model that poses a serious risk to national security, national economic security, or national public health and safety
- Rigorous Test Standards – The National Institute of Standards and Technology (NIST) will establish rigorous standards, tools, and tests for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. The NIST standards will likely build on the agency’s prior work as published in its AI Risk Management Framework
- Biological Synthesis Screening – to protect against the risks of using AI to engineer dangerous biological materials, strong new standards for biological synthesis screening will be developed, and agencies that fund life-science projects will make compliance with those standards a condition of federal funding, creating powerful incentives to ensure appropriate screening and risk management
- Content Authentication – to protect Americans from AI-enabled fraud and deception, the Department of Commerce will establish standards and best practices for detecting AI-generated content, authenticating official content, and watermarking and clearly labeling AI-generated content
- Cybersecurity Program – the government will establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software and make networks more secure by building on the ongoing AI Cyber Challenge
- National Security Memorandum – the National Security Council and White House Chief of Staff will develop a National Security Memorandum that directs further actions on AI and security to ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions and counter adversaries’ military use of AI
- Privacy – Recognizing that AI makes it easier to extract, identify, and exploit personal data and heightens incentives to use data to train AI systems, the Executive Order calls on Congress to pass bipartisan data privacy legislation to protect Americans, especially kids, and directs these actions:
  - Prioritize federal support for accelerating the development and use of privacy-preserving techniques, including ones that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data
  - Strengthen privacy-preserving research and technologies, such as cryptographic tools that preserve individuals’ privacy, by funding a Research Coordination Network, in conjunction with the National Science Foundation, to advance rapid breakthroughs and development
  - Evaluate how agencies collect and use commercially available information, including information they procure from data brokers, and strengthen privacy guidance for federal agencies to account for AI risks
  - Develop guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems
- Responsible AI, Equity, and Civil Rights – Building on the Blueprint for an AI Bill of Rights and the earlier Executive Order directing agencies to combat algorithmic discrimination, the Executive Order mandates additional actions to advance equity and civil rights:
  - Provide clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination
  - Address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI
  - Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis
- Protecting Consumers, Patients, and Students – to keep AI from raising the risk of injuring, misleading, or otherwise harming Americans, the Executive Order mandates these actions:
  - Advance the responsible use of AI in healthcare and in developing affordable and life-saving drugs
  - Establish a safety program, run by the Department of Health and Human Services, to receive reports of, and act to remedy, harms or unsafe healthcare practices involving AI
  - Shape AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools, such as personalized tutoring in schools
- Protecting Workers – to protect against the dangers of increased workplace surveillance, bias, and job displacement, the Executive Order proposes to support workers’ ability to bargain collectively and to invest in workforce training and development accessible to all by taking these actions:
  - Develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement; labor standards; workplace equity, health, and safety; and data collection, providing guidance to prevent employers from undercompensating workers, evaluating job applications unfairly, or impinging on workers’ ability to organize
  - Produce a report on AI’s potential labor-market impacts, and study and identify options for strengthening federal support for workers facing labor disruptions, including from AI
- Promoting Innovation and Competition – to preserve US leadership in AI innovation and competition, the Executive Order mandates these actions:
  - Catalyze AI research across the United States through a pilot of the National AI Research Resource, a tool that will provide AI researchers and students access to key AI resources and data, and through expanded grants for AI research in vital areas like healthcare and climate change
  - Promote a fair, open, and competitive AI ecosystem by providing small developers and entrepreneurs access to technical assistance and resources, helping small businesses commercialize AI breakthroughs, and encouraging the Federal Trade Commission to exercise its authorities
  - Use existing authorities to expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the United States by modernizing and streamlining visa criteria, interviews, and reviews
- Advancing American Leadership Abroad – recognizing that AI’s challenges and opportunities are global, the Executive Order proposes continuing to work with other nations to support safe, secure, and trustworthy deployment and use of AI worldwide, including these actions:
  - Have the State Department and the Commerce Department expand bilateral, multilateral, and multistakeholder engagements to collaborate on AI and to establish robust international frameworks for harnessing AI’s benefits, managing its risks, and ensuring safety
  - Accelerate development and implementation of vital AI standards with international partners and in standards organizations, ensuring that the technology is safe, secure, trustworthy, and interoperable
  - Promote the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges, such as advancing sustainable development and mitigating dangers to critical infrastructure
- Ensuring Responsible and Effective Government Use of AI – to prevent risks such as discrimination and unsafe decisions, to ensure responsible government deployment of AI, and to modernize federal AI infrastructure, the Executive Order proposes these actions:
  - Issue guidance for agencies’ use of AI, including clear standards to protect rights and safety, improve AI procurement, and strengthen AI deployment
  - Help agencies acquire specified AI products and services faster, more cheaply, and more effectively through more rapid and efficient contracting
  - Accelerate the hiring of AI professionals as part of a government-wide AI talent surge led by the Office of Personnel Management, U.S. Digital Service, U.S. Digital Corps, and Presidential Innovation Fellowship. Agencies will provide AI training for employees at all levels in relevant fields
Who is affected?
With few exceptions, the Executive Order primarily affects the developers of AI tools and those who train AI models. Except for biological synthesis screening, it focuses largely on the technology, not its uses. Developers will have to comply with the testing standards once they are developed and share their test results with the government. They will need to ensure their tools are safe before they are deployed.
The primary impact on users of third-party AI tools will relate to the safe use of AI to protect consumers, patients, students, and workers; unlike the EU AI Act, the Executive Order does not categorically prohibit particular uses.
Check back for updates as these actions get implemented.