Covington & Burling LLP

October 11, 2024

September 2024 Developments Under President Biden’s AI Executive Order

This is part of an ongoing series of Covington blogs on the implementation of Executive Order No. 14110 on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" (the "AI EO"), issued by President Biden on October 30, 2023. The first blog summarized the AI EO's key provisions and related Office of Management and Budget ("OMB") guidance, and subsequent blogs described the actions taken by various government agencies to implement the AI EO from November 2023 through August 2024. This blog describes key actions taken to implement the AI EO during September 2024, as well as developments in California related to the goals and concepts set out by the AI EO. We will discuss September 2024 developments under President Biden's 2021 Executive Order on Cybersecurity in a separate post.

Bureau of Industry and Security Proposes Updated Technical Thresholds for Dual-Use Foundation Model Reporting Requirements

On September 9, 2024, the Department of Commerce's Bureau of Industry and Security ("BIS") published a Notice of Proposed Rulemaking ("NPRM") on updated technical thresholds that would trigger reporting requirements for AI models and computing clusters under the AI EO. Under Section 4.2(a)(i) of the AI EO, certain developers of dual-use foundation models must regularly provide the Federal government with information related to (1) training, developing, or producing dual-use foundation models, (2) the ownership and possession of model weights and the physical and cybersecurity measures taken to protect those model weights, and (3) the results of AI red-team testing based on upcoming AI red-teaming guidance from the U.S. AI Safety Institute ("U.S. AISI") and associated safety measures. Covington previously covered U.S. AISI's draft guidance here.

BIS's proposed rule, which implements Section 4.2(b)'s requirement that the Department of Commerce define and regularly update the technical thresholds for reporting, would require compliance with the AI EO's reporting requirements for developers of any dual-use AI model trained using a quantity of computing power greater than 10^26 integer or floating-point operations. This threshold is the same as the interim threshold in Section 4.2(b)(i) of the AI EO. That said, the proposed rule modifies the AI EO's initial threshold for large-scale "computing clusters" by removing its physical co-location requirement. Large-scale computing clusters subject to the reporting requirement are instead defined under the proposed rule as "clusters having a set of machines transitively connected by networking of over 300 Gbit/s and having a theoretical maximum performance greater than 10^20 computational operations (e.g., integer or floating-point operations) per second (OP/s) for AI training, without sparsity."
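As a rough, hypothetical illustration of how the compute threshold operates, the Python sketch below estimates whether a training run would cross the proposed 10^26-operation reporting line. The "6 x parameters x training tokens" approximation is a common rule of thumb for dense transformer training compute, and the example model size is invented; neither appears in the NPRM.

    # Hypothetical illustration (not from the rule text): estimating whether a
    # training run would cross the proposed 1e26-operation reporting threshold,
    # using the common "compute ~ 6 * parameters * training tokens" rule of thumb.

    REPORTING_THRESHOLD_OPS = 1e26  # proposed threshold: total training operations

    def estimated_training_ops(num_parameters: float, num_training_tokens: float) -> float:
        """Rough training-compute estimate via the 6ND approximation."""
        return 6 * num_parameters * num_training_tokens

    # Example: a hypothetical 1-trillion-parameter model trained on 20 trillion tokens
    ops = estimated_training_ops(1e12, 20e12)  # 1.2e26 operations
    print(f"{ops:.2e} operations -> reporting required: {ops > REPORTING_THRESHOLD_OPS}")

Under these assumed inputs, the run lands just above the threshold, which is why frontier-scale training runs are the ones most often discussed in connection with this requirement.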

GAO Releases Report on Agency Implementation of AI EO Management and Talent Requirements

On September 9, 2024, the U.S. Government Accountability Office ("GAO") released a report detailing its evaluation of the extent to which several agencies have implemented selected AI management and talent requirements in the AI EO. GAO found that each agency had fully implemented the relevant requirements. The GAO report determined, among other things, that the:

  • Executive Office of the President organized the AI and Technology Talent Task Force and established the White House AI Council.
  • Office of Management and Budget has convened and chaired the Chief AI Officer Council, issued AI guidance and use case instructions to agencies, and established initial plans for AI talent recruitment.
  • Office of Personnel Management has reviewed hiring and workplace flexibility, considered excepted service appointments, coordinated AI hiring action across federal agencies, and issued related pay guidance.
  • Office of Science and Technology Policy and OMB have identified priority mission areas for increasing AI talent, established the types of talent that are the highest priority to recruit and develop, and identified accelerated hiring pathways.
  • General Services Administration finalized and issued its framework to prioritize critical emerging AI technologies for authorization in a secure cloud environment via the Federal Risk and Authorization Management Program ("FedRAMP") process. GSA's prioritization framework included chat interfaces, code-generation and debugging tools, and prompt-based image generators.

Inaugural Meeting of International Network of AI Safety Institutes to be Hosted in San Francisco

On September 18, 2024, the U.S. Commerce and State Departments announced plans to host the inaugural meeting of the International Network of AI Safety Institutes. The Network, announced in May by U.S. Secretary of Commerce Gina Raimondo, is intended to bring together technical AI experts from each member's AI safety institute to advance global collaboration and knowledge sharing on AI safety.

The meeting will take place on November 20-21, 2024, in San Francisco. The initial members of the International Network of AI Safety Institutes are Australia, Canada, the European Union, France, Japan, Kenya, South Korea, Singapore, the United Kingdom, and the United States.

Agencies Publish Plans to Achieve Compliance with OMB AI Guidance

Under Section 3(a)(iii) of OMB Memorandum M-24-10 on "Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence" ("OMB AI Memo"), agencies were required to submit and publicly post plans to achieve consistency with the OMB AI Memo's AI use case inventory requirements, minimum risk management practices, and other agency obligations by September 24, 2024. As of October 4, by our count, at least 31 federal agencies have published AI compliance plans, including GSA, SEC, EEOC, NASA, and the Departments of Defense, State, Homeland Security, Commerce, and Treasury.

Although these plans generally contain similar objectives, agencies differ in how they plan to assess the risks that may arise from agency AI use cases. For example, the Department of the Treasury's AI Compliance Plan notes that, while "not required by M-24-10, Treasury will be using additional resources to identify and manage potential risks posed by AI use," including audit recommendations and potential impacts from supply chain risks in AI software and system environments.

White House Hosts Task Force on AI Datacenter Infrastructure

On September 12, 2024, the White House hosted a roundtable with representatives from AI companies, datacenter operators, and utility companies to discuss strategies to meet clean energy, permitting, and workforce requirements for developing large-scale AI datacenters and power infrastructure needed for advanced AI operations. After the roundtable, the White House announced that:

  • The White House is launching a new Task Force on AI Datacenter Infrastructure to coordinate policy across government.
  • The Administration will scale up technical assistance to federal, state, and local authorities handling datacenter permitting.
  • The Department of Energy (DOE) is creating an AI datacenter engagement team to leverage DOE programs to support AI datacenter development.

According to the White House, these actions will enable the construction of more AI datacenters in the U.S.

California Governor Vetoes AI Safety Legislation Modeled on AI EO

On September 29, California Governor Gavin Newsom (D) vetoed the Safe and Secure Innovation for Frontier AI Models Act (SB 1047), an AI safety bill that would have imposed testing, reporting, and security requirements on developers of large AI models. SB 1047's computational threshold for "covered models" mirrored the AI EO's threshold for dual-use foundation model reporting, i.e., AI models trained using a quantity of computing power greater than 10^26 integer or floating-point operations (in addition to training costs exceeding $100 million).
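For illustration only, the hypothetical check below contrasts SB 1047's two-pronged "covered model" test (compute and training cost) with the AI EO's compute-only reporting threshold; the $95 million figure is an invented example, not drawn from either text.

    # Hypothetical comparison (illustrative only): SB 1047's "covered model" test
    # required both prongs, while the AI EO's reporting threshold turns on compute alone.
    COMPUTE_THRESHOLD_OPS = 1e26
    COST_THRESHOLD_USD = 100_000_000

    def covered_under_sb1047(training_ops: float, training_cost_usd: float) -> bool:
        return training_ops > COMPUTE_THRESHOLD_OPS and training_cost_usd > COST_THRESHOLD_USD

    def reportable_under_ai_eo(training_ops: float) -> bool:
        return training_ops > COMPUTE_THRESHOLD_OPS

    # A run just over the compute line but under $100M would have fallen outside SB 1047
    print(covered_under_sb1047(1.2e26, 95_000_000))  # False
    print(reportable_under_ai_eo(1.2e26))            # True

The added cost prong would have narrowed SB 1047's reach relative to the AI EO's reporting trigger, a design choice that figured in the debate over the bill.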

Notably, Newsom cited SB 1047's thresholds as the reason for rejecting the legislation. In his veto message, Newsom noted that, while "[AI] safety protocols must be adopted," SB 1047's cost and computational thresholds "appl[y] stringent standards to even the most basic functions - so long as a large system deploys it," rather than regulating based on "the system's actual risks." Newsom added that SB 1047 could "give the public a false sense of security about controlling this fast-moving technology," while "[s]maller, specialized models" could be "equally or even more dangerous than the models targeted by SB 1047." Covington previously covered Governor Newsom's veto of SB 1047 here.