SIIA - Software & Information Industry Association


Why the Transparency in Frontier Artificial Intelligence Act is the Impetus Congress Needs to Act on Frontier Models and Direct the National AI Conversation to What People Really Care About

October 17, 2025
by Paul Lekas
Policy

SIIA has long advocated for a national approach to frontier AI model oversight as the best strategy for mitigating risks and increasing public trust in AI. Doing so would require Congress to pass legislation establishing both baseline requirements for model developers and a framework for assessing and mitigating national security and public safety risks.

States have not waited for Congress to act. California, in enacting the Transparency in Frontier Artificial Intelligence Act (TFAIA), SB 53, has just raised the stakes by becoming the first U.S. jurisdiction to regulate frontier AI model development.

Introduced by Senator Scott Wiener, TFAIA is a vast improvement on last year's predecessor, SB 1047. SIIA appreciates its sensible approach to frontier model oversight, which aligns with emerging industry best practices and remains flexible enough to adapt to evolving technical standards. TFAIA is not perfect, however. Although this legislation is preferable to other states' foundation model legislation, SIIA continues to have concerns about the law's threshold for regulation and whether it provides sufficient protection for trade secrets. Because Governor Gavin Newsom has signaled a willingness to refine the law in the next term to address priority issues such as these, SIIA will continue to monitor potential changes to the enacted legislation.

It is highly likely that other states will follow California's lead on foundation models. In New York, legislators have already passed the RAISE Act, which awaits Governor Kathy Hochul's signature. More states could join the chorus in 2026.

How Effective Can State Frontier Model Regulation Really Be?

While SIIA understands the motivation for legislation addressing foundation models, it remains important to weigh the potential costs and benefits of state regulation and to consider the level of government at which these concerns should be addressed. As such, SIIA believes the federal government should carefully consider the policy recommendations below.

State frontier model regulation creates a false sense of security.

Frontier model legislation in CA and NY aims to mitigate the potential for an extreme AI-enabled event, usually involving mass death or an extraordinary amount of property damage. (TFAIA uses the term "catastrophic risk," defined as "a foreseeable and material risk that…a frontier model will materially contribute to the death of, or serious injury to, more than 50 people or more than one billion dollars in damage to, or loss of, property arising from a single incident.")

Framing risk in this arbitrary way is a poor proxy for assessing the capabilities of frontier models. A considerable amount of work has been undertaken by developers and researchers to get a handle on potential risks, and the landscape is evolving at the pace of innovation. A more nuanced understanding, such as that developed by the Frontier Model Forum, is both broader and narrower than the approach favored by state legislators.

Ultimately, the focus on "catastrophic risk" creates a false sense of security. It does not consider the misuse, misalignment, and security risks that developers must address to make their models useful and reliable for myriad applications. TFAIA will incentivize developers to comply with the terms of the law but does not incentivize frontier model developers to do more substantively to make their models safer and more secure. Those efforts are underway both within industry and in collaboration with the National Institute of Standards and Technology and its Center for AI Standards and Innovation (CAISI).

A different question is whether TFAIA will help to build public trust in AI: whether its disclosure and reporting obligations and whistleblower protections will satisfy what the public feels is lacking in its relationship with AI technologies. Time will tell, but it is worth pointing out what TFAIA does not do. "Catastrophic risk" is far from the top of the public's list of concerns about AI, and the concerns that do top that list fall outside the scope of TFAIA. Consider this snapshot from an assortment of recent polls:

  • Recent polls show a significant majority of Americans concerned about the spread of misleading content and deepfakes.
  • Polls also show Americans are skeptical about the use of AI in making hiring decisions.
  • A recent Gallup poll found 87% of Americans are concerned about foreign governments using AI to attack the United States.

Nothing in TFAIA would address the public's yearning for assurances that AI will not be used to discriminate against them, exploit their personal information, or produce unreliable information. These are fundamentally questions about uses and specific AI applications rather than frontier model development. This is why we have supported efforts by lawmakers to provide guardrails around high-risk uses rather than the technology itself. A prime example of this is the TAKE IT DOWN Act, a federal law enacted earlier this year.

State frontier model regulation will lead to a misallocation of limited resources

The process of legislating frontier model bills has attracted considerable resources from state legislators and stakeholders on all sides of the issue - resources that arguably could be put to more productive uses.

Yet once enacted, as in California, frontier model laws will require states to siphon limited public resources from other priorities. TFAIA charges the California Office of Emergency Services (OES) and the California Attorney General with distinct roles for reporting, oversight, and enforcement. OES, like the U.S. Department of Homeland Security, already has an extensive portfolio, ranging from disaster response to cybersecurity. OES and the AG will need to devote resources to implement TFAIA oversight, and, given limited budget flexibility, this could lead California to underinvest in AI-related areas within state purview that have a more tangible impact on residents. These include, for example, advancing cybersecurity operations to account for AI-related risks, enforcing the range of tech-neutral laws (such as consumer protection and anti-discrimination laws) to address AI-enabled harms within the state, and making decisions about AI use in state and local government.

But there are limitations on what California, or any other state, will be able to do to address the overarching concerns in frontier model legislation. The motivation for frontier model bills like TFAIA and New York's RAISE Act is to address significant risks to national security and public safety from the use of these powerful models. But it is unlikely that any one state, or even a group of states acting in concert, can develop the compute resources and expertise to provide meaningful oversight and engagement with industry experts to assess frontier model capabilities and safeguards.

State frontier model regulation will create fragmentation, inconsistencies, and burdens that stifle innovation

Model development is quintessentially a matter of interstate commerce. Models train on a vast array of data from across state borders and are designed for countless anticipated and unanticipated applications across the nation and the world. While states have a clear interest in protecting their own citizens, the Constitution limits the reach of state laws and ultimately the effectiveness of regulating something as inherently interstate as frontier model development.

Under U.S. law, states cannot compel regulation beyond their borders. State frontier model regulation is limited to conduct within the state, limiting its effectiveness and all but assuring a situation of fragmented, inconsistent laws that apply to the same developers and the same conduct. Even if states adopt legislation identical to TFAIA, limitations on extraterritoriality mean that each state will have its own reporting structures, guidelines, enforcement mechanisms, and so on. The California Attorney General and OES will not, and by law cannot, handle oversight for the entire nation.

This means that a proliferation of frontier model laws is a recipe for fragmentation, inconsistency, and compliance burden. The burdens may be felt initially by the largest developers, but as the cost of model development decreases, over time the impact will be felt increasingly by smaller developers. And it is simply not possible for a developer to pick and choose where to comply. AI model development is not the province of any one state - even one, like California, that is headquarters to many leading AI labs.

State frontier model regulation cannot address potential national security risks.

Perhaps most important, no state has the capacity or subject matter knowledge to assess potential risks to national security and mitigation strategies to address those risks. The federal government is already invested in this project through NIST and CAISI. Developers are already engaged in this ongoing project through iterative frontier model safety frameworks and the efforts of the Frontier Model Forum. There is a palpable risk that state regulation of frontier models - even through disclosure requirements, and even where certain requirements are tethered to federal efforts - will generate inconsistencies with the CAISI-led approach.

A decentralized, state-by-state approach to frontier model regulation is also a recipe for increasing the surface area for cybersecurity attacks. State regulators are likely to demand and store sensitive technical information. This information has great value to malicious actors, including foreign adversaries. Each additional point of collection and storage is another potential vulnerability, and malicious actors will seek out the weakest point. This itself could create new national security risks unrelated to the actual capabilities of the frontier models.

TFAIA is a Golden Opportunity for Congress to Take the Next Step

By signing TFAIA into law, Governor Newsom has given Congress a golden opportunity to advance a unified, national framework for frontier model oversight and risk management and to clarify the lines between federal and state lawmaking around AI.

Congress should pass legislation that creates a framework for assessing the national security and public safety risks of frontier AI models that also establishes clear baseline requirements for frontier AI model developers around disclosure, transparency, testing and evaluation, and governance. This will create a unified, national approach that builds on the federal government's resources and technical and national security expertise, centered in NIST's Center for AI Standards and Innovation (CAISI). And this approach would avoid the pitfalls noted above - fragmentation, compliance burdens, and inherent limitations of state authority.

This legislation should include express preemption of state lawmaking on frontier model development, recognizing that the issues of national security and public safety raised in these state bills can be assessed adequately only at the federal level.

At the same time, Congress should provide clarity about the appropriate role for states in regulating AI. Congress should recognize states' rights to continue to enforce laws of general applicability. Statutes and common law on consumer protection, civil rights and discrimination, and other areas should remain. States should also retain their ability to regulate the use of AI in areas that do not involve interstate commerce - for example, the use of particular AI-enabled tools by law enforcement, in state and local government operations, in public schools, and so on.

There is indeed a role for states to play in AI governance, oversight, and regulation without veering into matters of interstate commerce and national security. We appreciate efforts by California and other states to push the federal government to act. Now it is time for Congress to take the next step.

SIIA - Software & Information Industry Association published this content on October 17, 2025, and is solely responsible for the information contained herein.