Doris O. Matsui

06/05/2025 | Press release

MATSUI LEADS CA COLLEAGUES IN OPPOSING AI MORATORIUM IN RECONCILIATION BILL

WASHINGTON, D.C. - Today, Congresswoman Doris Matsui (CA-07), Ranking Member of the House Energy and Commerce Subcommittee on Communications and Technology, led a group of her California colleagues in sending a letter to Senate leadership, strongly objecting to the section of H.R. 1 that would impose a ten-year moratorium on state and local enforcement of their own artificial intelligence laws and regulations.

"This moratorium's assumption-that the United States will be unable to lead the world in AI if states identify and implement measures to protect their citizens from potential AI harms-is misguided," wrote the lawmakers. "It wrongly accepts the premise that identifying and addressing AI-specific risks and harms and imposing guardrails is counterproductive to being the world's AI leader. Nothing is further from the truth. Common sense AI guardrails can propel innovation by building trust with consumers and future users, while promoting a fair, open, and competitive playing field."

In the absence of a federal AI regulatory framework, California and other states across the nation are embracing common-sense safeguards that ensure innovation and competition can continue to thrive. As AI tools grow more sophisticated and more widely deployed, these state measures are crucial to promoting safety and building trust with consumers. The House-passed moratorium, spearheaded by Republicans, would strip states of their authority to respond to new and evolving AI risks, freezing vital consumer protections for a full decade.

"We should not place consumers in harm's way by pausing for a decade the good work that states have done and will continue to do," the lawmakers continued. "Instead, let us work together in a bicameral, bipartisan fashion to create smart, tailored, and consensus-driven legislative solutions that empower Americans' use of AI and automated decision systems."

Full text of the letter can be found below.

Dear Majority Leader Thune, Minority Leader Schumer, Chairman Cruz, and Ranking Member Cantwell:

We are writing to express our strong objections to the section of H.R. 1 that would impose a sweeping ten-year moratorium on state and local enforcement of their own artificial intelligence (AI) laws and regulations.

As part of being the global AI leader, the United States must take the lead on identifying and setting common-sense guardrails for responsible and safe AI development and deployment. To prevent states, including our state of California, from enforcing state AI regulations that provide such guardrails, particularly without any meaningful federal alternative, is inconsistent with the goal of AI leadership. This moratorium's assumption, that the United States will be unable to lead the world in AI if states identify and implement measures to protect their citizens from potential AI harms, is misguided. It wrongly accepts the premise that identifying and addressing AI-specific risks and harms and imposing guardrails is counterproductive to being the world's AI leader. Nothing is further from the truth. Common-sense AI guardrails can propel innovation by building trust with consumers and future users, while promoting a fair, open, and competitive playing field.

California is the fourth-largest economy in the world in part because innovative technology companies, including 32 of the world's 50 leading AI companies, call the state home. As a hub of AI activity, our state has been a national leader in ensuring that innovation and competition thrive alongside common-sense safeguards, starting with transparency. In our increasingly digital world, AI and other emerging technologies are rapid disruptors. To place a ten-year hold on state and local enforcement of their own AI laws, especially without federal alternatives, exposes Americans to a growing list of harms as AI technologies are adopted across sectors from healthcare to education, housing, and transportation. The regulatory gap created by the AI moratorium in H.R. 1 would decimate the good work that California and other states, led by both Democrats and Republicans, have done, such as:

  • requiring transparency regarding training data or the use of AI to communicate with patients in medical settings;
  • giving performers and their families rights over digital replicas of their likenesses;
  • protecting American artists' voice and likeness from unauthorized AI impersonations;
  • requiring employers to ensure AI-enabled employment decisions comply with civil rights laws; and
  • requiring mental health platforms to disclose to users that they are interacting with an AI mental health chatbot, not a human therapist.

These examples and other proposed state legislation exemplify the mounting desire among AI experts and the American public to provide guardrails that promote AI safety, trust, and transparency. This is an extension of bipartisan concerns over online safety and manipulative algorithms, issues that, if left unchecked, leave Americans vulnerable to harms impacting their health, their jobs, their education, and ultimately, their lives. Now is the time for Congress to work on bipartisan legislation to address these harms. The House Republican ten-year moratorium, by contrast, would gut protections for the very people we represent.

This bill provision is not limited to state laws and regulations of new and emerging AI. It also imposes a ten-year moratorium on laws and regulations governing "automated decision making systems," a term that arguably covers any computer processing.

Furthermore, the provision covers state and local regulations of their own use of AI and of automated decision making systems, meaning that states and localities cannot impose procurement requirements on AI and computer systems that differ from those imposed on other technologies. Under this provision, they would not be allowed, for example, to adopt regulations imposing safeguards on education technology used in public schools or on AI systems they want to use to improve the provision of government services. That makes no sense at all.

Late in the process, House Republicans added an exception to the ten-year moratorium for state and local laws to the extent they impose criminal penalties. But that exception only underscores the absurd breadth of the ten-year moratorium. Why should the federal government incentivize states and localities to adopt criminal penalties to deal with harms from AI models and systems, and automated decision-making systems, in instances where a civil penalty, breach of contract claim, injunctive relief, or some other non-criminal remedy is more appropriate to address the problem at hand?

We have already seen an outpouring of opposition to this moratorium, including bipartisan opposition from state attorneys general, state legislators, voters, and over 140 consumer advocacy, online safety, and civil rights groups. The House Bipartisan AI Taskforce last Congress acknowledged the "risks" of enacting an AI moratorium on state activity and, instead, recommended that Congress "commission a study to analyze the applicable federal and state regulations and laws that affect the development and use of AI systems across sectors." We should not place consumers in harm's way by pausing for a decade the good work that states have done and will continue to do. We must learn from them. After all, we have had the opportunity to learn from five years of state efforts to criminalize the sharing of non-consensual intimate imagery, real and AI-generated, to produce the TAKE IT DOWN Act that President Trump recently signed into law. Now is not the time to deny Congress the critical insight our states provide as laboratories of democracy.

Additionally, this moratorium is procedurally deficient, as it bears no relationship to the federal budget. House Republicans stretch credulity beyond its breaking point when claiming this moratorium is necessary to effectuate their reconciliation bill's $500 million for the Department of Commerce to update its IT and cybersecurity systems. Under the Supremacy Clause, states cannot pass laws that restrict or impose obligations on the federal government, including the Department of Commerce and federal procurement rules governing agency IT systems. Consequently, the moratorium does not impact the federal budget and must fall out as an "extraneous matter" prohibited, under the Senate Byrd Rule, from inclusion in a reconciliation bill.

As you take up the House Republicans' reconciliation bill for consideration, we urge you to remove the AI moratorium provision. Instead, let us work together in a bicameral, bipartisan fashion to create smart, tailored, and consensus-driven legislative solutions that empower Americans' use of AI and automated decision systems. We can learn from what states like California, New York, Tennessee, Utah, and many others are doing to leverage the benefits of AI technologies while protecting consumers from their harms.

# # #
