
SIIA Raises Ongoing Concerns About NY’s Updated Responsible AI Safety and Education (RAISE) Act


June 11, 2025
by Paul Lekas
Policy

On Monday, legislators in New York unveiled a new version of the Responsible AI Safety and Education (RAISE) Act. The bill purports to advance safety by regulating the most advanced AI models, known as frontier models. SIIA has been tracking this legislation closely and has expressed a series of concerns about the bill in an op-ed published in the Albany Times Union in April and in a coalition letter and separate correspondence to New York legislators in May.

The changes made to the RAISE Act are improvements, but they do nothing to address the fundamental concerns we have raised. Those concerns stem from the definitions of "critical harm" and "safety incident." As those terms are defined, the bill would impose requirements on AI developers that are impossible to implement in any reliable manner. This is because AI models of the type covered by this bill are intended to be used and modified downstream; indeed, that adaptability is one of the fundamental reasons for open source AI models. Developers can convey their intentions about how models should be used and can build in safeguards, as US-based developers are doing. But it is not possible to track, assess, and evaluate all downstream uses, or to prevent downstream uses that may be unintended or may circumvent safeguards.

The safety and security of AI models undoubtedly have important implications for society writ large. Critically, however, the absence of federal or state legislation on frontier model safety and security should not be taken as an indication that industry and policymakers have done nothing. Significant investments, research, and projects are underway in many forums: within companies, through industry-academia partnerships, through international technical standards organizations, through public-private collaboration with the UK AI Security Institute and its US counterpart (re-branded this week as the Center for AI Standards and Innovation), and through international efforts such as the Seoul Frontier AI framework, the Bletchley Park Declaration, and the G7 Hiroshima Process.

Legislation like the RAISE Act carries significant stakes for the future of AI development in New York and globally. As a society, we need to continue investing in the development of measurement science, standards, and frameworks that advance the security of AI systems while also promoting continued innovation. The RAISE Act would create barriers to achieving this vision without improving the safety and security of the AI ecosystem.
