ITIF - The Information Technology and Innovation Foundation

January 26, 2025 | News Release

Texas’s AI Law Won’t Deliver the Accountability It Promises

Everything is bigger in Texas, the saying goes, and the Texas Responsible AI Governance Act (TRAIGA) is no exception. Introduced in December 2024 to address algorithmic bias, the sweeping bill would impose stringent state-level restrictions on AI systems if passed into law. But its heavy-handed approach risks creating more problems than it solves, prioritizing bureaucratic hurdles over meaningful progress in fairness and accountability.

TRAIGA casts a wide net, both in what it regulates and to whom it would apply. It defines an AI system as one that uses machine learning and related methods to train models capable of performing tasks typically associated with human intelligence, such as visual recognition, language processing, and content creation. These systems are deemed "high-risk" if they are used in consequential decisions: those affecting access to housing, healthcare, employment, or critical utilities such as water and electricity. Developers of high-risk systems would be required to submit detailed reports outlining how their models might harm protected groups and the steps taken to mitigate those risks. Distributors, the entities that bring these tools to market, would be obligated to ensure compliance with the law's standards and potentially withdraw non-compliant products. Organizations deploying the technology would face semiannual impact assessments for each use case and would have to update them whenever an AI system changes significantly. Compounding this comprehensive approach, the bill would establish a Texas AI Council, a new centralized authority with the power to issue ethical guidelines and rules for AI deployment across the state.

To its credit, TRAIGA seeks to address an important issue. Preventing AI systems from perpetuating or amplifying bias is a worthy goal, but intent alone does not make good policy. As written, TRAIGA's sweeping mandates risk prioritizing process over substance, creating layers of compliance that obscure rather than ensure fair outcomes. Worse, it threatens to exacerbate the fractured patchwork of state and federal AI governance, actively undermining efforts to build a unified and effective approach to tackling bias and discrimination.

First, TRAIGA's approach is flawed because it hinges on transparency of process, assuming that exhaustive reports, risk documentation, and assessments will translate into meaningful accountability. But paperwork alone doesn't ensure progress. The bill tasks the Texas attorney general with scrutinizing these materials and enforcing compliance, but given the scope of oversight required, expecting that office to have the resources or expertise for such a monumental task is fanciful. Compliance would become a hollow ritual: developers churn out paperwork, but no one meaningfully interrogates it or ensures it leads to progress. A better approach would support the use of performance metrics for any high-risk AI system procured by the state government, including accuracy and error rates broken down by age, race, and gender. Setting a performance standard for state procurement would promote better accuracy rates across all sectors of the economy and ensure agencies don't waste tax dollars on ineffective systems or ones with significant performance disparities. For commercial AI systems, states should avoid setting pre-deployment performance metrics. The federal government is already working out how best to evaluate AI systems, a task best handled at the national level to ensure consistency. But states have a valuable role to play in informing those efforts by tracking where AI systems fail, such as instances of bias or unintended harm, and contributing data and insights that strengthen national evaluation toolkits.

Second, TRAIGA's plan to create a centralized Texas AI Council recycles an idea that has repeatedly fallen flat. Proposals to vest AI oversight in a single body, from the city level to the national level, have run aground for both practical and conceptual reasons. For instance, New York City's Automated Decision Systems Task Force, designed to guide government use of AI in areas like policing and education, collapsed in 2019 after years of bureaucratic delays and limited access to critical data, ultimately producing no actionable recommendations. If a narrowly scoped initiative like New York City's couldn't overcome those challenges, it's hard to see how the state of Texas, with a far more ambitious mandate, expects to succeed. The lesson from the national level is similar: Congress dismissed calls in 2023 for a single AI regulator, recognizing that no one body could effectively manage such a wide range of applications. The state level is no exception to these dynamics; one-size-fits-all oversight cannot handle the complexities of AI across diverse sectors. Instead of replicating past failures, TRAIGA should focus on strengthening the sector-specific agencies that already understand the unique risks in each domain. That approach would bolster AI governance where it matters most, rather than adding another layer of centralized bureaucracy ill-suited to the task.

Finally, TRAIGA deepens the disarray of America's fragmented AI governance landscape, pulling the country further from a unified national direction. State privacy laws already show how costly and confusing a patchwork approach can be for businesses and consumers. But while privacy laws at least follow some predictable patterns along red- and blue-state lines, AI regulation is more chaotic. Rep. Giovanni Capriglione has pitched TRAIGA as a "red state model," but the bill has more in common with blue-state efforts like Colorado's than with Utah's more restrained approach, proof that there is no real coherence in how states are governing AI. For businesses, this incoherence translates into even greater uncertainty and higher costs. Without a unified framework, companies face not just compliance headaches but a regulatory minefield where expectations differ wildly between states. This fragmented approach also undermines the United States' ability to lead globally on AI governance, allowing other nations to set standards in ways that could harm American innovators.

As it stands, TRAIGA is more of a cautionary tale than a guiding light. Far from setting a bold model for others to follow, it risks becoming a case study in how to obscure progress, entrench bureaucracy, and fail to deliver on its promises.