Photo: Cemile Bingol/DigitalVision Vectors via Getty Images
Commentary by Yinuo Geng
Published January 13, 2025
Despite much discussion and no shortage of bills being introduced, a comprehensive approach to managing risks from artificial intelligence (AI) remains elusive in the United States. California governor Gavin Newsom's decision to veto SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, closed the most recent chapter in the high-profile debate over how to regulate AI development, but it did little to diminish ongoing pressure to do so. A common pattern has emerged: a regulation is proposed; a flurry of back-and-forth commentary about the validity of the proposal follows; perspectives harden; and, so far, the proposal then fails to become law. Rather than scrambling to react to each new proposal with support or horror, it would be beneficial to have an objective framework for discussion. The most straightforward and impactful organizing principle for evaluating AI regulations would take advantage of the AI development lifecycle.
Without a structure for comparison, announcements of AI regulatory proposals risk becoming primarily political talking points, to the detriment of effective management of real AI risks. The race to propose regulations has also involved little consideration of how the various rules could work together without contradictions or overlaps. It is increasingly evident that a more structured approach is needed to evaluate, compare, and design proposals for regulating AI and to bring a degree of coherence to the wide variation in AI governance. This is especially important because AI encompasses many techniques, enables many technologies, and can be embedded into many existing products and workflows as well as used to create new ones.
The AI development lifecycle can be thought of as the entire end-to-end process through which data is collected, techniques and models are designed to uncover patterns in that data, content or other output is created using the resulting models, and that content or output is then circulated in some manner to be consumed by its intended audiences. Different stages of this lifecycle involve different actors. The outputs of each stage also serve distinct goals, with the outputs of later stages targeting increasingly narrow objectives and use cases; "general purpose" AI, for example, by definition exists toward the beginning of the lifecycle, before models are applied to specific (for example, industry-specific) applications. Consequently, the value of using the AI development lifecycle to evaluate, compare, and design regulations is that it provides a systematic structure for distinguishing who would be held accountable for remedies and who would be most affected. More importantly, given the implications that AI-driven technologies, processes, products, and techniques will have across a wide array of industries and activities, it is highly unlikely that a single law will be sufficient to regulate the creation and use of AI. Instead, there will be a collection of interacting laws. The AI development lifecycle can help organize and bring unity to the ways in which these future regulations work together.
The AI development lifecycle can be roughly grouped into the following stages: the collection and preparation of data; the design and training of models that uncover patterns in that data; the generation of content or other outputs from those models; and the circulation of those outputs to their intended audiences.
Proposals for regulating AI can cover multiple stages of the AI development lifecycle, and actors at each stage often try to expand their role into other parts of the lifecycle. Nonetheless, looking at AI regulation through this lens provides a greater degree of specificity about appropriate actions, brings objectivity to some of the most passionate debates over how the true risks of AI arise, and highlights the actors that can be held accountable.
As an example, transparency is frequently touted as something to enforce, but the devil is in the details of how to do so. Taking the AI development stages into account can help. Transparency at the circulation stage is largely about giving those who view an AI-generated output a clear understanding of its AI elements, while transparency at the development stage is more focused on the ability to assess the quality of the training dataset. Depending on the outcomes or risks targeted, future rules will need to establish actions aimed at the right kind of transparency.
Then there is the debate on AI risks that seems to overwhelm all others: whether governments should focus on addressing long-term "existential" threats from frontier models or on addressing existing real-world harms such as bias. Using the development lifecycle structure, the debate can move from a "which is riskier" argument to a discussion of a multilayer approach across actors and stages that defends against both future threats and currently known harms. In fact, risks from AI will frequently require mitigation tactics that place requirements on actors across all stages of the development lifecycle.
Pressure for AI regulation has now been building for some time. Yet the push for governance risks becoming a red herring if it lacks a structure for managing the long-term trade-offs in the ongoing creation and use of AI. Effective management of the various aspects of AI will require multiple sets of laws, rules, and norms to account for the diversity and complexity of AI's risks, opportunities, harms, and objectives. If there is one thing that is agreed upon, it is that AI as a technology will continue to evolve; consequently, a structure like the AI development lifecycle can enable a more objective framing of how to assess new rules and new uses.
Yinuo Geng is an adjunct fellow (non-resident) with the Strategic Technologies Program at the Center for Strategic and International Studies in Washington, D.C., and a vice president with Gartner, Inc. The views expressed are her own.
Commentary is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).
© 2025 by the Center for Strategic and International Studies. All rights reserved.