Using the AI Development Lifecycle as an Organizing Principle for AI Regulations


Commentary by Yinuo Geng

Published January 13, 2025

Despite much discussion and no shortage of bills introduced, a comprehensive approach to managing the risks of artificial intelligence (AI) remains elusive in the United States. California governor Gavin Newsom's decision to veto SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, closed the most recent chapter in the high-profile debate over how to regulate AI development, but it did little to diminish the ongoing pressure to do so. A common pattern has emerged: a regulation is proposed, a flurry of commentary and argument over its validity follows, perspectives harden, and, so far, the proposal fails to become law. Rather than scrambling to react to each new proposal with support or horror, it would be better to have an objective framework for the discussion. The most straightforward and impactful organizing principle for evaluating AI regulations is the AI development lifecycle.

Without a structure for comparison, announcements of AI regulatory proposals risk becoming primarily political talking points, to the detriment of effectively managing real AI risks. The race to propose regulations has also seen little consideration of how the various rules could work together without contradictions or overlaps. It is increasingly evident that a more structured approach is needed to evaluate, compare, and design proposals for regulating AI and to bring a degree of coherence to the wide variation in AI governance. This is especially important because AI encompasses many techniques, enables many technologies, and can be embedded into many existing products and workflows as well as used to create new ones.

The AI development lifecycle can be thought of as the entire end-to-end process through which data is collected, techniques and models are designed to uncover patterns in that data, content or other output is created using the resulting models, and that content or output is then circulated in some manner for consumption by its intended audiences. Different stages of this lifecycle involve different actors. The outputs of each stage also serve distinct goals, with outputs of later stages increasingly targeting narrower objectives and use cases; "general purpose" AI, by definition, exists toward the beginning of the lifecycle, before the models are applied to narrower, often industry-specific, applications. Consequently, the value of using the AI development lifecycle to evaluate, compare, and design regulations is that it offers a systematic structure for distinguishing who would be held accountable for remedies and who would be most affected. More importantly, given the implications that AI-driven technologies, processes, products, and techniques will have across a wide array of industries and activities, it is immensely unlikely that a single law will be sufficient to regulate the creation and use of AI. Instead, there will be a collection of interacting laws. Using the AI development lifecycle can help organize and bring unity to the ways in which future regulations work together.

The AI development lifecycle can be roughly grouped into the following stages:

  • Development and design of AI models and technologies.
    This early stage is where new models and techniques are created and where cutting-edge research into "general purpose" and "foundation" AI models is conducted. The actors involved include researchers, in academia and elsewhere, and "pure AI" companies focused on exploring and experimenting with AI techniques such as large language models, neural networks, and reinforcement learning. These actors are often motivated by the innovative potential of their work to push the limits of what is currently technologically feasible. While it is possible to begin gauging the degree of risk at this stage, many significant follow-on applications, both positive and negative, are not yet discoverable.
  • Application of AI models and technologies and creation of content or outputs with them.
    This is the stage where general-purpose AI techniques are put to targeted applications that are narrow and can be industry-specific or even organization-specific. Actors here are the organizations that leverage AI models to solve a tangible problem or create a specific process or product, often requiring a large shift in business models to do so. For example, a travel agency can use genAI-based chatbots to highlight new beach vacation packages to clients who have previously gone on Caribbean cruises, a sports stadium can use facial recognition at entry points, or a pharmaceutical company can use AI to find new ways of combining biochemicals. At this stage, the diversity of ways to use AI is broad, and so are the risks. Deep expertise in the industry or organization in question is valuable for understanding the impacts of AI applications, impacts that overarching regulations cannot effectively account for. Some have described AI as analogous to electricity when emphasizing this stage. At its simplest, restricting certain AI applications, as in the recently published Framework to Advance AI Governance and Risk Management in National Security that accompanied President Biden's national security memorandum on AI, is the most straightforward way to regulate this stage.
  • Circulation and use of content or outputs created with AI technologies (e.g., for decision-making or for influence).
    This is the stage where outputs from AI are dispersed in some form. The dispersal can be internal to an organization, as when it uses AI in its decision-making, or it can involve third-party actors, such as social media platforms, that circulate AI-generated outputs. Actors at this stage therefore include the organizations and individuals that create output using AI, when they also distribute that output or choose to use it without humans in the loop, as well as the owners of the platforms on which such content circulates. In the case of deepfakes, for example, this is the stage where there are significant debates (and regulatory differences across jurisdictions) over how to regulate the sharing of such altered images, and who can enable that sharing, not just their creation.
  • Absorption and consumption of AI-generated content or outputs by specific audiences.
    This final stage concerns how AI-generated output is consumed, which is where audiences, end users, consumers, and individuals enter the discussion. Typical actors at this stage are the individuals and groups in broader society that view and engage with AI-generated output, knowingly or unknowingly. Though less prominent in current debates, teaching audiences how to account for AI in the products they interact with will be crucially important, and education will play a key role in enabling audiences to understand and evaluate AI-generated outputs. Focusing on this stage will also mean thinking through the legal tools that individuals should have access to when they feel personally harmed by AI.

Proposals for regulating AI can cover multiple stages of the AI development lifecycle, and actors at each stage often try to expand their role into other parts of it. Nonetheless, looking at AI regulation through this lens provides greater specificity about appropriate actions, brings objectivity to some of the most heated debates over where the true risks of AI arise, and highlights the actors that can be held accountable.

As an example, transparency is frequently touted as something worth enforcing, but the devil is in the details of how to do so. Taking the AI development stages into account can help. Transparency at the circulation stage is mostly about giving those who view an AI-generated output a clear understanding of its AI elements, while transparency at the development stage is more focused on the ability to assess the quality of the training data. Depending on the outcomes or risks targeted, future rules will need to establish actions aimed at the right kind of transparency.

Then there is the debate on AI risks that seems to overwhelm all others: whether governments should focus on addressing long-term "existential" threats from frontier models or on addressing existing real-world harms like bias. Using the development lifecycle structure, the debate can move from an argument over which is riskier to a discussion of a multilayered approach, across actors and stages, that defends against both future threats and currently known harms. In fact, risks from AI will frequently require mitigation tactics that place requirements on actors across all stages of the development lifecycle.

Pressure for AI regulation has been building for some time. Yet the push for governance risks becoming a red herring if it lacks a structure for managing the long-term trade-offs in the ongoing creation and use of AI. Effective management of the various aspects of AI will require multiple sets of laws, rules, and norms to account for the diversity and complexity of AI's risks, opportunities, harms, and objectives. If there is one thing that is agreed upon, it is that AI as a technology will continue to evolve; consequently, a structure like the AI development lifecycle can enable a more objective framing for assessing new rules and new uses.

Yinuo Geng is an adjunct fellow (non-resident) with the Strategic Technologies Program at the Center for Strategic and International Studies in Washington, D.C., and a vice president with Gartner, Inc. The views expressed are her own.

Commentary is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).

© 2025 by the Center for Strategic and International Studies. All rights reserved.
