If you are a public cloud or SaaS user, you have undoubtedly encountered shared responsibility models. A shared responsibility model distinguishes between your cloud/SaaS provider's responsibility (as the platform owner) and your responsibility (as the data owner and platform user). These distinctions are spelled out so both parties can plan and invest appropriately.
At a high level, you can typically summarize it like this: the cloud provider is responsible for security of the cloud, while the customer is responsible for security in the cloud.
But shared responsibility models are not limited to the cloud. Indeed, Microsoft has developed a publicly available AI shared responsibility model that delineates the responsibilities throughout the AI stack. Notably, this model differs from traditional cloud shared responsibility models in its complexity, as we now need to consider the responsibilities of LLM providers (such as OpenAI, Cohere, Mistral, Meta, and others).
Microsoft categorizes the AI stack into three distinct layers of functionality: the AI platform, the AI application, and AI usage. Each layer has specific responsibilities depending on the consumption type, such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS).
The big question is: As the data owner, how do you uphold your role in the shared responsibility model in each of these layers? Fortunately, data security posture management (DSPM) technology can help data owners manage their part of the AI shared responsibility model to protect critical enterprise data no matter which layer of the stack it interacts with.
DSPM plays a pivotal role in enhancing data governance for generative AI by addressing critical challenges such as data visibility, security, and compliance. DSPM solutions automatically discover and classify sensitive data utilized in generative AI models, including personal data, confidential business information, and intellectual property. This visibility is essential for understanding how data is used and ensuring it is handled appropriately.
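To make the idea concrete, here is a deliberately minimal sketch of pattern-based classification, the simplest building block of data discovery. The labels and regular expressions are illustrative assumptions, not how any particular DSPM product (Rubrik's included) is implemented; production classifiers layer machine learning and context analysis on top of patterns like these.

```python
import re

# Illustrative detectors only; real DSPM classifiers are far richer.
CLASSIFIERS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(record: str) -> set[str]:
    """Return the sensitive-data labels detected in a text record."""
    return {label for label, rx in CLASSIFIERS.items() if rx.search(record)}

for doc in [
    "Contact jane.doe@example.com about the roadmap.",
    "Customer SSN on file: 123-45-6789.",
    "Nothing sensitive here.",
]:
    print(classify(doc) or "clean", "->", doc)
```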
DSPM further aids in understanding data exposure for generative AI systems, providing a comprehensive view of potential risks and facilitating the implementation of effective controls. Additionally, DSPM can uncover "shadow data" (data used in AI projects without proper authorization or oversight). This proactive approach helps prevent the inadvertent use of inappropriate or risky data in generative AI models.
By continuously assessing the security posture of data used in generative AI, DSPM helps identify vulnerabilities and potential risks like data breaches, unauthorized access, and data poisoning. It effectively enforces data security policies, helping organizations ensure that data used in generative AI adheres to organizational standards and regulatory requirements.
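As a rough illustration of what "enforcing data security policies" can mean in practice, the sketch below checks a hypothetical asset inventory against one simple rule. The asset fields and the rule itself are assumptions for illustration, not any vendor's API.

```python
# Toy posture rule: confidential data must never be publicly reachable.
ASSETS = [
    {"name": "rag-index-prod", "sensitivity": "confidential", "public": False},
    {"name": "demo-corpus", "sensitivity": "confidential", "public": True},
]

def violations(assets):
    """Yield human-readable findings for assets that break the rule."""
    for asset in assets:
        if asset["sensitivity"] == "confidential" and asset["public"]:
            yield f"{asset['name']}: confidential data publicly exposed"

for finding in violations(ASSETS):
    print("FINDING:", finding)
```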
The AI platform layer provides the infrastructural underpinnings of the AI application. But it also contains the LLM training and security foundations that ultimately power and influence the capabilities of the AI application stack.
A significant aspect of the AI platform layer's security lies in comprehending and governing the LLM training data. Data unintentionally included in model training, fine-tuning, or augmentation can influence the output of the generative AI application and potentially lead to harmful responses.
To address this concern, users can leverage DSPM to filter input data and help ensure that only relevant data is used for generative AI systems.
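One way to picture that filtering step: quarantine any record that carries a sensitive-data label before it reaches the fine-tuning corpus. The function below is a minimal sketch under that assumption; the blocked labels and the stand-in classifier are hypothetical.

```python
from typing import Callable, Iterable

BLOCKED_LABELS = frozenset({"email", "us_ssn", "credit_card"})

def filter_training_data(
    records: Iterable[str],
    classify: Callable[[str], set[str]],
    blocked: frozenset = BLOCKED_LABELS,
) -> tuple[list[str], list[str]]:
    """Split records into a clean list for training and a quarantined list."""
    clean, quarantined = [], []
    for record in records:
        # Records carrying a blocked label go to human review, not the model.
        (quarantined if classify(record) & blocked else clean).append(record)
    return clean, quarantined

# Stand-in classifier; a real DSPM scan would supply the labels.
stub = lambda text: {"email"} if "@" in text else set()
clean, held = filter_training_data(
    ["general product FAQ text", "reach me at jane@example.com"], stub
)
print(len(clean), "clean,", len(held), "quarantined")
```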
Another potential issue at the AI platform layer is model theft. If you've invested time and resources in training and fine-tuning your models, you want to ensure they aren't inadvertently (or maliciously) used in cloud locations without the knowledge of your security teams. DSPM can detect model attachment to cloud VMs, for instance, and provide visibility into unexpected use of an organization's model.
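A simplified way to think about that detection: fingerprint your registered model weights, then flag any copy that turns up outside sanctioned storage. The sketch below walks a mounted volume; a real DSPM product would typically work through cloud provider APIs instead, and every path and name here is hypothetical.

```python
import hashlib
from pathlib import Path

MODEL_EXTENSIONS = {".safetensors", ".gguf", ".pt", ".onnx"}  # common weight formats

def fingerprint(path: Path, chunk: int = 1 << 20) -> str:
    """SHA-256 of a file, streamed so large model weights fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            digest.update(block)
    return digest.hexdigest()

def scan_volume(mount: Path, registered: dict[str, str], sanctioned: set[Path]):
    """Flag registered model weights found outside sanctioned locations."""
    findings = []
    for path in mount.rglob("*"):
        if path.suffix in MODEL_EXTENSIONS and path.is_file():
            digest = fingerprint(path)
            if digest in registered and not any(
                path.is_relative_to(s) for s in sanctioned
            ):
                findings.append((registered[digest], path))
    return findings

# e.g. scan_volume(Path("/mnt/vm-disk"),
#                  registered={"<sha256-of-weights>": "risk-model-v3"},
#                  sanctioned={Path("/mnt/model-registry")})
```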
The AI application layer provides adjacent applications access to AI capabilities and offers the user-consumed service or interface. The complexity of this layer varies depending on the application. For example, a simple standalone AI application acts as an interface to an API, taking text-based user prompts and passing them to the model for responses. More complex applications ground the user prompt with additional context, such as a persistence layer, a semantic index, or plugins that access additional data sources. Advanced applications may also interface with other applications and systems.
Data security for the AI application layer is crucial, since AI agents, plugins, and data connections pass in and out of various applications as they do their work. This involves identifying organization-wide shared information or publicly exposed information that is susceptible to risk and manipulation.
DSPM can help manage those interactions to ensure that only the intended consumers of the data have the appropriate access.
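In miniature, that access management can look like a policy table mapping each AI component to the data sources (and maximum sensitivity) it may read. The component names, sources, and sensitivity ranks below are hypothetical, a sketch of the idea rather than a product feature.

```python
# Illustrative policy: which AI components may read which sources,
# and at what maximum sensitivity.
POLICY = {
    "support-chatbot": {"kb-articles": "public", "tickets": "internal"},
    "finance-copilot": {"ledger": "confidential"},
}

RANK = {"public": 0, "internal": 1, "confidential": 2}

def authorize(consumer: str, source: str, sensitivity: str) -> bool:
    """Permit a plugin or connector to read a source only if policy grants
    access at (or above) the source's current classification."""
    allowed = POLICY.get(consumer, {})
    return source in allowed and RANK[sensitivity] <= RANK[allowed[source]]

print(authorize("support-chatbot", "tickets", "internal"))    # True
print(authorize("support-chatbot", "ledger", "confidential")) # False
```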
The AI usage layer describes how the AI capabilities are ultimately used and consumed.
Generative AI offers a fundamentally different human/computer interface than APIs, command prompts, and graphical user interfaces (GUIs). These dynamic interactions allow the computer to adapt to the user and their intent.
In contrast to interactions at the other two layers (which primarily require users to learn the system design and functionality), the interactivity inherent to the AI usage layer allows user input to significantly influence the system's output. Security guardrails are therefore essential to protect the people, data, and business assets that connect to this layer of the AI stack.
DSPM helps manage and control the rules that grant access to data utilized in generative AI. That means only authorized individuals and systems can interact with sensitive information at the AI usage layer.
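One common enforcement point at this layer is retrieval: before retrieved documents are injected into the model's context, drop any the requesting user is not entitled to see. This is a minimal sketch of that idea; the document fields and group names are assumptions for illustration.

```python
def build_context(user_groups: set[str], retrieved_docs: list[dict]) -> str:
    """Keep only documents whose ACL shares a group with the requesting user."""
    permitted = [d for d in retrieved_docs if user_groups & set(d["acl"])]
    return "\n\n".join(d["text"] for d in permitted)

docs = [
    {"text": "Public onboarding guide", "acl": ["everyone"]},
    {"text": "Unreleased earnings draft", "acl": ["finance"]},
]
# An engineer sees only the public document; finance content is filtered out.
print(build_context({"everyone", "engineering"}, docs))
```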
Furthermore, DSPM can help bolster organizational compliance with various data privacy regulations, such as GDPR, CCPA, and HIPAA. DSPM equips organizations with the tools to help them ensure that sensitive data used in generative AI is handled in accordance with these regulations.
To learn more about integrating DSPM into your generative AI strategy, read Responsible AI With Rubrik DSPM.