03/26/2025 | News release | Archived content
The Tenable Cloud AI Risk Report 2025 reveals that 70% of AI cloud workloads have at least one unremediated critical vulnerability, and that AI developer services are plagued by risky permissions defaults. Here's what to know as your organization ramps up its AI game.
With AI bursting out all over, these are exhilarating times. Developers' use of self-managed AI tools and cloud-provider AI services is on the rise as engineering teams rush to the AI front. This uptick, combined with the fact that AI models are data-thirsty - requiring huge amounts of data to improve accuracy and performance - means ever more AI resources and data are landing in cloud environments. The million-dollar question for cybersecurity teams is: What is AI growth doing to my cloud attack surface?
The Tenable Cloud AI Risk Report 2025 by Tenable Cloud Research revealed that AI tools and services are indeed introducing new risks. How can you prevent such risks?
Let's look at some of the findings and related challenges, and at proactive AI risk reduction measures within easy reach.
Using data collected over a two-year period, the Tenable Cloud Research team analyzed in-production workloads and assets across cloud and enterprise environments - including Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP). We sought to understand adoption levels of AI development tooling, frameworks and AI services, and to carry out a reality check on emerging security risks. The aim? To help organizations be more aware of AI security pitfalls. In parallel, our research helps fuel Tenable's constantly evolving cloud-native application protection platform (CNAPP) to best help our customers address these new risks.
Let's explore two of the findings - one in self-managed AI tooling, the other in AI services.
An unremediated critical CVE in any cloud workload is of course a security risk that should be addressed in accordance with the organization's patch and risk management policy, with prioritization that takes into account impact and asset sensitivity. Such a high incidence of critical vulnerabilities in AI cloud workloads is an alarm bell. AI workloads potentially contain sensitive data; even training and testing data can contain real information - such as personal information (PI), personally identifiable information (PII) or customer data - related to the nature of the AI project. If exploited, exposed AI compute or training data can result in data poisoning, model manipulation and data leakage. Teams must overcome the challenges of alert noise and risk prioritization to make mitigating critical CVEs, especially in AI workloads, a strategic mission.
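The prioritization logic described above can be sketched in a few lines. This is a minimal illustration, not Tenable's scoring model: the field names, weights and findings below are all hypothetical, chosen only to show how severity, AI context and data sensitivity might combine into a triage order.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    # Hypothetical fields for illustration; not from the report.
    cve_id: str
    severity: str          # "critical", "high", "medium" or "low"
    workload_is_ai: bool   # e.g. runs training jobs or hosts model endpoints
    data_sensitivity: int  # 0 (none) .. 3 (PII / customer data)

def priority(f: Finding) -> int:
    """Additive score: severity dominates, then AI context and data sensitivity."""
    sev = {"critical": 40, "high": 20, "medium": 10, "low": 0}[f.severity]
    return sev + (15 if f.workload_is_ai else 0) + 5 * f.data_sensitivity

findings = [
    Finding("CVE-2024-0001", "high", False, 0),
    Finding("CVE-2024-0002", "critical", True, 3),
    Finding("CVE-2024-0003", "critical", False, 1),
]

# Triage queue: highest-risk findings first.
queue = sorted(findings, key=priority, reverse=True)
```

With these made-up weights, a critical CVE on an AI workload holding sensitive data lands at the top of the queue ahead of an otherwise identical finding on a non-AI workload, which is the "especially in AI workloads" emphasis in practice.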
Securing identities and entitlements is a challenge in any cloud environment. Overprivileged permissions are even riskier when embedded in the building blocks of AI services, which often involve sensitive data. You must be able to see a risk to fix it. Lack of visibility in cloud and multicloud environments, siloed tools that prevent seeing risks in context, and reliance on cloud-provider security all make it difficult for organizations to spot and mitigate risky defaults and the other access risks that attackers are looking for.
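One concrete way to start spotting risky defaults is to scan policy documents for overly broad grants. The sketch below checks IAM-style JSON statements for bare wildcard actions or resources; the policy document is a made-up example, not an actual default from any cloud provider, and a real CNAPP does far more contextual analysis than this.

```python
import json

# Fabricated example policy for illustration only.
policy = json.loads("""
{
  "Statement": [
    {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::models/*"},
    {"Effect": "Allow", "Action": "*", "Resource": "*"}
  ]
}
""")

def risky_statements(doc):
    """Flag Allow statements whose Action or Resource is a bare wildcard."""
    flagged = []
    for stmt in doc.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            flagged.append(stmt)
    return flagged
```

Here only the second statement is flagged: scoping the first statement's resource to a specific bucket path keeps it off the list, which is exactly the least-privilege habit that counters risky defaults.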
The Artificial Intelligence Index Report 2024, published by Stanford University, noted that organizations' top AI-related concerns include privacy, data security and reliability; yet most have so far mitigated only a small portion of the risks. Good security best practices can go a long way to getting ahead of AI risk.
Here are three basic actions for reducing the cloud AI risks discussed above:
Ensuring strong AI security for cloud environments requires identity-intelligent, AI-aware cloud-native application protection to manage the emerging risks with efficiency and accuracy.
Cloud-based AI has its security pitfalls, with hidden misconfigurations and sensitive data that make AI workloads vulnerable to misuse and exploitation. Applying the right security solutions and best practices early on will empower you to enable AI adoption and growth for your organization while minimizing its risk.