The technology world is abuzz about AI/ML, arguably the most disruptive technology to reach society since the smartphone. In fact, Gartner estimates that the number of companies using open-source AI directly will increase tenfold by 2027.
While this rapid advance is fueling quantum leaps in innovation, it is also drawing increasing scrutiny from regulatory bodies worldwide, which demand unprecedented rigor, transparency, and accountability from development teams deploying AI/ML in production environments. This accountability is broader and more important than ever, because teams can now use models without directly deploying them, accessing powerful AI through external APIs over a simple internet connection (ChatGPT, Gemini, etc.).
In this article, we'll discuss the guardrails governments are putting around AI/ML development, and how you can maintain a proactive stance to protect your business from regulatory fallout.
AI/ML development happens at breakneck speed and requires vast datasets, complex algorithms, and continuous model training and evaluation. This significantly widens the scope of risk assessment in software development. Given the sheer power of AI/ML, it's no surprise that regulators are intensifying their scrutiny.
Beyond traditional concerns like code quality and data security, regulators are now assessing intelligent agents for their potential harmful impacts, ethical implications, and societal consequences. For example, regulators have data privacy and intellectual property concerns around how AI/ML models are trained, and they worry about the dangers those models can create, such as discriminatory outcomes (algorithmic bias) and misinformation (AI hallucinations).
In Europe, regulators have formalized comprehensive regulations requiring you to thoroughly assess and certify your AI-based solutions before they can be put into use. There are serious ramifications for failing to comply with the European Union Artificial Intelligence Act, which classifies AI systems by risk level and imposes correspondingly strict requirements, with financial penalties reaching up to 7% of global annual turnover for the most severe violations.
In the US, AI regulations are being defined at both the state level (California, Colorado, New York, Texas, and Washington) and the federal level (Executive Order 14110). Other countries are similarly concerned and are introducing their own requirements, building on these existing laws as well as ISO 42001. With all these demands to demonstrate compliance, the potential headache of a whole new supply chain model can be daunting.
Imagine a world where the system would not let you promote an application into production unless it had passed all the tests (yes, I mean all the tests!). Exceptions could be recorded and tracked, and their approvals documented, in a single system.
Continuous compliance automation (CCA) offers the solution. By applying controls and enforcing regulations from beginning to end, from initial design to production release, we can eliminate the need for point-in-time checks (i.e., audits). Centrally managed evidence sets the foundation: all your tools and procedures feed a single source of truth, enabling stakeholders (developers, AppSec teams, security personnel, auditors, and business owners) to see how the software complies with specific regulatory requirements. This also benefits ML engineers and data scientists, who can trust that they're working with models that have been vetted and conform to organizational policy. An example of a foundational solution that can enable CCA is JFrog's Evidence Collection, and the sketch below shows the core idea of such a gate.
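To make the idea concrete, here is a minimal Python sketch of a promotion gate, under some loudly stated assumptions: the control names, the `Evidence` and `Waiver` records, and the in-memory lists are all illustrative inventions, not JFrog's actual API, and a real CCA implementation would pull evidence from a centrally managed store inside the CI/CD pipeline.

```python
from dataclasses import dataclass, field

# Hypothetical set of controls a promotion policy might require;
# a real policy would be defined by your compliance team.
REQUIRED_CONTROLS = {"unit-tests", "sast-scan", "model-bias-eval", "pii-scan"}

@dataclass
class Evidence:
    control: str   # which required control this evidence satisfies
    passed: bool   # outcome recorded by the tool that ran the check
    source: str    # tool or pipeline stage that produced the evidence

@dataclass
class Waiver:
    control: str
    approver: str  # who signed off on waiving this control
    reason: str    # documented justification, retained for auditors

@dataclass
class PromotionDecision:
    allowed: bool
    missing: list = field(default_factory=list)  # controls with no evidence or waiver
    waived: list = field(default_factory=list)   # waivers actually relied upon

def promotion_gate(evidence, waivers) -> PromotionDecision:
    """Allow promotion only when every required control has passing
    evidence or a documented, approved waiver."""
    passed = {e.control for e in evidence if e.passed}
    by_control = {w.control: w for w in waivers}
    missing, used = [], []
    for control in sorted(REQUIRED_CONTROLS):
        if control in passed:
            continue
        if control in by_control:
            used.append(by_control[control])  # recorded and tracked, never silent
            continue
        missing.append(control)
    return PromotionDecision(allowed=not missing, missing=missing, waived=used)

if __name__ == "__main__":
    evidence = [
        Evidence("unit-tests", True, "ci-pipeline"),
        Evidence("sast-scan", True, "scanner"),
        Evidence("pii-scan", True, "data-audit-job"),
    ]
    waivers = [
        Waiver("model-bias-eval", approver="ciso@example.com",
               reason="Eval suite in rollout; risk accepted until Q4"),
    ]
    decision = promotion_gate(evidence, waivers)
    print("Promote:", decision.allowed,
          "| waived:", [w.control for w in decision.waived],
          "| missing:", decision.missing)
```

The design point is that exceptions are first-class records with an approver and a reason, rather than silent skips; that is what turns a pipeline gate into an audit trail.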
An approach that combines consistent tools and processes with a reliable path to production creates a trusted environment that automatically generates information demonstrating adherence to regulations and compliance obligations. As some of my fellow CISOs are beginning to recognize, the most effective way to integrate security is to automate it through a methodical process.
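Once evidence lives in one place, demonstrating adherence becomes a mapping exercise. The sketch below, reusing the control names from the previous example, shows one hypothetical way to render that mapping as an auditor-facing report; the requirement-to-control mapping itself is an illustrative assumption that a compliance team would define, not an official interpretation of any regulation.

```python
# Hypothetical mapping from regulatory requirements to the internal
# controls that evidence them; the clause labels are illustrative only.
REGULATORY_MAP = {
    "EU AI Act, Art. 15 (accuracy, robustness)": ["unit-tests", "model-bias-eval"],
    "ISO 42001 (AI data governance)": ["pii-scan"],
    "Internal AppSec policy": ["sast-scan"],
}

def compliance_report(passed_controls):
    """Render a plain-text view of which requirements are currently
    backed by passing controls in the evidence store."""
    lines = []
    for requirement, controls in REGULATORY_MAP.items():
        status = "SATISFIED" if all(c in passed_controls for c in controls) else "GAP"
        lines.append(f"[{status}] {requirement} <- evidenced by: {', '.join(controls)}")
    return "\n".join(lines)

# Example: bias-evaluation evidence is missing, so Art. 15 shows a gap.
print(compliance_report({"unit-tests", "sast-scan", "pii-scan"}))
```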
AI, ML, LLMs, and GenAI are here to stay. With the upcoming challenge of agentic AI, we need to build the foundations of a secure approach to managing risk. This includes addressing traditional concerns like vulnerabilities, personally identifiable information (PII), and business risks, while also developing the flexibility to adjust to new demands as the world confronts emerging threats generated by this new intelligence landscape.
The JFrog Software Supply Chain Platform includes end-to-end AI/ML model lifecycle management, which provides a trusted environment for AI/ML developers to build and fine-tune models. And with the AI Catalog, which we just announced at swampUP 2025, as your one source of truth for the entire AI ecosystem, you can rapidly adopt AI without compromising on governance, security, or compliance. You can also use Evidence Collection to capture proof of every step taken to advance AI models into production.
Need help getting started? Take a tour, or book a demo with an MLSecOps expert at JFrog!