When we embarked on this journey, we set out with specific design objectives to drive adoption. Rather than force users to change their ways of working, we had to meet them where they were. That meant helping users find and connect to all their data, enabling self-service, supporting mobility and portability, establishing an ecosystem to drive collaboration and innovation, and using large language models (LLMs) to accelerate model training.
Connect to data wherever it sits
Good AI starts with data, and the F5 AI Data Fabric helps generate insights from the massive amounts of data collected from products across our portfolio. Like most enterprises that run apps at scale, we acknowledge the reality that data resides in many different stores, not just a single data lake. We accommodate apps and data sources deployed on-prem and across multiple clouds, both public and private; the AI Data Fabric can connect and attach compute to data wherever it sits. A common data catalog helps users navigate the many data sources, and it lets our team align with F5's data governance strategy to ensure proper controls and auditing over data assets. This approach allows us to generate insights from, manage, and govern data from different applications and products, across multiple data lakes and data sources, without changing how our users work.
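To make the pattern concrete, here is a minimal sketch of a catalog-plus-attached-compute flow. Everything in it is hypothetical illustration: the DataCatalog class, the attach_compute function, and the dataset locations are stand-ins, not the actual F5 AI Data Fabric API.

```python
# Illustrative sketch only; class and function names are hypothetical,
# not the actual F5 AI Data Fabric SDK.
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    location: str   # e.g., "s3://...", "abfss://...", "nfs://on-prem/..."
    owner: str

class DataCatalog:
    """A common catalog that indexes datasets across lakes and clouds."""
    def __init__(self):
        self._entries: dict[str, Dataset] = {}

    def register(self, ds: Dataset) -> None:
        self._entries[ds.name] = ds

    def find(self, name: str) -> Dataset:
        return self._entries[name]

def attach_compute(ds: Dataset, cpus: int = 4, gpus: int = 0):
    """Spin up compute next to the data instead of copying the data out."""
    print(f"Provisioning {cpus} CPUs / {gpus} GPUs near {ds.location}")
    # ... return a session handle the data scientist can run jobs against

catalog = DataCatalog()
catalog.register(Dataset("telemetry-2024", "s3://telemetry-lake/2024/", "platform-team"))
session = attach_compute(catalog.find("telemetry-2024"), gpus=1)
```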
Enable self-service

Giving users the ability to attach compute to data is only part of the story. To drive adoption, we had to consider the entire data science user journey and where we could eliminate friction.
Packaging and deploying AI apps is one great example. We do a lot of the hard work so our users don't have to. The AI Data Fabric can manage Python dependencies, package AI apps in a container with an HTTP server, configure that endpoint in our API gateway, and deploy the AI apps in Kubernetes with the right GPU, CPU, and memory requirements wherever they are needed. Imagine training, packaging, and deploying a model, all in one automated workflow through an easy-to-use SDK. With this system, a data scientist can deploy a new version of their AI application in 10 to 15 minutes. Reducing friction helps data scientists do what they do best: data science.
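As a rough illustration of that package-and-deploy workflow, here is a self-contained sketch; the AIApp class and its methods are hypothetical stand-ins for an SDK surface, not F5's actual one.

```python
# Illustrative sketch; the SDK surface shown here is hypothetical,
# not the actual F5 AI Data Fabric SDK.
from dataclasses import dataclass, field

@dataclass
class AIApp:
    name: str
    entrypoint: str                       # HTTP handler exposed by the app
    requirements: list = field(default_factory=list)

    def package(self) -> str:
        """Resolve Python deps and build a container with an HTTP server."""
        print(f"Building image for {self.name} with {self.requirements}")
        return f"registry.internal/{self.name}:latest"

    def deploy(self, image: str, gpus: int, cpus: int, memory: str) -> str:
        """Register the endpoint in the API gateway and deploy to Kubernetes."""
        print(f"Deploying {image} ({gpus} GPU, {cpus} CPU, {memory})")
        return f"https://gateway.internal/apps/{self.name}"

# Train, package, and deploy in one automated workflow.
app = AIApp("anomaly-detector", "serve:app", ["scikit-learn", "pandas"])
url = app.deploy(app.package(), gpus=1, cpus=4, memory="16Gi")
print(f"Live at {url}")
```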
Build for AI mobility and portability
This is another example of meeting our customers where they are. All built containers are stored in the AI Data Fabric's container registry, freeing users to deploy AI apps wherever they need to run, even in air-gapped environments.
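As a rough sketch of what that portability looks like in practice, here is one common way to move a built image across an air gap using standard Docker CLI commands; the registry hostnames and file paths are hypothetical.

```python
# Illustrative mirroring flow for an air-gapped deployment; registry
# hostnames and paths are hypothetical. Uses the standard Docker CLI
# via subprocess, one common way to move images offline.
import subprocess

IMAGE = "registry.internal/anomaly-detector:latest"   # built by the fabric
ARCHIVE = "/media/transfer/anomaly-detector.tar"      # offline transfer medium

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

# Connected side: pull from the fabric's registry and save to a tarball.
run("docker", "pull", IMAGE)
run("docker", "save", "-o", ARCHIVE, IMAGE)

# Air-gapped side: load the tarball and push to the local registry.
run("docker", "load", "-i", ARCHIVE)
run("docker", "tag", IMAGE, "registry.airgap.local/anomaly-detector:latest")
run("docker", "push", "registry.airgap.local/anomaly-detector:latest")
```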
Establish an ecosystem to drive collaboration and innovation

Accelerating AI adoption means rapid collaboration. It means leveraging and building on the work of those who have come before you. Inside the AI Data Fabric is an "AI ecosystem": tools and pre-built modules for performing common and complex AI functions. When users contribute modules and models back to the ecosystem, that's a real accelerant to innovation. Modules range from pre-built anomaly detection and classification models to apps for performing retrieval-augmented generation (RAG).
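To illustrate the kind of reuse this enables, here is a minimal sketch of a contribute-and-reuse pattern. The registry, decorator, and module names are hypothetical illustrations, not the fabric's actual ecosystem interface.

```python
# Illustrative only; the ecosystem registry and module names are hypothetical.
from typing import Callable

# A tiny in-process stand-in for the ecosystem's module registry.
_ECOSYSTEM: dict[str, Callable] = {}

def contribute(name: str):
    """Decorator: publish a reusable module back to the ecosystem."""
    def register(fn: Callable) -> Callable:
        _ECOSYSTEM[name] = fn
        return fn
    return register

@contribute("anomaly/zscore")
def zscore_anomalies(values: list[float], threshold: float) -> list[int]:
    """Pre-built anomaly detector: flag indexes beyond `threshold` std devs."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [i for i, v in enumerate(values) if std and abs(v - mean) / std > threshold]

# Another team builds on prior work instead of starting from scratch.
detector = _ECOSYSTEM["anomaly/zscore"]
print(detector([1.0, 1.1, 0.9, 1.0, 9.5], threshold=1.5))  # -> [4]
```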
Use LLMs to accelerate model training
To further accelerate adoption, we took the principles of agentic AI and applied them to the F5 AI Data Fabric. We ended up with a system that can train and deploy models to reason about data, use AI to reason about the resulting insights, then identify and complete a task. In short, we're using large language models to generate data that in turn helps us train smaller, task-specific models. A great example is how we're labeling training data. This is a huge burden on data scientists that we can alleviate: the AI Data Fabric connects to training data, extracts meaning from that data, uses AI to reason about that extracted meaning to perform a labeling task, and then pushes the resulting labeled data elsewhere.
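A minimal sketch of that pattern follows, with llm_label standing in for a call to a large language model; the function, the keyword heuristic inside it, and the example records are all hypothetical illustrations, not the fabric's actual pipeline.

```python
# Illustrative LLM-assisted labeling loop; llm_label is a hypothetical
# stand-in for a real LLM call, and the records are made-up examples.
def llm_label(text: str) -> str:
    """Ask a large model to reason about the text and emit a label.
    In practice this would call an LLM endpoint; stubbed here with a
    keyword heuristic so the sketch runs standalone."""
    return "security" if "denied" in text or "blocked" in text else "benign"

unlabeled = [
    "request blocked by WAF rule 942100",
    "health check returned 200 OK",
    "login denied after 5 failed attempts",
]

# The LLM generates labels a data scientist would otherwise write by hand...
labeled = [(text, llm_label(text)) for text in unlabeled]

# ...and the labeled set becomes training data for a smaller,
# task-specific model (any lightweight classifier would do here).
for text, label in labeled:
    print(f"{label:>8}: {text}")
```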