02/11/2026 | Press release | Distributed by Public on 02/11/2026 03:27
As Financial Services firms accelerate AI adoption, a new class of cybersecurity risk is emerging - fundamentally different from traditional IT threats.
The opinions expressed here are those of the authors. They do not necessarily reflect the views or positions of UK Finance or its members.
These AI systems are dynamic, data-intensive, and unpredictable, introducing vulnerabilities spanning the entire lifecycle of development and deployment.
From customer service chatbots to fraud detection and investment models, AI is transforming FS operations. Yet security implications - model manipulation, data leakage, biased outputs, and adversarial attacks - are often underestimated. As these risks evolve rapidly, staying ahead requires cybersecurity practices built for the AI era.
Drawing on insights from the 2025 Wavestone AI Cyber Benchmarkand over 20 major AI security engagements, five critical actions emerge for FS leaders to build secure, trustworthy AI.
1. Governance: from principles to execution
Governance is the foundation of secure AI. While 87 per cent of organisations have principles for 'trustworthy AI', just 7 per cent feel confident in internal expertise - leaving firms vulnerable.
Trustworthy AI spans cybersecurity, ethics, compliance, and brand integrity. Leading firms centralise AI oversight via hubs or Centres of Excellence (CoEs) uniting legal, risk, compliance, and technology teams to align AI with business goals and risk appetite.
However, C-suites want rapid benefits. FS clients, often "first movers", need quick, secure "fail-fast" mechanisms. Alongside CoEs, many establish decentralised "experimentation hubs" under pre-agreed governance, enabling rapid, secure testing and swift adoption of successful proofs of concept to keep pace with market expectations.
2. Identify and classify AI risks early
Effective AI risk management starts early. Today, 71 per cent of firms embed AI-specific risk assessments into project intake. A structured triage - asking if AI is involved, what data is used, whether models are proprietary or third-party, plus agent behaviour scope - helps assign oversight from day one. This aligns with the EU AI Act's risk-based model and avoids costly late-stage compliance issues.
Beyond compliance, FS clients value consolidating siloed risk assessments (e.g. Data Privacy, Legal, ESG,) into a single process. This reduces duplication, captures overlapping risks, and brings stakeholders together to foster learning and discussion onevolving AI risk.
3. Adapt cyber controls to the AI landscape
While 70 per cent of AI security controls stem from traditional cybersecurity, AI introduces new variations of attack paths, through APIs, model training, and third-party integration.
Leading firms map AI architectures to identify vulnerabilities across the full stack, from inference to user inputs, and employ AI red teaming: lifecycle testing for hallucinations, prompt injection, and robustness.
However, our FS clients aim to balance robust security testing with operational agility. With options like Meta's PurpleLlama and Microsoft PyRIT, it's vital to first understand the native security controls in AI platforms (e.g. AWS Bedrock) and extend enterprise security measures to AI, avoiding the need to "reinvent the wheel".
4. Build AI-aware monitoring and detection
Despite increasing AI use, monitoring maturity lags. While 72 per cent of firms collect AI logs, only 13 per cent integrate them into Security Operations Centres (SOCs). Without observability, threats go undetected.
FS firms must build AI observability into platforms and test models for bias, harmful outputs, and drift. As AI use shifts from consumption to orchestration and creation, detection must scale accordingly.
5. Prepare for AI-specific incidents
Just 9 per cent of organisations have AI-specific incident response plans, despite growing risk. Firms must extend playbooks to cover AI scenarios, including adversarial attacks, model retraining, and regulatory engagement.
Building forensic capabilities and joining cross-industry AI-CSIRTs will help FS firms respond quickly and build resilience.
Conclusion:
AI security is no longer just a technical concern; it's a top FS strategic priority. CIOs and CISOs must lead cross-functional teams to embed trust-by-design across AI systems. Acting now positions firms to reduce risk, meet regulatory expectations, and build customer and market trust in the age of intelligent finance.