UK Finance Ltd.

05/06/2026 | Press release | Distributed by Public on 05/06/2026 02:18

Blog: Managing AI risk in a divergent regulatory landscape - financial crime challenges across jurisdictions

The likely extension of the EU AI Act's timelines gives firms a valuable window to reassess key risks and strengthen how AI is governed across Financial Crime operations and systems. This piece explores the operational impacts and the actions firms should take now.

The opinions expressed here are those of the authors. They do not necessarily reflect the views or positions of UK Finance or its members.

Artificial Intelligence ('AI') is reshaping how institutions manage Financial Crime ('FC') risk. At the same time, evolving regulation is forcing firms to rethink how their programmes, governance and operating models must adapt.

This shift comes amid rising cost pressures, making the balance between innovation, compliance and operational efficiency more challenging than ever.

Why the EU Artificial Intelligence Act matters now

The EU Artificial Intelligence Act ('AIA') entered into force in 2024, with phased requirements for high-risk AI systems to be compliant by August 2026. Under the Council of the EU's "Omnibus VIII" package, these dates are expected to be extended to December 2027, giving firms additional time to assess the associated risks, with further guidance expected from the EU Commission.

For UK firms, the implications are broad: any institution operating in the EU, placing AI on the EU market, or whose AI outputs or services affect individuals within the EU must now consider how these obligations shape the design and governance of its FC platforms across the region.

Firms will therefore be required to demonstrate that model design, data usage, and model governance align with the Act's regulatory expectations.

Operational impacts across financial crime

Working with firms globally, we are already seeing UK-EU divergence driving operational impacts. Firms are reassessing cross-border operating models, governance structures and customer-treatment risks as AI becomes embedded within FC operations.

Many financial institutions deploy multiple configurations of the same vendor platforms across jurisdictions. Because Transaction Monitoring ("TM"), Customer Due Diligence ("CDD"), Fraud, and broader FC systems process sensitive behavioural and transactional data, any embedded AI components must be handled with caution.

While the AI Act does not include AML systems in its list of high-risk use cases, AML models are often complex and subjective, requiring careful review to avoid discriminatory outcomes or unwarranted de-banking of customers. Frequent interaction with law enforcement adds further complexity to how AI models within AML systems are managed and how their impact on customers is assessed.

As AI-enabled monitoring becomes embedded, especially within AML, Sanctions and Name Screening frameworks, firms must ensure their shared platforms can produce consistent, explainable and jurisdiction-compliant outputs.

This is critical for cross-border investigations, where analysts rely on high-quality data and accurate behavioural insights to make defensible decisions.

Stepping up AI risk management and oversight

AI is leveraged across FC functions supporting predictive alert generation, prioritisation, disposition (for screening), customer segmentation and automation of investigation workflows, including drafting SAR narratives and investigation summaries.

As higher levels of automation emerge, powered by Agentic AI, the AIA will require firms to apply enhanced scrutiny from FC SMEs, data owners and model-risk managers.

Compliance with the AIA does not require full global alignment, but it does require that any processes, models or investigative touchpoints involving the EU follow a consistent approach to data integrity, explainability and AI governance.

A fragmented approach across an organisation increases the likelihood of inconsistent outcomes and investigations relying on inputs that do not meet AIA standards.

Practical actions firms can take now

  • Embed Core AIA Principles: Apply key AIA principles of governance, transparency and data quality on a global scale to reduce inconsistent outcomes and strengthen foundations for responsible AI.
  • Align EU-Exposed Functions: Ensure that AI models, datasets or investigations touching the EU meet high-risk AIA requirements, while building awareness of the expectations influencing EU-related decision making.
  • Foster Global Collaboration: Bring together FC operations, local product owners, data teams and model-risk stakeholders to share insights, align on common guardrails and avoid duplicated efforts.
  • Strengthen Human Oversight and Ownership: Reinforce the ethical responsibility humans hold in AI-enabled FC, ensuring data stewardship, bias awareness and SME challenge remain central to model oversight and decision-making.

Regulatory divergence often fragments financial crime controls and leads to inconsistent cross-border risk management. Firms should now define what a Trusted AI framework looks like for them, and build an ongoing, principle-led programme that embeds responsible, human-centred oversight across financial crime controls.

UK Finance Ltd. published this content on May 06, 2026, and is solely responsible for the information contained herein. Distributed via Public Technologies (PUBT), unedited and unaltered, on May 06, 2026 at 08:19 UTC. If you believe the information included in the content is inaccurate or outdated and requires editing or removal, please contact us at [email protected]