The FCA is not slowing your AI down. Your own governance design is doing that for them.
The opinions expressed here are those of the authors. They do not necessarily reflect the views or positions of UK Finance or its members.
Ask most technologists what slows AI deployment in regulated financial services, and governance will feature prominently. Model risk committees. Review cycles that outlast the relevance of what is being reviewed. The frustration is understandable. The diagnosis is wrong.
Governance is not the problem. Governance architecture is. And in most UK banks, that architecture was designed for a different era of AI entirely. The cost of that mismatch is already visible: in slower deployment cycles, retrospective scrambles for evidence, and a growing gap between what AI systems are doing and what risk functions can see.
The UK regulatory moment has changed the stakes
The FCA and PRA have been explicit: continuous oversight, demonstrable explainability, real control over AI systems influencing customer outcomes. Consumer Duty makes fair outcomes a supervisory expectation, not an aspiration.
The EU AI Act and DORA add parallel obligations for European operations. Credit scoring and life or health insurance risk assessment are classified as high-risk AI applications. For UK banks with cross-border exposure, these obligations are not solely a European concern.
Institutions that treat these frameworks as constraints will spend the next decade retrofitting controls. Those that treat them as a design specification will build once and deploy repeatedly.
Why legacy oversight fails
Traditional model governance was built for static models with defined inputs and periodic review cycles. Quarterly sign-offs and manual documentation do not work for AI that learns continuously and influences customer outcomes at scale. When banks apply legacy governance to modern AI, the result is a framework that simultaneously slows delivery and reduces actual control. Development teams spend time on administrative compliance that adds process without adding insight. Risk functions lose visibility because the tools available to them were not designed for continuous monitoring of dynamic systems. This is governance failure through design, not intent.
What governance-by-design looks like in practice
UK bank: customer complaints analysis
A UK bank deployed language models to categorise customer complaints in real time, with automated flagging of regulatory risk signals. Model outputs were continuously monitored against defined thresholds. When the institution sought approval to expand the system, the governance evidence was already being generated automatically. Approval came faster than any previous deployment. The architecture had made compliance easier, not harder.
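As an illustration of that pattern, the sketch below shows one way continuous threshold monitoring and automatic evidence capture might sit in the same code path as the model outputs. It is a minimal sketch in Python: the class, field names and threshold values are hypothetical assumptions for illustration, not the bank's implementation.

from dataclasses import dataclass, field
from datetime import datetime, timezone
import json


@dataclass
class GovernanceMonitor:
    """Checks each batch of model outputs against defined thresholds and
    appends a timestamped evidence record, so approval evidence accumulates
    as a by-product of normal operation rather than being assembled later."""

    # Illustrative thresholds; in practice these would be agreed with the risk function.
    max_low_confidence_rate: float = 0.10   # share of complaints classified below the confidence floor
    max_risk_flag_rate: float = 0.05        # share of complaints flagged as regulatory risk signals
    evidence_log: list = field(default_factory=list)

    def review_batch(self, outputs: list[dict]) -> dict:
        """outputs: one dict per complaint, e.g. {"category": "fees", "confidence": 0.93, "risk_flag": False}."""
        total = len(outputs)
        low_conf_rate = sum(1 for o in outputs if o["confidence"] < 0.7) / total
        risk_flag_rate = sum(1 for o in outputs if o["risk_flag"]) / total

        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "batch_size": total,
            "low_confidence_rate": round(low_conf_rate, 4),
            "risk_flag_rate": round(risk_flag_rate, 4),
            "within_thresholds": (low_conf_rate <= self.max_low_confidence_rate
                                  and risk_flag_rate <= self.max_risk_flag_rate),
        }
        # Evidence is written at decision time, not reconstructed retrospectively.
        self.evidence_log.append(record)
        return record

    def export_evidence(self) -> str:
        """Serialise the accumulated records for a review or approval pack."""
        return json.dumps(self.evidence_log, indent=2)


if __name__ == "__main__":
    monitor = GovernanceMonitor()
    batch = [
        {"category": "fees", "confidence": 0.93, "risk_flag": False},
        {"category": "vulnerability", "confidence": 0.88, "risk_flag": True},
        {"category": "fees", "confidence": 0.61, "risk_flag": False},
    ]
    print(monitor.review_batch(batch))
    print(monitor.export_evidence())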
Governance-by-design does not add friction to deployment. It removes the bottlenecks: the scramble for documentation, the retrospective audit, the evidence gap that pauses an approval process.
The agentic AI question is already arriving
Boards that have resolved governance for predictive models face a more complex version of the same challenge with agentic AI. Systems capable of autonomous multi-step decision-making across fraud investigation, credit assessment, and compliance monitoring require oversight frameworks that were not written with them in mind. The stakes around explainability, human-in-the-loop control, and audit completeness are considerably higher.
Institutions investing in governance infrastructure now are building foundations that will extend to these systems. Those waiting for agentic AI to mature before addressing governance will design controls reactively, under supervisory scrutiny, at speed. That is a significantly worse position.
At a recent Gartner event in London, analysts forecast that by 2027 a quarter of ungoverned AI-assisted decisions will cause financial or reputational loss, partly through AI sycophancy: models confirming what decision-makers want to hear rather than what the data shows. That risk category does not yet appear in most banking governance frameworks.
The FCA and PRA reward institutions that can evidence control continuously rather than retrospectively. Those institutions are in a materially stronger position, commercially and in the supervisory relationship that governs how much latitude they are given to innovate. The banks building governance into architecture now are not just compliant. They have a structural advantage in every subsequent deployment. Governance is not the price you pay for AI. Designed well, it is the infrastructure that lets AI investment deliver a return.