01/22/2026 | Press release
Our frameworks were built for a world where systems updated occasionally, models behaved predictably and responsibility was clearly located within the firm. AI challenges all three of those assumptions.
Models now update continuously. Harms can scale in hours, not months. And responsibility sits across developers, data providers, model hosts and regulated firms.
Accountability under the Senior Managers and Certification Regime (SM&CR) still matters. But what does 'reasonable steps' look like when the model you rely on updates weekly, incorporates components you don't directly control, or behaves differently as soon as new data arrives?
What will the Critical Third-Party regime look like as AI firms continue to shape the landscape of financial services? And as firms continue to develop AI assurance platforms to monitor, audit and evaluate AI systems, what should the role of the FCA be?
Our approach isn't changing. We remain outcomes-based, technology-neutral and proportionate.
But how those principles apply in a world of fast-evolving systems is something we must explore now, not later.
We want to examine how AI will change the way we apply our rules and give you the clarity you need. Designing for the unknown means building a regulatory model that can evolve with the technology without compromising clarity or trust.
And we won't do this alone. The FCA doesn't regulate AI as a whole, nor should we. We will work with the Information Commissioner's Office (ICO), the Competition and Markets Authority (CMA), and international counterparts to ensure a coherent environment for firms innovating in the UK.