03/23/2026 | Press release | Distributed by Public on 03/23/2026 11:49
The content of this report reflects views and insights from individual FIFAI speakers and participants. This report should not be interpreted as guidance from and does not necessarily reflect the views of the Bank of Canada, the Department of Finance Canada, the Financial Consumer Agency of Canada, the Financial Transactions and Reports Analysis Centre of Canada, the Office of the Superintendent of Financial Institutions or any other regulatory authorities, currently or in the future.
Foreword
This phase of FIFAI is about deepening our understanding of how AI technologies are reshaping the industry now and going forward. It is as much about capturing the opportunities as it is about effective AI risk management strategies, both of which are of increasing importance for the sector.
It's been three years since the Office of the Superintendent of Financial Institutions (OSFI) and the Global Risk Institute (GRI) launched the Financial Industry Forum on Artificial Intelligence (FIFAI), bringing together experts from the financial industry, academia, policymakers and regulators.
Today, the fast pace of technological change and AI adoption highlights the need for renewed collaboration between the public and private sectors, leading to FIFAI Phase II with the overall goal to:
This FIFAI II final report outlines the AI risks and opportunities raised at forum workshops and introduces the AGILE framework for financial-industry stakeholders to navigate the evolving impacts of AI.
The sponsors wish to express their deep gratitude to the more than 170 participants who shared perspectives throughout the FIFAI II discussions. Participants included representatives from banks, insurers, asset managers, non-financial corporations, consumer advocates, universities, research institutes, government and regulatory agencies. All were committed to balancing the risk and opportunity inherent in the continued advancement of AI within financial services and in the wider environment affecting the industry.
Global Risk Institute
Office of the Superintendent of Financial Institutions
Department of Finance Canada
Financial Transactions and Reports Analysis Centre of Canada
Financial Consumer Agency of Canada
Bank of Canada
Background
The first phase of FIFAI focused on the internal risks associated with the development, deployment, and use of AI within financial institutions. The FIFAI I report, A Canadian Perspective on Responsible AI,Footnote 1 established the EDGE (Explainability, Data, Governance, and Ethics) principles as pillars for responsible AI adoption across the financial industry and encouraged harmonized regulation that reflects Canadian values while enabling innovation. Adherence to the EDGE principles involves appropriate explainability and disclosure; consumer-centric approaches that uphold ethical values and protect privacy; and strong, risk-based governance supported by sound data practices and multidisciplinary collaboration.Footnote 2
It was the view of the participants that Canadian financial institutions have generally aligned with EDGE principles. Evident Insights, an independent firm that benchmarks AI use in bankingFootnote 3 and insuranceFootnote 4, ranked Canada's five largest banks and two Canadian insurers among the top 15 globally for "transparency of responsible AI activities" in 2025.
Since FIFAI I, AI-related risks have expanded beyond EDGE's scope due to rapid adoption of AI within the financial industry, significant technological advances, and growing impacts of AI on the external risk environment. FIFAI II therefore convened extensive discussions on escalating cyber threats, third-party risk, financial well-being and consumer protection impacts, financial crime, and financial stability implications.
FIFAI II reflects a shared commitment among industry, government, and regulators to achieve progress through collaboration. Between May and November 2025, four workshops, sponsored by GRI together with various combinations of OSFI, Finance Canada, the Financial Transactions and Reports Analysis Centre of Canada (FINTRAC), the Financial Consumer Agency of Canada (FCAC), and the Bank of Canada, examined AI risks, mitigants, and opportunities. Interim reports covered:
This final report, informed by the respective workshops and participant views, outlines critical risks and affirms that continued responsible AI adoption is necessary for both competitive resilience and effective management of inherent AI risks as well as heightened defence against sophisticated external threats. EDGE remains the foundation for responsible adoption while "agility" emerged as a central theme to guide a sector that must move dynamically to capture AI's benefits while responding to fast-evolving risks.
Executive summary
AI is a transformative force, both awe-inspiring and potentially perilous… Its true impact will hinge on disciplined, responsible innovation and robust collaboration across borders and sectors.
AI is reshaping financial services globally, redefining operating models and competitive dynamics. Automation and human augmentation can unlock efficiency, improve decisions, and strengthen competitiveness, but only when innovation aligns with principle-based governance to maintain trust and resilience. Canada's financial sector, with strong data foundations and disciplined risk culture, is uniquely positioned to lead responsibly in AI adoption while unlocking significant productivity and growth. At the same time, AI is reshaping the risk landscape, with potential systemic implications. Most pressingly, AI is enabling fraudsters and cybercriminals to operate with unprecedented speed, scale and sophistication. Institutions increasingly need AI not only to compete but to strengthen their defences and risk management.
FIFAI II discussions underscored a broader set of critical areas that demand attention. Institutions must navigate strategic risk in the face of competitive pressures, execution risks and governance challenges from rapid AI adoption. AI is also intensifying threats to organizational security, from automated spear phishing to synthetic IDs being used to infiltrate organizations via the hiring process. Consumer-facing applications bring their own risks, as gaps in transparency, explainability and accountability may expose consumers to bias, fraud and other harms. At the same time, talent shortages and uneven upskilling may slow responsible innovation. Growing dependence on a small number of AI providers and opaque AI supply-chain dependencies heighten systemic fragility. More broadly, AI-driven operational disruptions, correlated trading behaviours and potential credit risk impacts introduce new challenges for financial stability.
With that in mind, FIFAI II introduces the AGILE framework (Awareness, Guardrails, Innovation, Learning, Ecosystem Resiliency) and suggests implementation priorities for navigating AI risks and seizing AI opportunities:
Canada's financial sector stands at a pivotal moment. Strategic choices made now will shape competitiveness, resilience, and trust for years to come. The benefits of AI are within reach, but only through deliberate action that balances innovation with robust risk management and consumer trust.
By embracing an AGILE framework, the industry can unlock growth while safeguarding stability and consumer well-being. This is the Canadian financial sector's opportunity to lead in AI innovation and set a national standard for trust, security, and productivity.
The Evolving AI Risk environment
AI is increasingly giving rise to critical risks, both from growing internal adoption of AI within institutions and from the broader effects of AI on the external risk environment. The urgency of managing these risks reinforces the recognition that continued responsible AI adoption is necessary, both for competitive resiliency and to defend against more sophisticated AI-enabled threats. While the risks outlined below represent a snapshot as of the time of the workshops, industry stakeholders expect the risk landscape to evolve rapidly and in unanticipated ways.
1. Strategic risks
The biggest risk is not doing enough.
Pace of adoption: Moving too quickly to adopt AI without proper risk management can lead to potential operational and consumer harms. Conversely, moving too slowly may result in missed opportunities and competitive disadvantages (for example, potential disruption from technology firms and other new entrants).
Fragmented or near-sighted approach: A rushed, incomplete, or inflexible AI strategy can result in critical lapses across data, technology, infrastructure, business models, and consumer outcomes (including consumer protection and financial well-being). Implementing AI systems in isolation risks creating longer-term issues and increases execution risk. Often this is the result of hype-driven deployments, where AI is adopted without proper long-term strategic planning or success metrics. Unrealistic expectations from senior leadership can amplify this risk, as can a focus on scoring a "quick win" rather than incorporating AI for the longer term.
Resource constraints: Effective AI adoption requires significant capital, expertise, and infrastructure. Underinvestment in AI limits the sector's ability to detect cyber threats, fraud, financial crime, and emerging systemic risks. It also reduces opportunities to use AI to improve productivity in core operations and free capacity for higher-value risk management, supervision, and innovation. Competition for investment and R&D budgets is intense, with smaller institutions and public sector entities particularly challenged in this area.
Data strategy and quality: AI performance depends on an effective data strategy that prioritizes data governance and quality. In a 2024 AI report published by OSFI and FCAC, federally regulated financial institutions identified data-related risks as a "top concern" for AI deployment.Footnote 5 Data that is inconsistent, incomplete, or fragmented across platforms (including third-party platforms and offshore storage) can lead to increased data sovereignty concerns, privacy risks for consumers, and challenges for regulatory oversight. Complex data workflows without proper data lineage and governance further compound these issues. Sub-standard data quality undermines efficiency gains and increases the likelihood of harmful outcomes (e.g., consumer harm, diminished trust, and failures in AML reporting).
AI regulatory uncertainty: Financial sector regulation in Canada spans 14 jurisdictions, creating a matrix of guidelines, rules, and legislation that must be adhered to, which can add operational overhead. Furthermore, many Canadian financial institutions also operate in foreign jurisdictions. While AI-specific guidance from Canadian financial regulators has been limited, financial institutions may be hesitant to invest in and deploy AI if there is perceived uncertainty about compliance obligations or the potential for future regulatory constraints.
2. Security and cybersecurity threats
AI is enabling smarter fraud detection, faster investigations, and more adaptive compliance. However, it's also introducing new risks that are evolving just as quickly as the technology itself.
Social engineering and synthetic identity fraud: As Michael Barr of the Federal Reserve Board of Governors noted in April 2025: "Deepfake attacks have seen a twentyfold increase over the last three years."Footnote 6 AI can enable effective deepfakes with minimal information, often obtained through social media. The rise of these tactics has increased the importance of secure digital identification and authentication. Canada, like most countries, lacks a universally adopted secure digital identity, potentially exposing identity as a key attack vector across onboarding, consumer channels, and remote work environments. In one notable case, AI-generated synthetic identities allowed foreign operatives to obtain remote employment at North American firms, gaining access to internal systems and data.Footnote 7
Voice spoofing and Fraud-as-a-Service (FaaS): A 2024 industry survey found that 91% of financial institutions globally are reconsidering voice-verification systems due to AI voice-cloning capabilities.Footnote 8 Call centres and IT helpdesks are also vulnerable as AI can convincingly impersonate employees or customers to request new devices, reset passwords, or obtain access credentials. AI has also accelerated the use of FaaS, which enables criminals to purchase turnkey, AI-powered tools that dramatically increase the scale, speed, and sophistication of financial fraud.
AI-assisted cyber-attacks: With AI, cyberattacks can be more easily automated, accelerated, and tailored by threat actors. The barriers to entry are lower and the capabilities are greater. Simultaneously, the overall attack surface has expanded as financial institutions increasingly implement outward-facing AI systems. Potential vulnerabilities extend to regulators and government, as they hold significant sensitive data. Organized groups can use AI to scale cyber and fraud operations and to facilitate money laundering and sanctions evasion. AI agents could automate multi-step attacks end-to-end, further lowering the barrier and enabling scale. In 2025, AI company Anthropic reported that it had disrupted an operation by a state-sponsored group to manipulate one of its models to autonomously attack various corporate and government targets.Footnote 9
Disinformation and misinformation: Disinformation and misinformation can spread quickly due to the ubiquity of social media and the increasingly polarized political environment. The use of AI can enable a malicious disinformation campaign to be deployed at scale, which could then be amplified by misinformation. For instance, deepfakes and automated bots can disseminate false or misleading claims about a bank's solvency, regulatory actions, or system stability, actions that can quickly undermine trust.
3. Consumer risks
[This work] is focused on ensuring that innovation in the financial marketplace is not only forward-thinking and efficient, but also grounded in fairness, transparency, and a strong commitment to protecting consumers.
Consumer confidence and well-being: Applications of AI that impact consumers, such as product recommendations, credit adjudication, underwriting, and investment advice, are becoming more pervasive. As consumer-impacting AI applications increase and become more fully integrated into the internal operations of Canadian financial institutions, the consumer-trust pillars of transparency, explainability, and accountability become increasingly important. Many consumers may not know when AI is involved or may question how these systems reach decisions. This challenge is intensifying with generative and agentic AI, which can make end-to-end decisions harder to explain. Disclosure and consent are closely linked to explainability and transparency, while accountability frameworks and complaint handling need to keep pace with AI-enabled services to reduce the possibility of adverse consumer outcomes. These challenges intensify when consumers use AI-powered self-serve tools and products where there is less opportunity for human oversight.
Data bias and security: Unfair outcomes driven by incomplete or biased data hold the potential to erode consumer trust in AI systems. The use of alternative or inferred data to personalize products can create hidden proxies for identifiable information, exposing consumers to unintended profiling. These dynamics can amplify unwanted bias in technology-driven decision-making and systemically disadvantage consumers at scale. Increasingly complex data flows, often involving multiple third-party systems, heighten privacy and security vulnerabilities to which consumers are exposed. Data governance frameworks that lag behind AI adoption compound the risks.
Increased exposure to attempted fraud: AI is expanding the scale and sophistication of fraud and related criminal activities. Personalized frauds exploit consumer data and behavioural patterns, making them more persuasive and harder to detect. Consumers may find it increasingly difficult to distinguish legitimate communications from AI-generated frauds, putting them at greater risk and eroding overall trust in Canada's financial sector. The true extent of the problem is unknown, as the Canadian Anti-Fraud Centre (CAFC) estimates 90 to 95% of fraud goes unreported.Footnote 10 Without proactive measures to counter these threats, some consumers may feel that their financial institutions are not sufficiently protecting them.
Inequality of access: AI could widen the digital divide between those with access to digital technologies and those without. Even among those with access, gaps in AI literacy and confidence may limit the ability to benefit from AI-enabled services. Absent a thoughtful approach, AI could exacerbate existing inequalities faced by some groups, including low-income individuals and households, newcomers to Canada, Indigenous communities, the elderly, and Canadians living with disabilities.
4. Knowledge and talent gaps
As technology continues to evolve rapidly, it's important for us to welcome new graduates who are inherently digital savvy and bring fresh perspectives. We're also prioritizing ongoing learning and development for our employees, leaders, and Board members, so everyone has an understanding of the latest tools and how to use them to harness emerging opportunities.
Shortages of AI talent: As AI becomes more foundational to financial services, a scarcity of top AI talent represents a potentially significant threat to a company's ability to operate safely, execute on strategy, and compete. It also imperils regulators' capacity to oversee rapid industry change and the complex systemic risks amplified by AI. While Canadian universities educate thousands of AI specialists annually, supply remains insufficient, especially for those who also have financial industry knowledge. Canadian financial institutions may struggle to compete with the substantially higher compensation packages offered by foreign technology companies. Regulatory agencies are similarly limited in their ability to match private sector compensation or provide access to cutting-edge technology. AI expertise is concentrated in larger institutions; smaller firms may have little or none.
Shortfalls in AI learning: Moving slowly or failing to upskill workforces could impede the ability of institutions to thrive in an increasingly AI-dominated world. Meanwhile, limited AI knowledge and trust among consumers, especially vulnerable groups, could prevent them from benefitting from AI-enhanced services and expose them to a greater risk of being scammed or defrauded.
Potential AI misuse: A large language model (LLM) can generate entirely fabricated information that appears authoritative (so-called hallucinations). There have been incidents where unverified LLM-generated figures, statements, and references have led to material errors. A lack of awareness of AI hallucinations, alongside other AI knowledge gaps, represents a clear risk to both consumers and organizations.
"Learning velocity" mismatch: LLMs that seemed revolutionary in 2023 are now considered primitive. Individuals and organizations alike may struggle to keep up with the pace of change. By the time an institution develops comprehensive AI training, the technology might have already advanced. For instance, the pace of advancements by threat actors in their social engineering techniques is quickly outpacing internal training programs. Similarly, AI-enabled money laundering techniques often evolve faster than detection capabilities and associated training can be developed and deployed.
5. Third-party concentration and supply chain risks
As AI adoption accelerates, third-party concentration and supply chain dependencies are becoming core sources of systemic risk. Financial institutions must look beyond individual vendor resilience and understand where shared dependencies, limited visibility, and single points of failure could amplify disruption across the system.
AI supply chain and third-party concentration risk: Growing AI dependencies span data, models, software components (including open source), and compute/cloud infrastructure. AI third-party service providers often depend on additional parties through complex 'nth party' or multi-tiered supply chains. Disruptions or compromises at any layer can propagate across institutions. Furthermore, AI adoption is deepening financial institutions' dependence on a small set of third-party technology providers. The July 2024 CrowdStrike outage resulted in an estimated financial loss of $5.4 billion for the Fortune 500 (excluding Microsoft),Footnote 11 illustrating the systemic impact of single points of failure. Mid-sized and smaller institutions may be more exposed because they often rely proportionately more heavily on external vendors.
Lack of visibility and control: Risks arise from limited visibility and transparency into third-party controls and practices related to data, governance, security, and model risk. Financial institutions are accountable for the use of third-party services but may have little ability to ensure third parties comply with their expectations. A security failure by a third-party AI model can expose sensitive data, increasing vulnerability to adversarial attacks and resulting in the loss of intellectual property while eroding consumer trust. Given the size and influence of some third-party providers, even Canada's largest financial institutions may have limited leverage regarding contractual terms, operational transparency, or remediation timelines. Visibility into fourth- and fifth-party relationships is especially limited in the context of AI services, where financial institutions often lack clarity on how models were trained, what data was used, and which other entities are embedded in the supply chain. The introduction of open banking is expected to further expand the third- and 'nth party' ecosystem, increasing both the complexity and scale of risk that institutions must manage.
Sovereignty and oversight of critical providers: Many critical providers operate outside the financial regulatory perimeter, limiting visibility into the likelihood and potential impact of failures at a system level and complicating regulatory oversight. Global cloud architectures and limited domestic infrastructure mean that many AI services and associated data reside abroad, exposing institutions to foreign legal regimes and geopolitical risks.
6. Financial stability risks
[We must] understand the role of AI in the financial industry and mitigate the risks it represents to financial stability effectively. A better understanding can dispel unfounded fears and support policymakers in aligning oversight efforts with the most material AI-related vulnerabilities.
Operational shocks: The impact of internal system outages, reputational incidents, and data breaches can be magnified by the rapid growth of AI. As AI-enabled systems become embedded in essential processes, institutions face heightened exposure to risks such as model failures, data corruption, and system misbehaviour, including at critical infrastructure providers that support the sector (e.g., payments and telecommunications). Anti-money laundering (AML) risks may also be elevated as institutions often see only parts of criminal networks that span capital markets, casinos, payment service providers, and cross-border flows.
Market volatility: Many AI-powered trading algorithms and models are trained on similar data which may intensify market volatility, particularly on a short-term basis. If AI-based trading models move in concert, this could lead to procyclical shifts in financial markets during periods of stress. Unregulated market participants using AI tools can further undermine systemic resilience. Equity markets and exchange-traded derivatives are possible areas of vulnerability.
Labour and business disruption: The International Monetary Fund (2024) estimates that 60% of jobs in advanced economies will be affected by AI automation.Footnote 12 Citi's research (2024) predicts that 54% of finance jobs face potential AI-led displacement, the highest percentage among major industries.Footnote 13 While technological transformations have historically created new roles, the unprecedented speed of AI advancement suggests displacement could occur faster than workforce retraining, creating a critical transition period. AI-driven automation will also impact firms across manufacturing, retail, transportation, professional services, and other sectors that form the backbone of Canada's economy. This economy-wide disruption poses systemic risks to financial institutions, particularly in terms of rising credit risk from affected individuals and businesses. This could lead to a "K-shaped" economy where some flourish and others face shrinking opportunities, a scenario in which lenders could face increased default risk.
Gaps in threat information sharing: There are established channels through which threat information currently flows between financial institutions, other critical sectors and governmental agencies. However, various factors can impede the optimal flow of threat information; as a result, threat actors can conceivably strike multiple institutions and sectors before defences have time to adapt. Current information-sharing channels may also experience delays while information is anonymized, limiting their ability to support real-time responses to AI-enabled fraud, cyberattacks, or third-party incidents. Competitive dynamics and privacy mandates further limit information sharing.
Vulnerabilities in crisis response mechanisms: Canada's existing incident-response and crisis-coordination arrangements provide a solid foundation, but blind spots may exist in the face of unprecedented AI-enabled threats. Information, access, and systems may be fragmented across agencies and across the mandates of the bodies tasked with incident response. At an institutional level, business continuity processes may not incorporate specific cases of failure in AI systems or models. Novel attack strategies enabled by AI could involve multi-pronged attacks at a speed and scale that current crisis response mechanisms were not designed to handle. Recent technology and cloud outages, like the CrowdStrike or AWSFootnote 14 events, have also illustrated the gap that can exist between plans on paper and practical readiness, particularly in novel or unprecedented scenarios.
Agentic AI and emerging financial stability risks: Agentic AI systems can act autonomously, make multi-step decisions, and trigger financial actions at machine speed across markets and institutions.Footnote 15 As these systems gain traction, their behaviour may become increasingly difficult to predict, monitor, or constrain. Agents that make investments on behalf of retail clients, for example, may respond simultaneously to similar data sources or market cues, amplifying short-term volatility and intensifying liquidity pressures during stress events. Corporate treasury agents could rapidly reallocate deposits in reaction to news, social-media sentiment, or shifting rate environments. During times of stress this could potentially accelerate funding outflows and destabilize bank balance sheets. As agents are deployed in more financial use cases, the risks may evolve further in unexpected ways. Other emerging technologies are also materially reshaping risk profiles. AI is accelerating progress toward fault-tolerant quantum systems, raising the prospect of breakthroughs that could overturn current cybersecurity assumptions.
Seizing the AI opportunity
AI presents a strategic opportunity to strengthen Canada's financial system. The Canadian financial sector accounts for 7-8% of the nation's GDPFootnote 16 and employs almost 850,000 Canadians.Footnote 17 It is a significant adopter of AI today. Globally, across a broad range of industry sectors, financial services report the third-highest use of AI at work (72%) and the second-highest level of organizational support (75%).Footnote 18 More broadly, it is projected that continued AI deployment could add $298 billion in cumulative GDP from 2025 to 2035 and generate an average of 41,500 new jobs annually.Footnote 19 Canadian financial institutions that successfully harness AI could capture much of this potential value creation.
The ability of AI to process vast datasets and detect patterns makes it a powerful risk management tool. Financial institutions can monitor market activity, identify anomalies, and anticipate stress scenarios before they escalate, enhancing resilience and reducing systemic vulnerabilities. Regulators can also leverage AI to enhance systemic monitoring, harmonize guidance and improve supervisory capacity. Beyond risk mitigation, AI can deliver significant productivity gains. Applications across compliance, fraud detection, and operational workflows can free skilled professionals for higher-value priorities. These efficiencies translate into cost savings that can be reinvested in innovation, infrastructure, and talent development, which are essential for fuelling long-term competitiveness.
AI also offers transformative opportunities for financial well-being, consumer protection and financial crime prevention. Advanced authentication, behavioural analytics, and anomaly detection help prevent identity theft, account takeovers, and deepfake-enabled fraud and strengthen trust in digital financial ecosystems. At the same time, AI is democratizing access to financial advice through personalized guidance at scale. Conversational platforms and recommendation engines can make sophisticated insights available to underserved populations, potentially promoting quality financial inclusion and financial literacy. In combatting financial crime, AI models can analyze vast transaction networks in real time, detecting suspicious patterns and enabling proactive intervention to reduce fraud, money laundering and other emerging threats.
If adopted responsibly and at scale, AI can reinforce Canada's global competitiveness and serve as a catalyst for a smarter, safer, and more inclusive financial system. Achieving this balance between innovation and risk management requires a shared, practical approach to AI adoption across the sector.
The AGILE Framework
The AGILE framework (Awareness, Guardrails, Innovation, Learning, and Ecosystem Resiliency) can help guide responsible AI adoption, innovation and resilience across Canada's financial sector. Developed from workshop insights, this framework empowers stakeholders to capture the benefits of AI while effectively managing the risks.
Awareness of emerging threats and systemic risks
Awareness is critical for the Canadian financial sector as AI increasingly reshapes the risk landscape and the sector expands its AI use. Stakeholders need to understand the ways in which AI can alter the risk environment and how further developments, such as agentic AI or AI-driven macroeconomic disruptions, can impact them. Organizational enhancements such as AI oversight, board engagement, and expanded monitoring and stress testing scenarios will help to manage the risks in this area.
Adapt risk identification and governance in response to technological change
Address AI risks at the senior management and board level
Prepare for new technologies
Monitor AI-driven labour market and business disruptions
Monitor AI-driven market volatility
Guardrails to ensure responsible AI adoption continues
Organizations should ensure that AI-enabled systems operate safely, predictably, and fairly across their lifecycle, particularly where failures or misuse could cause consumer harm or systemic risk. Effective AI guardrails embed fundamental best practices into day-to-day operations, keep control frameworks evergreen, and establish clear accountability for outcomes produced by AI. Guardrails include appropriate human oversight for high-impact decisions where feasible, vigilance about data quality, and rigorous standards for third parties.
Focus on the fundamentals
Implement robust and flexible control frameworks
Focus on inclusive consumer well-being practices
Remain accountable for AI outcomes
Enhance due diligence requirements for engaging third parties
Innovation through bold AI adoption
Responsible AI adoption can empower workforces, strengthen defence against evolving threats, enhance consumer financial well-being, and drive growth across both top and bottom lines. To realize these benefits, the industry must adopt an AI growth mindset, viewing AI not as a replacement for human expertise but as an enabler of its workforce and a catalyst for expanding opportunities and product offerings. Adequate resourcing, investment in technological infrastructure, and the purposeful use of AI will be required to strengthen operational resilience while advancing consumer financial well-being, including access and protection.
Adopt an AI growth mindset
Take a strategic approach to innovation
Modernize technology infrastructure
Boost operational resiliency through AI innovation
Enhance consumer financial well-being and consumer protection through AI innovation
Learning to cultivate AI fluency
AI education and training are vital investments for the financial sector. Continuous learning programs can help institutions keep up with the pace of change, and through collaboration, the industry can pool resources and accelerate progress. Enhanced learning programs around crucial areas like AI-enhanced social engineering and fraud are urgently needed to protect consumers and the integrity of the financial system. Educational efforts must also extend to consumers so that they understand how AI systems work, how their information is used, and how to recognize and protect themselves against threats, including AI-enabled fraud. At the same time, education should highlight the benefits AI offers, including clearer insights, personalized guidance, and easier day-to-day financial management.
Develop talent strategically
Pursue continuous and comprehensive AI learning
Collaborate on learning opportunities
Improve employee training for phishing and cybersecurity
Educate consumers on AI use in financial services
Ecosystem resiliency for a stronger financial system
Enhancing resiliency across the financial ecosystem will depend on a more coordinated, system-wide approach. Financial sector participants will need to work together to create common standards and disclosure requirements for critical third parties, and to upgrade response frameworks for critical financial infrastructure and for AI-related shocks or attacks. The public and private sectors will need to collaborate to ensure a clear regulatory environment, potentially by establishing sandboxes, enhancing secure digital IDs, and addressing gaps in information-sharing and incident response mechanisms.
Establish common standards and oversight for critical third parties
Enhance regulatory certainty
Implement secure digital identification and authentication
Strengthen information-sharing arrangements
Upgrade incident response frameworks
The AGILE Framework: Implementation Priorities
AI will continue to evolve rapidly, and the AGILE framework is built to evolve with it. Its strategic tenets will endure even as AI's reach and capabilities continue to advance. Immediate and longer-term priorities for navigating AI with the AGILE framework include:
Immediate priorities
Short to medium term
Conclusion
Artificial intelligence is reshaping the financial sector at a pace that demands both boldness and discipline. The potential is vast: economic growth, productivity gains, strengthened defences, improved consumer well-being and protection, and enhanced systemic resilience. Yet the risks of inaction or missteps are equally significant. Success requires balancing innovation with responsible adoption and safeguards for consumer trust, organizational resilience, and financial stability.
The AGILE framework provides a strategic roadmap for this transformation. Building on EDGE, it is designed for action.
This report translates the framework into concrete steps such as strengthening cyber hygiene, scaling AI literacy, and modernizing infrastructure. These actions form a foundation for collective progress and reflect a critical workshop insight: the greatest risk of AI is failing to act decisively. Institutions that hesitate risk falling behind technologically and competitively while exposing themselves to AI-amplified threats.
Success requires coordinated effort across industry, regulators, and government. Collaboration must become the norm through information sharing, learning initiatives, and greater regulatory certainty. Investment in talent and infrastructure must be prioritized to ensure Canada's financial system leads in responsible AI adoption. By operationalizing AGILE, the sector can deliver value for consumers, strengthen systemic resilience, and reinforce Canada's global competitiveness.
The challenges are urgent, but a clear path forward exists. With agility, foresight, and collaboration, Canada's financial ecosystem can seize AI's promise while safeguarding the trust and stability on which it depends.
Glossary
Adversarial attacks: Techniques used to mislead or compromise AI systems, potentially undermining model integrity and reliability.
Agentic AI: AI systems capable of autonomously initiating actions or decisions.
AI literacy: Understanding of AI capabilities, limitations, risks, and responsible use.
Attack surface: All potential points where unauthorized actors could attempt to compromise systems or data.
Authentication: Processes that verify identity, such as passwords, biometrics, or multi-factor authentication.
Automated bots: Software agents that execute tasks automatically.
Barriers to entry: Structural or regulatory factors that limit new competitors from entering the financial market.
Circuit breakers: Mechanisms that automatically pause or limit trading or system activity to prevent instability or cascading failures during abnormal or volatile conditions.
Cyber attack: Any attempt to disrupt, access, or compromise information systems without authorization.
Cyber criminals: Individuals or groups that conduct illegal digital activities, including fraud and data theft.
Data center: Facility that houses the specific IT infrastructure needed to train, deploy and deliver AI and other digital applications and services.
Data sovereignty: Requirement that data be stored and processed under the laws of the jurisdiction where it originates.
Deepfakes: AI-generated synthetic media that convincingly imitates real individuals, increasing risks of impersonation and fraud.
Digital identity: Technologies used to verify a user's identity electronically, often through documents, biometrics, or cryptographic credentials.
Disclosure: Required communication to regulators, stakeholders, or customers regarding material incidents or information.
Disinformation: False information deliberately created and spread to mislead, manipulate, or cause harm.
Entrants: New organizations entering a market, including fintechs and technology firms.
Explainability: The extent to which AI decision-making processes can be interpreted and justified to different stakeholders.
Fault-tolerant quantum systems: Quantum systems engineered to operate reliably even when qubits (i.e., the basic units of quantum information) encounter errors.
Financial fraudster: A person or group conducting fraudulent financial activities using stolen, synthetic, or manipulated identities.
Financial stability: The resilience of the financial system to shocks, ensuring continued functioning of markets and institutions.
Generative AI: AI that generates content, such as text, images, or code, based on patterns learned from training data.
Governance: Framework (structures, oversight, and controls) by which organizations are directed and controlled.
Inclusion by design: The deliberate creation of financial and AI systems that ensure equitable access and outcomes for diverse populations.
Kill switches: Controls that can instantly shut down an AI system, trading process, or automated operation to stop harmful or unintended actions.
Misinformation: False or inaccurate information shared without intent to deceive.
Nth-party: Any indirect third-party provider deeper in a supply chain (such as a fourth- or fifth-party) that supports a contracted vendor but is not directly engaged by the financial institution.
Open-source models: AI models whose architectures or weights are publicly accessible, enabling collaboration.
Phishing: The fraudulent practice of sending emails or other messages purporting to be from reputable companies in order to induce individuals to reveal personal information, such as passwords and credit card numbers.
Privacy: Assurance that the confidentiality of, and access to, certain information about an entity is protected.
Quantum computing: Computing technology leveraging quantum mechanics to solve complex problems, with potential implications for cryptography.
Robust safeguards: Controls, policies, and technologies able to withstand or overcome adverse conditions.
Social engineering: Manipulation techniques used to exploit human behaviour and gain unauthorized access to systems or information.
Social media: Online platforms where individuals share content.
Spear phishing: Highly targeted phishing attacks customized to specific individuals or organizations.
Spoofing: Impersonation of a trusted entity to deceive users into sharing sensitive information.
Synthetic identity: A partially fabricated identity combining real and fictitious information.
Synthetic identity fraud: Fraud committed using synthetic identities to open accounts or access credit.
Systemic vulnerability: A weakness with potential to impact the broader financial system rather than a single institution.
Threat actor: An individual or group responsible for malicious cyber or fraud activity.
Vishing: Voice-based phishing in which attackers impersonate trusted individuals or institutions over the phone to obtain sensitive information.
Zero-trust architecture: A cybersecurity model that assumes no user or device is inherently trustworthy and requires continuous verification, least-privilege access, and strict system segmentation.