Learn about the ways AI components can unknowingly "sneak" into an institution's processes.
Artificial intelligence (AI) continues to be among the hottest topics in financial services, as financial institutions weigh the risks and benefits of leveraging AI to enhance productivity and drive outcomes. Yet even as many banks deliberately step into the world of AI by adopting AI-based modeling techniques, there are several ways for AI components to unknowingly "sneak" into an institution's processes, presenting a unique set of risks and considerations.
For example, existing third-party models in the model inventory may undergo automatic vendor updates that introduce new AI-driven features. A departmental team may be using AI-driven products and tools as intended, but if those products and tools are not currently considered part of the model inventory, they may remain uncontrolled or unvalidated. Alternatively, team members may rely on unapproved applications to enhance productivity, leading to instances of "Shadow AI": the use of AI tools by employees without IT approval or oversight.
These are just a few examples of how AI can find its way into an organization's inner sanctum, and they represent only a small share of the hidden risks that can manifest if left unchecked. While some financial institutions may have a general idea of how to work with AI and navigate its risks, it is important to consider the unique risks that can emerge from hidden AI within a potentially disjointed AI governance framework, and how institutions can safeguard themselves, their employees, and their reputations with risk mitigation strategies.