Artificial intelligence (AI) continues to be among the hottest topics in financial services, as financial institutions weigh the risks and benefits of leveraging AI to enhance productivity and drive outcomes. While many banks are deliberately adopting AI-based modeling techniques, AI components also can “sneak” into an institution’s processes unnoticed, presenting a unique set of risks and considerations.
For example, existing third-party models in the model inventory may receive automatic vendor updates that introduce new AI-driven features. A departmental team may be using AI-driven products and tools as intended, but if those products and tools are not considered part of the model inventory, they may remain uncontrolled or unvalidated. Alternatively, team members may be relying on unapproved applications to enhance productivity, leading to instances of “Shadow AI,” the use of AI tools by employees without IT approval.
These are just a few examples of how AI can find its way into an organization’s inner sanctum, and they represent only a fraction of the hidden risks that can manifest if left unchecked. While some financial institutions may have a general sense of how to work with AI and navigate its risks, it is important to consider the unique risks that can emerge from hidden AI within a disjointed AI governance framework, and how institutions can safeguard themselves, their employees, and their reputations through risk mitigation strategies.
Navigating the Hidden AI Risks
The unknown or unauthorized use of AI within an institution can present its own set of novel risks, some of which may include:
Data Risk:
Data management, and the risk of data mismanagement, reaches new heights with AI models, making it essential to oversee data sourcing, storage, and privacy for both training data and in-production model interactions. AI systems are frequently developed by private companies subject to fewer regulatory and transparency requirements. This can make many large language models opaque: institutions may not know how a model’s underlying training data was collected, how and for how long it is stored, or how data is protected in ongoing use so that institutional and/or client data is not misused or leaked.
Legal Risk:
Legal and contractual risk can arise when an institution is unknowingly using AI, as AI-driven products and tools may conflict with the terms of a client contract or trigger regulatory, privacy, or copyright ramifications. If an institution does not regularly review client contracts and applicable jurisdictional law, it may unknowingly breach a contract or regulation by using an AI product or tool.
Third-Party Risk:
While third-party risk can stem from contract breaches, additional risks may emerge when institutions use third-party tools and products with contracts that have not been updated to account for AI components or features. Models and tools may undergo automatic updates, introducing new features into production without the business’s knowledge. Some of these updates may incorporate AI components, necessitating extra governance and oversight.
Reputational Risk:
Mistakes and incidents that arise from using AI, such as exposing sensitive data or breaching a contract, can lead to reputational damage. For example, using AI in customer service runs the risk of chatbot hallucinations, where the chatbot produces information that is misleading, untrue, or even harmful, causing end users to question the institution’s reputability.
Unpredictability Risk:
Certain AI modeling techniques, such as neural networks, carry a heightened risk of unpredictability, where the model may exhibit behaviors that were unintended and not explicitly programmed. This unpredictability can appear in both the model’s capabilities and the errors it might make. In addition, it is difficult to predict how humans will interact with the model, which can lead to potential misuse.
Understanding AI Risk Mitigation
Embracing the power of AI can introduce complex and unprecedented risks, but fortunately, there are risk mitigation strategies to help protect the institution, its employees, and its clients. Below are some ways institutions can mitigate these novel risks:
- Verify that data management and governance processes across the institution include AI.
- Expand and refine established data governance and metadata management best practices to incorporate AI use cases, with a focus on data lineage, traceability, data quality, and retention of both internal and external data.
- Regularly review contracts with third-party vendors to confirm they include clauses that protect the institution’s data and require notification of any AI usage.
- Make sure that both parties are aware of any clauses pertaining to AI and add these clauses to contracts that do not currently specify AI. Stay aware of how data, especially sensitive data, is being collected, stored, and protected.
- Establish an organizationwide AI use policy as well as a process to identify and maintain an AI use case inventory (a sketch of one possible inventory entry follows this list).
- Develop, document, and implement an organizationwide process for acceptable AI use so that all employees are trained on the policy and are aware of the risks.
- Perform Model Risk Management’s (MRM) model determination process for AI-driven tools and products, and consider enhancing risk governance surrounding these tools, depending on the outcome of the determination process (tool versus model).
- Evaluate AI use cases and existing non-models and consider revising the institution’s definition of a model, so that any AI-driven solutions adhere to the same MRM policies and procedures as other models in the inventory.
- Develop, document, and implement a thorough review and approval process for all new AI tools, products, and partnerships.
- Implement a multidisciplinary approval process to potentially restrict access to certain tools and products and help ensure any new products are being thoroughly vetted.
- Evaluate existing internal controls and limitations for users.
- Implement internal controls around AI use cases as they are built, in partnership with the second and third lines of defense. As use cases are implemented, pilot them with a controlled group and expand usage upon successful testing. For certain models and tools, user access controls and limitations can help ensure that only necessary parties have access to potentially sensitive information.
- Construct an agreed-upon AI framework that includes all aspects of the business.
- A comprehensive AI framework will help align all teams with respect to objectives, acceptable use cases, and procedures surrounding AI. Implementing both multidisciplinary and integrated second-line and third-line oversight will help ensure enterprisewide partnership for controlled AI use, along with long-term understanding of and adherence to AI policies and frameworks.
- Check that periodic testing, ongoing monitoring, validations, and audits are performed to assess the outputs of any known AI components.
- Periodic testing, ongoing monitoring, outcomes analysis, and second- and third-line governance for all AI tools and AI-driven models help the business identify anomalies and undesired outcomes (a simple output-monitoring sketch also follows this list). To minimize unpredictability, AI models can also undergo randomized controlled trials, and model validation remains critical. Furthermore, audits should cover each of the risks listed above, holistically checking for common risks, promoting governance, and helping ensure the AI risk framework and related policies and procedures are being followed within the institution.
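Institutions structure AI use case inventories in many different ways. As one illustration only, the Python sketch below shows how a single inventory entry might be captured; every field name, enumeration, and example value is a hypothetical assumption rather than a prescribed standard or regulatory requirement.

```python
# A minimal sketch of one way an AI use case inventory entry could be
# structured. All names and values are illustrative assumptions.
from dataclasses import dataclass
from datetime import date
from enum import Enum


class Determination(Enum):
    """Hypothetical outcome of MRM's model determination process."""
    MODEL = "model"      # falls under full MRM policy
    TOOL = "tool"        # lighter-weight governance still applies
    PENDING = "pending"  # not yet assessed


@dataclass
class AIUseCase:
    """One entry in a hypothetical AI use case inventory."""
    use_case_id: str              # internal identifier
    business_unit: str            # owning team or department
    description: str              # what the AI component does
    vendor: str | None            # third-party provider, if any
    uses_sensitive_data: bool     # drives data-governance controls
    determination: Determination  # tool-versus-model classification
    approved: bool = False        # passed the review and approval process
    last_reviewed: date | None = None


# Example entry for a hypothetical vendor add-on with an AI-driven feature.
inventory = [
    AIUseCase(
        use_case_id="UC-0042",
        business_unit="Commercial Lending",
        description="Vendor add-on that summarizes credit memos",
        vendor="ExampleVendor",
        uses_sensitive_data=True,
        determination=Determination.PENDING,
    )
]
```

Even a simple record like this gives the second and third lines a single place to see which use cases touch sensitive data, which vendors are involved, and which items still await a model determination.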
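Ongoing monitoring and outcomes analysis can likewise take many forms. The sketch below shows one common approach, comparing the distribution of recent model outputs to a baseline using the population stability index (PSI); the bin count, thresholds, and simulated data are illustrative assumptions only, not a recommended configuration.

```python
# A minimal sketch of distribution-shift monitoring for model outputs using
# the population stability index (PSI). Thresholds are placeholder values.
import numpy as np


def population_stability_index(baseline, recent, bins=10):
    """Return the PSI between baseline and recent score distributions."""
    # Bin edges come from the baseline so both samples share one scale;
    # recent values outside the baseline range are simply not counted.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(recent, bins=edges)

    # Convert counts to proportions; a small epsilon avoids dividing by or
    # taking the log of zero for empty bins.
    eps = 1e-6
    expected_pct = np.clip(expected / expected.sum(), eps, None)
    actual_pct = np.clip(actual / actual.sum(), eps, None)

    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))


# Illustrative usage with simulated scores; 0.1 and 0.25 are commonly cited
# rule-of-thumb PSI thresholds, used here only as placeholders.
rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.5, 0.1, 5_000)
recent_scores = rng.normal(0.55, 0.12, 1_000)

psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.25:
    print(f"PSI {psi:.3f}: significant shift, escalate to MRM for review")
elif psi > 0.1:
    print(f"PSI {psi:.3f}: moderate shift, investigate")
else:
    print(f"PSI {psi:.3f}: stable")
```

A check like this is only one input to a monitoring program; it complements, rather than replaces, outcomes analysis, validation, and audit coverage of the other risks described above.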
Wielding the Power of AI Wisely
AI’s potential to revolutionize business processes is unlimited. With quickly evolving technology solutions, AI will have a greater presence within many organizations looking to remain lean and competitive in the marketplace. However, wielding the power of AI wisely and understanding how to identify and mitigate associated risks early is critical.
By adopting a comprehensive, multidisciplinary approach to AI governance, businesses can enhance the integrity and reliability of their AI-driven tools and products. In establishing robust processes for use case acceptance, third-party contract review, ongoing monitoring, and AI-specific control testing, along with implementing strict user access controls and a companywide AI framework, businesses can significantly mitigate known and unknown risks and align teams on consistent objectives and practices. This strategic oversight will not only help safeguard sensitive information but also foster innovation and trust in AI applications throughout the organization.
For more information about AI, read our other FORsights™ articles.