Strategic Inquiry into AI Adoption: Federal Agencies Seek Stakeholder Insights on AI in Financial Institutions – Part 1

August 15, 2024

Alejandro Mijares
Founder and Chief Executive Officer, Mijares Consulting

November 8, 2023

In March 2021, five federal agencies, namely the Board of Governors of the Federal Reserve System, the Bureau of Consumer Financial Protection, the Federal Deposit Insurance Corporation, the National Credit Union Administration, and the Office of the Comptroller of the Currency, issued a request for information (RFI) on the use of artificial intelligence (AI), including machine learning (ML), by financial institutions (FIs). The RFI's primary objective was to gain insight from stakeholders into how financial institutions incorporate AI technologies, both in the services they offer to customers and for other business and operational purposes.

The RFI sought feedback on several critical aspects, including the appropriate governance structures, risk management protocols, and control mechanisms applied to AI. It also sought to understand the challenges financial institutions face in developing, adopting, and managing AI solutions, and to gather views on the use of AI in the financial services sector so the agencies could determine whether clarifications or guidance would help ensure that financial institutions use AI safely and compliantly, including in accordance with consumer protection laws and regulations.

The RFI was organized into the following categories:

1. Background Information

The agencies encourage financial institutions to innovate responsibly, which involves identifying and managing the risks associated with new technologies such as AI. With proper governance, risk management, and regulatory compliance, these technologies can enhance business decision-making and improve services for consumers and businesses. An appendix to the RFI provides a non-exhaustive list of laws, regulations, and agency statements that may apply to the use of AI by supervised institutions. Financial institutions are exploring many applications of AI, including but not limited to the following:

  • Flagging unusual transactions: AI aids in identifying suspicious activities using both structured and unstructured data. It helps in fraud detection, financial crime monitoring, and Bank Secrecy Act/anti-money laundering investigations.
  • Personalization of customer services: AI, including voice recognition and NLP, improves customer experiences. Examples include chatbots for routine customer queries and AI in call centers for customized service.
  • Credit decisions: AI informs credit decisions using traditional and alternative data sources.
  • Risk management: AI complements traditional risk management practices. It supports credit monitoring, loan management, and even liquidity risk management.
  • Textual analysis: Using NLP, AI analyzes unstructured textual data to obtain insights. Applications include reviewing regulations, news, earnings reports, and more.
  • Cybersecurity: AI detects threats, reveals attackers, and aids in mitigation, including real-time attack investigation and detection of malicious activities.

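The first bullet above describes anomaly detection on transaction data. As a simplified illustration (not part of the RFI, and far simpler than the ML systems institutions actually deploy), the sketch below flags transactions whose amounts deviate sharply from an account's history using a basic z-score rule:

```python
import numpy as np

def flag_unusual_transactions(amounts, threshold=3.0):
    """Flag transactions whose amounts lie more than `threshold`
    standard deviations from the mean (a simple z-score rule).
    Returns the indices of flagged transactions."""
    amounts = np.asarray(amounts, dtype=float)
    mean, std = amounts.mean(), amounts.std()
    if std == 0:  # identical amounts: nothing stands out
        return np.array([], dtype=int)
    z_scores = np.abs(amounts - mean) / std
    return np.flatnonzero(z_scores > threshold)

# Twenty routine transfers, then one outsized outlier.
history = [120.0, 95.5, 110.0, 130.25, 101.0, 99.9, 125.0, 118.4,
           102.3, 97.8, 121.7, 104.6, 99.0, 133.1, 108.2, 95.0,
           112.9, 127.4, 100.5, 105.8, 50_000.0]
print(flag_unusual_transactions(history))  # flags only index 20, the 50,000 outlier
```

Production systems replace this univariate rule with models trained on structured and unstructured data, but the workflow is the same: score each transaction against expected behavior and route outliers to investigators.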
2. Potential Benefits of AI

AI can significantly enhance the operations of financial institutions by increasing efficiency, cutting costs, and boosting performance. It excels in processing large datasets, both structured and unstructured, discerning patterns and relationships that traditional methods might miss. This capability enables better processing of textual information and offers advantages like swifter, more accurate underwriting. Furthermore, AI can expand credit access to previously underserved consumers and small businesses while allowing institutions to provide highly customized products and services.

3. Potential Risks of AI

Financial institutions need to recognize and manage the risks linked to AI, similar to other tools or models they utilize. Some risks tied to AI are general, like operational vulnerabilities, cyber threats, IT issues, third-party risks, model risks, and threats to institutional safety. Additionally, AI can magnify consumer protection risks, including potential illegal discrimination or privacy violations per acts like the Dodd-Frank and FTC Act. Three unique AI risk areas are:

  • Explainability: Some AI systems can be challenging to interpret in their overall function (global explainability) and specific decisions (local explainability). This opacity can hinder understanding of AI’s reliability, hamper audits, and complicate compliance with laws.
  • Data Usage: AI heavily relies on training data. If this data is flawed, biased, or unrepresentative, AI can perpetuate or intensify these errors, leading to biased or incorrect predictions.
  • Dynamic Updating: Certain AI models can self-update without human intervention. Such evolving systems can be complex to monitor, mainly when changes in external factors cause significant deviation from the original training data, leading to unpredictable and potentially problematic AI behavior.
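The dynamic-updating and data-usage risks above are commonly monitored in practice with distribution-drift metrics. As one hedged illustration (a standard industry technique, not something prescribed by the RFI), the sketch below computes the Population Stability Index (PSI), which model risk teams often use to detect when live data has drifted away from the data a model was trained on:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two samples of a model input (training-era vs. live data)
    using the Population Stability Index, a common drift metric in
    credit-model monitoring. Larger values indicate larger drift."""
    # Bin edges come from the expected (baseline) distribution.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover out-of-range live values
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Convert to proportions; a small floor avoids log(0) and division by zero.
    exp_pct = np.maximum(exp_counts / exp_counts.sum(), 1e-6)
    act_pct = np.maximum(act_counts / act_counts.sum(), 1e-6)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 10_000)  # distribution the model was trained on
shifted = rng.normal(1, 1, 10_000)   # live data after external conditions change
print(population_stability_index(baseline, baseline[:5000]))  # near 0: no drift
print(population_stability_index(baseline, shifted))          # large: significant drift
```

A rule of thumb often cited in practice treats PSI above roughly 0.25 as a signal that the input population has shifted enough to warrant model review, which is precisely the kind of monitoring control a self-updating model needs.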

4. Areas in Which the Agencies Request Comment

  • Explainability – 3 questions
  • Risks from Broader or More Intensive Data Processing and Usage – 2 questions
  • Overfitting – 1 question
  • Cybersecurity – 1 question
  • Dynamic Updating – 1 question
  • AI Use by Community Institutions – 1 question
  • Oversight of Third Parties – 1 question
  • Fair Lending – 5 questions
  • Additional Considerations – 2 questions

The agencies acknowledge AI’s potential to enhance efficiency, performance, and cost-effectiveness for financial institutions while benefiting consumers and businesses. Through the RFI, the agencies aim to understand financial institutions’ AI-related risk management practices, the challenges they face in adopting AI, and the perceived benefits of AI usage. The RFI also seeks views on AI in financial services to guide the agencies in deciding whether clarifying guidance is needed for AI’s safe, lawful, and consumer-friendly application in the sector.

5. List of Laws, Regulations, Supervisory Guidance, and Other Agency Statements Relevant to AI:

Laws and Regulations:

  • Section 39 of the Federal Deposit Insurance Act, as implemented through the agencies’ safety and soundness regulation
  • Sections 501 and 505(b) of the Gramm-Leach-Bliley Act, as implemented through the agencies’ regulations and standards, including the Interagency Guidelines Establishing Information Security Standards
  • Fair Credit Reporting Act (FCRA)/Reg. V
  • Equal Credit Opportunity Act (ECOA)/Reg. B
  • Fair Housing Act (FHA)
  • Section 5 of the Federal Trade Commission Act (prohibiting UDAP) and sections 1031 and 1036 of the Dodd-Frank Act (prohibiting unfair, deceptive, or abusive acts or practices (UDAAP))

Supervisory Guidance and Statements:  

  • Interagency Statement on the Use of Alternative Data in Credit Underwriting
  • Supervisory Guidance on Model Risk Management
  • Third-Party/Outsourcing Risk Management
  • New, Modified, or Expanded Bank Products and Services
  • CFPB Innovation Spotlight on Providing Adverse Action Notices When Using AI/ML Models

We invite you to read Parts 2, 3a, and 3b, which offer an in-depth look at the use of AI in financial and community institutions, linking insights from the agencies’ RFI with our Firm’s findings from an analysis of 61 responses to the RFI.

Your insights and feedback are invaluable in shaping the future of AI applications in a secure, lawful, and consumer-friendly manner. If you found these insights valuable, please share your thoughts in the comments below.

Thank you for your continued interest in our research and analysis.
