Strategic Inquiry into AI Adoption: Federal Agencies Seek Stakeholder Insights on AI in Financial Institutions – Part 3 (B): Community Institutions

September 13, 2024

Alejandro Mijares

Founder and Chief Executive Officer, Mijares Consulting

Welcome to Part “B” of our in-depth exploration of the challenges community institutions face in developing, adopting, and using Artificial Intelligence (AI). Part 3 of our series focuses on AI adoption in community institutions (Question 9) and the complexities these smaller financial entities encounter. Notably, only 28% of organizations responded to this question, making it the least addressed in our survey. This low response rate could mean that AI's impact is perceived as less significant in smaller, community-focused institutions, or it may simply reflect a lower level of engagement with AI technologies among them.

In Part “A”, we thoroughly examined the challenges related to Governance, Resource Limitations, Operational Integrity, and Regulatory Challenges. Building on this foundation, Part “B” will focus on the equally critical aspects of Third-Party Oversight, Data Management, Data Security, and Testing & Validation. Each of these areas presents its own set of hurdles for community institutions, from navigating the complex regulatory landscape that governs AI usage to ensuring the security and proper management of the vast amounts of data that AI systems require. We will also explore the intricacies of establishing effective oversight of third-party AI solutions and the challenges involved in the rigorous testing and validation of AI models. Join us as we continue to unravel the multifaceted challenges of AI adoption in community institutions, offering insights and potential strategies to overcome these obstacles.

In Part 1 of our series, we explored the efforts of federal agencies in gathering insights on AI in financial institutions, focusing on the structure and objectives of their Request for Information (RFI). Moving to Part 2, we shifted our focus to a detailed analysis of responses from 61 diverse organizations and individuals. These responses, which addressed 17 key questions from the RFI, provided a wide range of perspectives and deepened our understanding of the current and potential roles of AI in the financial sector.

Question 9 (a): Do community institutions face particular challenges in developing, adopting, and using AI?

Question 9 (b): If so, please provide detail about such challenges. What practices are employed to address those impediments or challenges?

Third-Party Oversight

Community financial institutions looking to leverage AI/ML tools often depend on third-party vendors to deploy these technologies. While cloud computing and API-based connections have simplified AI/ML integration, the techniques used in vendor solutions can be opaque. Institutions may not fully understand how these AI tools reach decisions, as vendors are often reluctant to disclose proprietary details of their algorithms. This can lead to serious problems, such as a model inadvertently making discriminatory credit decisions without the institution's knowledge or any understanding of the cause. This lack of transparency presents a substantial compliance risk, particularly with less established products.
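Even without visibility into a vendor's proprietary model, an institution can monitor the model's outputs. Below is a minimal Python sketch of an outcome-based disparate impact check; the column names, the sample data, and the 0.8 threshold (the informal "four-fifths rule") are illustrative assumptions, not an agency-prescribed method.

```python
# Outcome-based disparate impact check on logged vendor-model decisions.
# Column names, sample data, and the 0.8 threshold (the informal
# "four-fifths rule") are illustrative assumptions.
import pandas as pd

def disparate_impact_ratio(decisions: pd.DataFrame,
                           group_col: str = "group",
                           outcome_col: str = "approved") -> float:
    """Ratio of the lowest group approval rate to the highest."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return float(rates.min() / rates.max())

# Decisions logged from a vendor-hosted credit model.
log = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

ratio = disparate_impact_ratio(log)
if ratio < 0.8:  # four-fifths rule of thumb
    print(f"Potential disparate impact: ratio = {ratio:.2f}")
```

Run on a regular schedule over decision logs, a check like this can flag discriminatory patterns even when the vendor's algorithm remains a black box.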

The agencies have acknowledged that, irrespective of an institution’s size, capacity, or technical know-how, it is the duty of senior management to ensure a thorough evaluation of their AI systems, including their limitations, underlying assumptions, and potential areas for enhancement. This responsibility holds true even when these systems or capabilities are sourced from third-party providers. Financial institutions must undertake risk assessments related to these third-party relationships, exercise due diligence in selecting partners, manage contract negotiations, and ensure quality control of the services provided by these third parties. This obligation also applies in less apparent scenarios where AI is utilized, such as in the use of social media platforms for advertising purposes.

Additionally, numerous vendor tools often depend on the historical data supplied by the institution. If this data is biased, it is uncertain if and how the vendor might eliminate such bias. Community banks might request validation reports from vendors, but it remains uncertain if their staff possess the required resources to understand the testing methods used. Furthermore, there’s ambiguity regarding whether these vendor-provided validations would effectively uncover or highlight any weaknesses in the models.

In June 2023 and May 2024, the agencies released final guidance on managing the risks of third-party relationships. This guidance presents the agencies' view of sound risk management principles that banking organizations should apply throughout every stage of a third-party relationship's life cycle. Under the guidance, third-party risk management should be commensurate with the banking organization's risk level, complexity, and size, as well as the specific characteristics of each third-party relationship.

Data Management

Community banks often encounter significant data-readiness challenges when implementing AI. These are exacerbated by regulatory barriers and a scarcity of the data that larger institutions typically hold in abundance. This limitation makes effective AI application difficult for smaller institutions, often leading them to rely on traditional methods for data analysis, credit decisions, and loan issuance.

As AI systems require large volumes of high-quality, well-structured data for effective training and operation, these institutions frequently struggle with gathering and preparing data that meets these criteria. Many community banks may not have access to the same breadth and depth of data as larger banks, limiting the effectiveness of their AI applications. Additionally, issues such as data silos, outdated data management systems, and lack of standardized data formats further complicate their ability to utilize AI effectively. This lack of data readiness not only hinders the development and deployment of AI solutions but also impacts their accuracy and reliability, posing a substantial barrier to leveraging AI’s full potential in enhancing banking operations and customer service.
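A low-cost first step toward data readiness is a basic profiling pass over the tables an AI model would train on. The sketch below is a minimal illustration; the file name is hypothetical, and treating object-typed columns as a proxy for inconsistent formats is an assumption for the example.

```python
# Basic data-readiness profile for a table an AI model would train on.
import pandas as pd

loans = pd.read_csv("loan_history.csv")  # hypothetical export from a core system

report = {
    "rows": len(loans),
    "duplicate_rows": int(loans.duplicated().sum()),
    # Share of missing values per column.
    "missing_by_column": loans.isna().mean().round(3).to_dict(),
    # Dates or amounts stored as inconsistent strings typically show up
    # as object-typed columns.
    "object_typed_columns": list(loans.select_dtypes(include="object").columns),
}
print(report)
```

A profile like this does not fix silos or legacy systems, but it tells a bank concretely how far its data is from being AI-ready.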

Developing a robust, production-level ML system is infeasible without the right data and data infrastructure, which are essential for continuous model training and prediction, for tracking data lineage, and for pinpointing the causes of data drift. Smaller financial institutions (FIs) may struggle to allocate the resources needed to build such an infrastructure. It is worth noting, however, that larger entities often grapple with more complex data infrastructures, which bring their own challenges, particularly in managing data lineage and detecting data drift (a concrete drift check is sketched below). With multiple systems generating and collecting data (various core banking applications, loan origination systems, CRMs, and so on), changes in one system may not be adequately documented or communicated to all data users. Larger infrastructures can also be more cumbersome to modify for better ML compliance. Moreover, smaller banks may lack the advanced data management tools and expertise needed to organize large datasets efficiently, which is crucial for training accurate and reliable AI models.

In contrast, smaller players with little or no existing data infrastructure are uniquely positioned to create an ML-ready infrastructure from the outset, avoiding the costs and complexities of overhauling legacy systems. So while smaller FIs face resource-allocation challenges, they also hold a potential advantage: the ability to build efficient, ML-compatible data infrastructures from day one.
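To make drift detection concrete: a common monitoring technique in banking is the Population Stability Index (PSI), which compares a feature's current distribution against the distribution seen at training time. The sketch below is a minimal version; the bin count, the synthetic credit-score data, and the 0.1/0.25 alert thresholds are conventional choices used here as assumptions.

```python
# Minimal Population Stability Index (PSI) sketch for detecting data
# drift. The bin count and 0.1 / 0.25 thresholds are common conventions
# used here as illustrative assumptions.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline (training-time) sample and a current sample."""
    # Bin edges come from the baseline distribution.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range values
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor avoids log(0) in empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(650, 50, 10_000)  # e.g., credit scores at training time
current = rng.normal(630, 60, 2_000)    # scores arriving in production

# Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift.
print(f"PSI = {psi(baseline, current):.3f}")
```

Even a modest, purpose-built pipeline that logs baseline distributions and recomputes PSI on incoming data gives a small institution much of the drift visibility that larger players struggle to retrofit.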

Because the current customer base of a community financial institution may not be extensive enough, these institutions may see no substantial performance improvement over existing underwriting and scoring methods. Moreover, the likelihood that smaller institutions can effectively apply AI techniques to their in-house data is low: uncovering patterns that traditional methods miss requires a far larger data set than is typically available to them. In addition, the costs of acquiring, administering, and implementing controls for non-traditional data might exceed the potential benefits.

The success of AI implementations is heavily reliant on the quality and structure of the data used. Many community banks, however, grapple with data systems that are either fragmented or unstructured, leading to data being scattered across multiple platforms and formats. Often, data is compartmentalized in various applications and systems that lack interoperability. Such disorganization poses significant challenges in compiling, cleansing, tagging, and processing data in a manner suitable for AI algorithms. This issue not only hampers the seamless adoption of AI in their operations but also restricts the full range of benefits AI can provide, including enhanced customer insights, improved risk management, and increased operational efficiency.

Data Security

Data security remains a paramount concern in the realm of AI. The extensive datasets required to train AI systems are an attractive target for cybercriminals, posing significant risks. Unauthorized access to this data, whether through a third-party AI provider or within the FI itself, represents a substantial threat. Safeguarding such data is critical, not only to protect sensitive information but also to maintain customer trust and regulatory compliance.

Additionally, the ‘walled garden’ nature of many FIs’ core systems further complicates data accessibility. These systems are often designed to be opaque, limiting the visibility of the data they hold. Vendors, who control these systems, are typically hesitant to relinquish this control, making it challenging for FIs to access and utilize their own data effectively.

This issue extends beyond community institutions to companies, research institutions, governments, and other entities. A common challenge across these sectors is a lack of expertise in configuring and understanding neural network AI. Setting up the numerous parameters of a neural network is a complex task, and comprehending the rationale behind an AI's decisions or classifications, even in a general sense, is often beyond the capability of many within these organizations. This gap underscores the need for greater education and training in AI technologies to enable more effective and informed use of these powerful tools.

Testing & Validation

There is still uncertainty around whether and how community institutions can independently verify their models to ensure they produce unbiased outcomes. This poses distinct model-validation challenges, especially since these organizations often lack access to the source code or to the validation tests conducted by vendors. Regulators are attempting to address the issue by requiring community financial institutions to validate AI/ML systems in high-risk areas such as BSA/AML. As noted under Third-Party Oversight, vendor-supplied validation reports help only if bank staff have the resources to understand the testing methods used, and it remains unclear whether those reports would surface any weaknesses in the models.
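One practice available even without vendor source code is independent outcome backtesting: comparing the model's historical scores against what actually happened. A minimal sketch follows; the logged fields and sample values are hypothetical, and scikit-learn's AUC metric stands in for whatever statistic the institution's model risk policy prescribes.

```python
# Independent backtest of a vendor credit model using only its logged
# outputs. Field values are hypothetical; no access to the vendor's
# source code is required.
import numpy as np
from sklearn.metrics import roc_auc_score

# Vendor scores as logged at decision time (higher = safer) and the
# realized 12-month default outcomes for the same accounts.
scores = np.array([720, 610, 680, 540, 700, 590, 630, 750])
defaulted = np.array([0, 1, 0, 1, 0, 1, 0, 0])

# AUC near 0.5 means the scores carry no signal; closer to 1.0 is better.
# Scores are negated so that higher values predict default.
auc = roc_auc_score(defaulted, -scores)
print(f"Backtested AUC = {auc:.2f}")
```

Tracked quarter over quarter, a declining AUC is direct evidence of model degradation, independent of anything the vendor's validation reports do or do not disclose.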

AI models, especially those based on machine learning, can be intricate and not entirely transparent. Community institutions might struggle to grasp the full extent of risks associated with these models, such as biases, errors, and the possibility of unforeseen outcomes. A lack of in-depth understanding of these risks can lead to insufficiently rigorous testing and validation, raising concerns about the safety and reliability of the models.

The process of testing and validating AI models demands substantial resources, both in terms of time and finances. Given their generally smaller budgets and streamlined operations, community institutions may find it challenging to dedicate the necessary resources for comprehensive testing and validation.

The integration of AI systems typically necessitates considerable modifications to existing procedures and workflows. Community institutions often face difficulties in managing these changes, which includes training personnel to operate new AI systems and adjusting internal processes to integrate these new technologies effectively.

Addressing these challenges requires a strategic approach, including investment in staff training, partnerships with knowledgeable vendors, and a focus on data management and model risk assessment.

Thank you for reading our article on community financial institutions and artificial intelligence. If you found these insights valuable, please ‘Like’ and ‘Share’ to spread the knowledge within your professional network. Your engagement is crucial in fostering a broader understanding of these important topics.

For ongoing updates and deeper dives into the intersection of finance and technology, don’t forget to ‘Follow’ us. Your involvement is key to building a community that stays at the forefront of industry trends and innovations.

We appreciate your support and look forward to continuing this journey together. Let’s keep the conversation going!
