AI adoption in finance expanding beyond underwriting

Beyond Underwriting: Where AI Is Expanding in Finance—and Why Adoption Still Stalls

Executive Summary


Respondents identified a wide array of additional AI uses across financial services, from fraud and cybersecurity to customer engagement, trading, and back-office processes. At the same time, they cited concrete impediments—explainability, data quality, regulatory clarity, legacy systems, skill gaps, and governance scalability—that slow adoption. With 92.11% of respondents answering “Yes” to this question, the industry consensus is clear on both the breadth of use and the persistence of risk challenges. The key task is to surface these new use cases while confronting the specific risk management barriers that impede their safe deployment.

Key takeaways:

  • 92.11% of respondents (70 of 76) answered Yes, and 100% coverage confirms broad recognition of additional AI uses and adoption challenges.
  • Banks report use beyond credit: fraud/AML, cybersecurity, marketing, customer service, back-office processing, and collateral valuation.
  • Front-office and client-facing growth areas include chatbots, remote onboarding, biometrics, and personalized offers.
  • Investment and markets functions add algorithmic trading, portfolio optimization, model validation, and backtesting.
  • Explainability and interpretability remain top blockers, especially for credit and compliance-heavy use cases.
  • Data quality and alternative data governance are recurring risks; predictions are only as good as the underlying data.
  • Regulatory clarity and “regulatory comfort” materially affect willingness to scale AI, particularly for adaptive models.
  • Legacy systems, limited AI talent, and uneven model governance hinder especially smaller institutions.

Bottom line:

Financial institutions are extending AI into fraud, cybersecurity, customer engagement, markets, and operations, not just underwriting. Adoption is impeded by explainability, data quality, regulatory clarity, legacy tech, skills, and scalable governance—factors firms must address to realize benefits safely.


The Question (Ref #16)

To the extent not already discussed, please identify any additional uses of AI by financial institutions and any risk management challenges or other factors that may impede adoption and use of AI.

Direct Response to the Catalog Question


Additional uses identified include fraud detection/prevention, AML, cybersecurity, marketing, customer service/chatbots, back-office processing, and collateral valuation.


Client-facing expansion spans remote onboarding, biometric ID, personalized advertisements, and chatbots.


Markets and advisory use cases include algorithmic trading, portfolio optimization, model validation, backtesting, robo-advising, and regulatory compliance.


Key impediments: explainability/interpretability, overfitting/robustness, and bias/fair lending compliance concerns.


Operational barriers include data quality, legacy systems and limited data sharing, model governance gaps, and AI skills shortages.


Regulatory uncertainty and the need for supervisory comfort slow scaling, with adaptive models requiring flexible governance and more frequent updates.

By-the-numbers — Question 16

  • Total Yes: 70
  • Total No: 6
  • Total (Yes+No): 76
  • % Yes: 92.11%
  • % No: 7.89%
  • Coverage (% of answers): 100%
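The percentages above follow directly from the reported counts; a minimal sketch, assuming only the Yes/No tallies from the table:

```python
# Reproduce the Question 16 percentages from the raw counts.
yes_count, no_count = 70, 6
total = yes_count + no_count

pct_yes = round(100 * yes_count / total, 2)  # 70/76
pct_no = round(100 * no_count / total, 2)    # 6/76

print(f"Total: {total}, % Yes: {pct_yes}%, % No: {pct_no}%")
# → Total: 76, % Yes: 92.11%, % No: 7.89%
```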

Introduction

Question 16 asks: To the extent not already discussed, what additional uses of AI are financial institutions pursuing, and what risk management challenges or other factors may impede adoption and use? The materials collectively point to expanding AI applications across the financial value chain, paired with a consistent set of risk, governance, and operational constraints that shape deployment pace and scope.

Historic Lessons in the Evidence


Respondents’ reasoning converges on a few themes: models that cannot be explained or robustly validated invite adoption delays; weak or uneven governance, especially for third-party and adaptive models, raises supervisory concerns; and poor data quality and alternative data challenges degrade performance and fairness. Organizations also emphasized that legacy systems and institutional inertia can outweigh technical readiness, while skill gaps intensify reliance on opaque tools—compounding trust and accountability issues.

Recent Developments

Not observed in the provided materials.

The Challenge


Institutions see material upside from applying AI to new domains, but they wrestle with explainability for high-stakes uses, data quality and bias in alternative data, and model robustness across shifting conditions. Regulatory clarity and scalable governance for self-learning systems—alongside legacy tech constraints and workforce shortages—create a practical ceiling on how fast and how far AI can be deployed.

Evolving Metrics

Respondents referenced risk-based expectations and called for explicit thresholds and standards—bias measurement techniques, minimum explainability requirements, and rigorous model risk testing—to justify AI use. Several emphasized data-centric validation and quality controls as prerequisites for trustworthy outcomes, noting that models are only as good as their training data. One survey reported quantified impediments—AI expertise, data sufficiency, ethics, privacy/security, and legal/compliance—as top barriers guiding prioritization.

A Framework Inspired by the Inputs


An implicit pattern emerges: start with a data-centric approach and a risk-based tiering of use cases; apply strong model governance and explainability tooling; iterate controls for adaptive models; and engage early with supervisors to build regulatory comfort. Institutions prioritize lower-risk, high-ROI domains (e.g., fraud, chatbots) before extending into credit and markets where fairness, robustness, and auditability standards are tighter.
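As one illustration of the tiering idea, the framework could be expressed as a simple lookup that gates deployment by governance maturity. The tier assignments, maturity scale, and function names below are hypothetical, not drawn from the source materials:

```python
# Hypothetical sketch of risk-based tiering for AI use cases.
# Tier numbers and assignments are illustrative assumptions.
USE_CASE_TIERS = {
    "fraud_monitoring": 1,      # lower-risk, high-ROI: pilot first
    "customer_chatbot": 1,
    "aml_screening": 2,
    "collateral_valuation": 2,
    "credit_underwriting": 3,   # fairness/auditability standards are tighter
    "algorithmic_trading": 3,
}

def deployment_allowed(use_case: str, governance_maturity: int) -> bool:
    """Gate deployment: governance maturity must meet or exceed the tier."""
    tier = USE_CASE_TIERS.get(use_case)
    if tier is None:
        raise KeyError(f"unclassified use case: {use_case}")
    return governance_maturity >= tier

print(deployment_allowed("fraud_monitoring", 1))    # → True
print(deployment_allowed("credit_underwriting", 1)) # → False
```

The design choice mirrors the sequencing the framework describes: lower-risk domains clear the gate early, while credit and markets use cases wait until governance controls mature.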

Case Study

A representative pattern shows firms piloting AI in fraud monitoring and customer support, then progressing toward underwriting and risk models once governance matures. They implement fair lending policies and model risk management controls, add explainability to address supervisory expectations, and pursue data quality improvements to reduce bias and overfitting. Scaling proceeds as institutions gain regulatory comfort and adapt governance for more frequent updates in adaptive models.


Recommendations

  1. Map additional AI uses beyond credit—fraud/AML, cybersecurity, chatbots, onboarding/biometrics, collateral valuation, and portfolio analytics—while tiering them by risk and impact.
  2. Institutionalize explainability with documented methods, thresholds, and challenge processes for high-stakes decisions to address supervisory and fair lending expectations.
  3. Adopt a data-centric program: strengthen data quality controls, alternative data governance, lineage, and monitoring to reduce bias and model drift.
  4. Modernize model governance for adaptive models, including more frequent reviews, update logs, and independent validation of third-party tools.
  5. Reduce legacy constraints by upgrading data-sharing pipelines and integrating AI with core systems to enable auditable, scalable deployment.
  6. Close skills gaps by training and hiring AI professionals and enabling cross-functional governance (risk, compliance, legal) to oversee development and use.
  7. Engage supervisors early to build regulatory comfort, aligning on explainability, fairness testing, and documentation standards before scaling.
  8. Sequence deployment: start in lower-risk, well-understood domains (fraud, service automation) and expand to credit and markets as controls and evidence mature.

Conclusion


The record shows AI advancing across fraud, cybersecurity, customer engagement, markets, and operations—well beyond traditional underwriting. Yet explainability, data quality, regulatory clarity, legacy constraints, skill shortages, and governance scalability still impede adoption. Addressing these barriers with a risk-based, data-centric, and explainability-forward approach—paired with early supervisory engagement—will unlock broader, safer use. That is the practical path to meeting Question 16’s call: identify new uses and resolve the specific impediments that stand in the way.

This analysis will continue in our next publication. Don’t miss the next installment.

Follow us, stay informed, stay secure, and let’s navigate the risk landscape together.