Executive Summary

Across submissions, lack of explainability is most problematic where AI drives or informs consequential financial decisions, especially credit underwriting, pricing, and other consumer-impacting determinations. Respondents describe compounded challenges when complex or deep learning models are applied to high-stakes decisions, raising hurdles for fairness, compliance, and required consumer disclosures. Financial institutions manage these varied risks through use-case-based expectations: prioritizing inherently interpretable models for critical decisions, applying post-hoc tools where appropriate, and, at times, forgoing opaque models. The consensus is strong: explainability standards and governance must align with the risk of the specific use.
Key takeaways:

- 88.16% of respondents answered “Yes” to Question 3 (67 Yes, 9 No; 100% coverage).
- Explainability challenges are greatest for consumer-impacting uses like credit underwriting, risk assessment, and pricing.
- Complex models (e.g., deep learning) make outcomes harder to explain, especially for investment decisioning and capital adequacy.
- Explaining model results and adverse actions is cited as the most significant challenge in credit decisioning applications.
- 54% of executives identified lack of explainability and transparency as a top risk factor in AI adoption.
- Institutions often prioritize inherently interpretable models for critical use cases and may decline to deploy models that cannot be explained.
- Respondents advocate assessing explainability by use case and employing policies, procedures, and auditability to manage risks.
- Some regulators are moving to classify AI-based credit underwriting as high-risk, reinforcing explainability expectations.
Bottom line:
Lack of explainability is most challenging when AI affects consumers and compliance, particularly in credit decisioning and other high-stakes determinations, and when complex models obscure their rationale. Financial institutions address this by tiering requirements to the use case, privileging interpretable models for critical decisions, using validated post-hoc tools, and instituting governance, testing, and disclosure controls.

The Question (Ref #3)
For which uses of AI is lack of explainability more of a challenge? Please describe those challenges in detail. How do financial institutions account for and manage the varied challenges and risks posed by different uses?
Direct Response to the Catalog Question

Where explainability is most challenging: consumer-impacting decisions such as credit underwriting, pricing, and risk assessment, where fairness, adverse action notices, and regulatory scrutiny are paramount.

Complex/black-box models (e.g., deep learning) heighten challenges in investment decisioning, capital adequacy, and fraud/risk analytics due to layered abstraction and ongoing model changes that erode traceability.

Credit decisioning is singled out: explaining results and adverse actions is reported as the most significant operational hurdle for AI/ML decisioning uses.

Institutions manage by aligning explainability to the use case: prioritizing inherently interpretable models for critical decisions, and at times declining to implement opaque models that consumers or regulators cannot understand.

They augment with post-hoc explainability and interpretability tools, policies and procedures, conceptual soundness reviews, and auditability to satisfy governance and disclosure requirements.
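
Where respondents describe augmenting complex models with post-hoc tools, feature-attribution methods such as SHAP are a common instance. The sketch below is illustrative only, not a method any respondent named: it assumes the open-source shap library, a scikit-learn gradient-boosting model trained on synthetic data, and hypothetical underwriting feature names.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for underwriting data (hypothetical feature names).
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
feature_names = ["debt_to_income", "utilization", "delinquencies", "tenure_months"]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Post-hoc explanation: TreeExplainer yields per-feature contributions
# (SHAP values) that, together with the base value, sum to the model's
# raw score for each applicant.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # contributions for one applicant

for name, contribution in zip(feature_names, shap_values[0]):
    print(f"{name}: {contribution:+.3f}")
```

Attribution outputs like these support the conceptual soundness reviews and audit trails respondents mention, but they remain approximations of the model's logic, which is why validation of the explanation tooling itself is part of the governance picture.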

Use-case risk classification (e.g., high-risk for AI-based underwriting) informs stronger testing, fairness assessments, and escalation paths for models with significant consumer or regulatory impact.
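
As one concrete instance of the outcome testing mentioned above, the sketch below computes an adverse impact ratio, the "four-fifths" rule of thumb from US employment-selection guidance that is often borrowed for fair-lending screens. The decision data, group coding, and threshold are hypothetical; respondents did not prescribe a specific fairness metric.

```python
import numpy as np

def adverse_impact_ratio(approved: np.ndarray, group: np.ndarray) -> float:
    """Ratio of approval rates: protected group vs. reference group.

    Values below 0.8 are often flagged for review under the
    "four-fifths" rule of thumb.
    """
    rate_protected = approved[group == 1].mean()
    rate_reference = approved[group == 0].mean()
    return rate_protected / rate_reference

# Hypothetical decisions: 1 = approved, 0 = denied.
approved = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 0])
group = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 1])  # 1 = protected group
print(f"AIR: {adverse_impact_ratio(approved, group):.2f}")
```
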
By-the-numbers — Question 3
| Metric | Value |
|---|---|
| Total Yes | 67 |
| Total No | 9 |
| Total (Yes + No) | 76 |
| % Yes | 88.16% |
| % No | 11.84% |
| Coverage (% of answers) | 100% |

Introduction
Question 3 asks: For which uses of AI is lack of explainability more of a challenge, what are those challenges, and how do financial institutions manage the varied risks across uses? Respondents consistently point to high-stakes, consumer-facing decisions and complex modeling as the flashpoints, with institutions tailoring controls and model choices to each use case’s risk.
Historic Lessons in the Evidence

Respondents’ reasoning converges on a simple lesson: opacity is tolerable only where stakes are low. When AI decisions affect individual consumers or regulatory obligations, institutions have learned to favor interpretable approaches, invest in explanation tooling, or avoid deployment altogether. Attempts to retrofit explanations after the fact can be fragile, so firms increasingly treat explainability as a design constraint for high-risk uses.
Recent Developments
Respondents note momentum to classify AI-based credit underwriting as a high-risk category, elevating explainability and oversight expectations. Industry surveys report that explaining results—particularly adverse actions—remains the top challenge in credit decisioning, sharpening focus on practical, defensible explanation methods.
The Challenge

Practically, explainability breaks down when models rely on complex, evolving patterns that resist intuitive narratives, producing consumer confusion, regulatory friction, and fairness concerns. Adverse action requirements amplify this pressure in credit, while deep learning's abstraction complicates investment and capital decisions. Limited resources for diagnosing discrimination, variable data quality, and the heuristic nature of some post-hoc methods further strain traceability and accountability.
Evolving Metrics

Respondents ground their concerns in measures drawn from experience: executives ranking explainability among the top AI adoption risks, and practitioners citing adverse-action explanations as the most difficult hurdle in credit decisioning. They also point to use-case-centered definitions of explainability, conceptual soundness evaluations complicated by less transparent methods, and the growing use of explanation and visualization tools to supplement model understanding.
A Framework Inspired by the Inputs
An implicit, risk-based framework emerges: calibrate explainability expectations to the use case; choose inherently interpretable models for high-stakes decisions; where complexity is needed, add validated post-hoc explanations; document conceptual soundness; enforce policies, procedures, and auditability; and, where reliable explanations are infeasible, limit automation or decline deployment.
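
To make that framework concrete, here is a minimal sketch of how such a tiering policy might be encoded. The use cases, tier assignments, and defaults are hypothetical illustrations, not a standard any respondent cited; each institution would calibrate its own.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    HIGH = "high"      # consumer-impacting, compliance-critical
    MEDIUM = "medium"
    LOW = "low"

@dataclass(frozen=True)
class ExplainabilityPolicy:
    tier: Tier
    interpretable_model_required: bool   # prefer inherently interpretable models
    post_hoc_explanations_allowed: bool  # validated tools only
    human_review_required: bool

# Hypothetical use-case mapping.
POLICIES = {
    "credit_underwriting": ExplainabilityPolicy(Tier.HIGH, True, True, True),
    "pricing": ExplainabilityPolicy(Tier.HIGH, True, True, True),
    "fraud_triage": ExplainabilityPolicy(Tier.MEDIUM, False, True, True),
    "marketing_segmentation": ExplainabilityPolicy(Tier.LOW, False, True, False),
}

def policy_for(use_case: str) -> ExplainabilityPolicy:
    # Default to the strictest tier when a use case is unclassified.
    return POLICIES.get(use_case, ExplainabilityPolicy(Tier.HIGH, True, True, True))
```

Defaulting unclassified uses to the strictest tier mirrors the respondents' posture: where reliable explanations are infeasible, limit automation or decline deployment rather than assume the risk away.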

Case Study
Across submissions, credit underwriting illustrates the pattern. Institutions adopting ML face the need to generate clear, defensible adverse action reasons and demonstrate fairness. Many respond by preferring interpretable models for the decision core, layering post-hoc tools when complexity is unavoidable, implementing policies and audits for conceptual soundness, and—in some cases—electing not to use opaque approaches that cannot meet consumer and regulatory expectations.
Recommendations

- Map explainability requirements by use case, elevating standards for consumer-impacting and compliance-critical decisions (e.g., credit underwriting and pricing).
- Prioritize inherently interpretable models for high-stakes uses; avoid deploying models that cannot produce reliable, consumer-understandable reasons.
- When complexity is necessary, employ validated post-hoc explanation techniques and document conceptual soundness and limitations.
- Establish policies, procedures, and auditability to trace inputs, feature impacts, and decision rationale throughout the model lifecycle.
- Build and test adverse action reason generation as a first-class requirement in credit decisioning workflows (a minimal sketch follows this list).
- Conduct fairness and outcome testing tailored to the use case; escalate controls where classification as “high-risk” is warranted.
- Strengthen data quality and governance to support consistent, defensible explanations across iterative model updates.
- Maintain human-in-the-loop review and challenge for opaque or borderline cases, and be prepared to defer or replace models that cannot meet explainability thresholds.
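
The adverse action recommendation above can be illustrated with a short sketch: rank the most negative feature contributions for a denied applicant and map them to consumer-readable reason codes. The contribution values, feature names, and reason text here are hypothetical, and a real adverse action workflow would layer legal and compliance review well beyond this.

```python
# Hypothetical per-feature contributions for one denied applicant
# (e.g., SHAP values from a post-hoc explainer), keyed by feature.
contributions = {
    "debt_to_income": -0.42,
    "utilization": -0.31,
    "tenure_months": 0.10,
    "delinquencies": -0.05,
}

# Hypothetical mapping from model features to consumer-readable reasons.
REASON_CODES = {
    "debt_to_income": "Debt obligations are high relative to income",
    "utilization": "Revolving credit utilization is high",
    "delinquencies": "Recent delinquencies on credit obligations",
    "tenure_months": "Limited length of credit history",
}

def adverse_action_reasons(contribs: dict, top_n: int = 2) -> list:
    """Return the top_n most negative contributors as plain-language reasons."""
    negative = [(f, v) for f, v in contribs.items() if v < 0]
    negative.sort(key=lambda fv: fv[1])  # most negative first
    return [REASON_CODES[f] for f, _ in negative[:top_n]]

print(adverse_action_reasons(contributions))
# ['Debt obligations are high relative to income',
#  'Revolving credit utilization is high']
```
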
Conclusion
Explainability is most problematic precisely where it matters most: in high-stakes, consumer-impacting decisions and in complex models whose logic resists translation. Financial institutions respond by aligning model choices and controls to the use case, emphasizing interpretability for critical decisions, augmenting with explanation tools, and maintaining robust governance. Returning to the question, the varied challenges track the use case and model complexity, and institutions manage them through tiered expectations, documented conceptual soundness, and, when necessary, refusal to deploy opaque systems.
This analysis continues in our next publication; don't miss the next installment.
Follow us, stay informed, stay secure, and let's navigate the risk landscape together.

