What is Explainable AI?

Explainable AI – A Key to the Practical Success of AI and ML

Artificial intelligence (AI) and machine learning (ML) systems often generate answers or predictions with no explanation or reasoning. In many cases, even the data scientists who know the inner workings of the AI/ML platform are unable to explain the reasoning behind the platform's answers and predictions. One of the best-known examples of this black-box problem is ChatGPT, which gives impressive answers that are usually correct but cannot clearly explain how those answers were generated.

In contrast, explainable AI provides relatively simple explanations or reasons behind answers or predictions and is a key component of Transparent AI.

Why is Explainable AI important?

The biggest reason why explainable AI is important is trust. 

AI can only be fully leveraged when it is trusted. Subject matter experts will discount the value of AI if it provides answers and predictions that contradict their expertise. This will be true whether the subject matter expert is a loan officer or a surgeon. Citizens will demand that the government limit or outlaw certain AI applications if they don't trust them. We've already seen various limits and even outright bans on AI-based facial recognition in certain states and cities.

Trust is fundamental to the acceptance of the recommendations of any expert – human or AI. This need for trust is the driving force behind various laws and regulations that grant individuals a “right to explanation”.  For example —

  • The U.S. banking and financial services industry is required by law to give applicants who are denied credit a statement of the specific reasons for the denial (Equal Credit Opportunity Act, Title 12, Chapter X, Part 1002, §1002.9).
  • U.S. insurance companies are required to explain their rate and coverage decisions to their customers.
  • The European General Data Protection Regulation (GDPR) mandates that explanations be available for algorithm-based decision-making. This is particularly important as GDPR may apply to U.S. businesses that do business in the EU or that process the personal data of EU citizens.

Confidence scores

One of the issues with explainable AI is determining what constitutes an adequate explanation.

According to Jim Guszcza, Chief Data Scientist at Deloitte, humans comprehend explanations best in the form of rules and thresholds. Each reason output by the AI/ML platform should therefore include a confidence score that rates how believable the reason is, so that a human can decide whether to act on that reason (high confidence score) or not (low confidence score).

Confidence scores, which are derived from the training data, are typically expressed as percentages and reflect how strongly we should rely on an AI recommendation. For example, a confidence score over 70% reflects high confidence in the prediction or recommendation. In contrast, a 30% confidence score suggests that the recommendation may be correct, but that the level of certainty is low.

In summary, confidence scores quantify explainability. Without a confidence score, AI explanations are academic – not actionable.
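As a minimal sketch of how confidence scores make explanations actionable (the function name and thresholds below are hypothetical, mirroring the 70% and 30% figures above, not part of any specific product):

```python
def route_recommendation(reason: str, confidence: float,
                         act_threshold: float = 0.70,
                         review_threshold: float = 0.30) -> str:
    """Decide what to do with an AI recommendation based on its
    confidence score. Thresholds are illustrative only."""
    if confidence >= act_threshold:
        return f"ACT on '{reason}' (confidence {confidence:.0%})"
    if confidence >= review_threshold:
        return f"REVIEW '{reason}' (confidence {confidence:.0%})"
    return f"DISCARD '{reason}' (confidence {confidence:.0%})"
```

In practice, the thresholds would be tuned to each use case and its risk tolerance, and a human would remain in the loop for the middle band.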

Counterfactuals – How to change a “no” answer to a “yes”

One of the problems with explainable AI is that people don’t always want a reason for an AI recommendation.  They want the answer to be changed!

For example, if your company wants a loan, and you believe that your company can repay the loan, you’re not going to be happy if the AI platform tells the loan officer to deny the loan – even if it gives reasons. You may be more willing to accept the answer, however, if the system can also give some guidelines for what changes you could make in your business in order to get the loan approved. This information on how to change a recommendation is called a counterfactual.

Explainable AI provides the reasons for a recommendation or prediction, but counterfactual inferencing takes it a step further: it identifies the optimal way to change the recommendation, including which features to change and by how much. As with the original explanations, the counterfactuals should be delivered (using an LLM) in natural language.
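To make the idea concrete, here is a brute-force sketch of counterfactual search. Every name, feature, and threshold is hypothetical for illustration, and real systems use optimization-based methods rather than exhaustive search, but the goal is the same: find the smallest set of feature changes that flips a "deny" to an "approve".

```python
import itertools

def find_counterfactual(predict, applicant, candidate_values, max_changes=2):
    """Return the smallest set of feature changes that flips the
    prediction to "approve", or None if no combination works.

    predict          -- function: feature dict -> "approve" / "deny"
    applicant        -- the applicant's current feature values
    candidate_values -- possible new values to try for each feature
    """
    for k in range(1, max_changes + 1):  # try the fewest changes first
        for features in itertools.combinations(candidate_values, k):
            for values in itertools.product(*(candidate_values[f] for f in features)):
                candidate = dict(applicant)
                candidate.update(zip(features, values))
                if predict(candidate) == "approve":
                    return dict(zip(features, values))
    return None

# A toy stand-in for a trained loan model: approve when debt is low
# enough and revenue is high enough (made-up thresholds).
def predict(x):
    return "approve" if x["debt_ratio"] <= 0.35 and x["annual_revenue"] >= 500_000 else "deny"

applicant = {"debt_ratio": 0.50, "annual_revenue": 450_000}
candidate_values = {"debt_ratio": [0.40, 0.35], "annual_revenue": [500_000, 600_000]}
changes = find_counterfactual(predict, applicant, candidate_values)
# changes -> {"debt_ratio": 0.35, "annual_revenue": 500000}
```

An LLM would then phrase the result in natural language, e.g. "reduce your debt ratio to 35% and raise annual revenue to $500K to qualify."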

Celsior Explainable AI / ML Solution

Our Explainable AI / ML solution provides explainable AI, confidence scores, and counterfactuals. These benefits are available whether you use it as a stand-alone platform or alongside your existing AI/ML systems.

If you are interested, we would be happy to provide more information about our solution or to give you a demo of it.

To explore the full set of strategic AI recommendations for financial institutions — including explainability, governance, and adoption frameworks — download our white paper, Banking on AI.

