Editor’s note: As AI has advanced, the lack of explainability of its outputs has become increasingly apparent. To address this, techniques have been developed that can explain which factors an AI considers, and to what extent, when generating an output. These techniques are collectively termed Explainable Artificial Intelligence (XAI). In this blog, we will delve deeper into XAI, discuss its strategies, and look at a few of its applications.
In every article you read about AI, you are bound to find mentions of “lack of explainability” or “black boxes” in the drawbacks section. Despite AI’s widespread use across functions and industries, there is often no explanation for how these advanced models arrive at a specific output.
Even for the data scientists and engineers who fine-tuned the model, the reasoning behind AI is shrouded in mystery. Not having a definite answer can create skepticism toward using AI-generated output for crucial decision-making.
With the growing requirement for responsible AI, organizations must ensure that their AI outputs can be explained to stakeholders and customers. Moreover, organizations must also satisfy regulatory requirements that can mandate an explanation of AI output.
This is where Explainable AI (XAI) comes into the picture. It helps demystify the reasoning of these AI black boxes, improving trust in their output. It also empowers users to compare the effects of each input data point on the final output.
“Explainable AI is the set of techniques/methods that could easily explain factors influencing an AI decision.”
Before we start the dissection of explainable AI, we must address two terms that are often confused: interpretability and explainability. How are they different, and why must we not conflate the capabilities of explainable AI and interpretable AI?
Explainability refers to the ease of understanding “why did a certain model make a specific decision?”. The main goal is to grasp the reasoning behind the output. This is about the “why” and involves explaining the model’s behavior and revealing the factors affecting the output.
On the other hand, interpretability is an understanding of the inner workings of an AI model. It focuses more on the “how” side of things, “how the AI generated an output”. A model will be called interpretable if it is transparent enough for you to understand how it functions just by looking at its parameters and structure.
This blog will delve into Explainable Artificial Intelligence, XAI, while we will leave the topic of interpretable AI for another time.
But why do we need an explanation for an output if the AI performed as intended?
Explainability offers many advantages. For starters, when users understand the reason behind an output, they do not feel they are blindly trusting the AI; it instills confidence in the generated output. Moreover, when organizations include AI-generated insights in their decision-making process, it becomes easier to explain those decisions to stakeholders, increasing transparency and trust among consumers.
Unfortunately, it has been observed that the ability to explain an AI model comes at the cost of the model’s capabilities, i.e., the more capable the model, the tougher it is to explain its output. For example, a model based on a decision tree can easily be explained by simply tracing the branches used to arrive at a decision, as the sketch below shows. A forecasting model based on a random forest, by contrast, is far harder to explain.
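To make the contrast concrete, here is a minimal sketch of how a decision tree’s full reasoning can be printed as plain if/else rules. The dataset and model below (scikit-learn’s Iris example) are illustrative stand-ins, not from the original article:

```python
# White-box illustration: a decision tree's entire logic can be
# rendered as readable rules, so any prediction is explained by
# tracing one path. Dataset and model here are illustrative only.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# export_text prints every branch as an if/else rule.
print(export_text(tree, feature_names=list(data.feature_names)))
```

No equivalent readout exists for the hundreds of trees inside a random forest, which is why such models need dedicated XAI techniques.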
You must have heard AI being called a “black box”. However, as we have seen, not all AI models are difficult to understand. Hence, based on the ease of explainability, AI models are categorized into three main groups- white box, black box, and grey box. Understanding these categories helps you navigate the trade-offs between transparency and capability that these AI models offer.
White boxes- These models are easy to understand and are highly transparent. They break down complex problems into a series of clear steps, making them easier to understand. However, this transparency comes at the cost of accuracy and capabilities.
Black boxes- These models behave like opaque boxes: it is very difficult to interpret how exactly they arrive at their outputs, even though they are highly accurate and can perform difficult tasks. They usually contain complex layers of interconnected nodes and mathematical functions, making them hard to understand.
Grey boxes- They lie somewhere between white boxes and black boxes, i.e., they offer less interpretability than white boxes but aren’t as opaque as black boxes.
We can all agree that the ability to understand AI reasoning already sounds powerful. But the story doesn’t end here. With explainability come various other advantages -
1. Improved decision-making- XAI helps users understand the reasoning behind an AI model’s output. This transparency enables decision-makers to combine AI insights with their expertise for more informed decision-making.
2. Fine-tuning/Debugging- Engineers and analysts can identify areas of improvement by understanding the AI model’s decision-making. For example, they can check whether the model is assigning appropriate weightage to important factors.
3. Identifying biases- XAI can identify the factors causing biases, helping you resolve them with confidence.
4. Meeting regulatory requirements- In some domains, such as finance, medicine, and the judiciary, it is mandatory to provide the reasoning behind a decision when using AI insights.
5. Trust building- With proper reasoning, stakeholders and consumers find it easier to trust AI-assisted decision-making.
XAI methods follow different techniques to understand the outputs of AI. The choice of method depends on the business purpose to be served and on the type of AI model: the generative AI and predictive AI models you have implemented will require different XAI approaches. Let’s look at a few of these XAI techniques/methods-
1. Simple Local Analysis:
Think of it as taking a close-up look at a single prediction. Through this method, we try to understand the “why” behind each individual decision. This is useful for explaining specific outcomes to end users, for example why one particular application was flagged.
Techniques used- Local Interpretable Model-Agnostic Explanations (LIME) and Partial Dependence Plots (PDPs).
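As a hedged sketch of how LIME is typically applied (the dataset and model below are illustrative assumptions, not from the article), you ask the explainer why the model classified one specific row the way it did:

```python
# LIME sketch: explain a single prediction of a tabular classifier.
# Model and dataset are illustrative stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Which features pushed this one instance toward its predicted class?
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
print(explanation.as_list())  # top (feature, weight) pairs for this row
```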
2. Deep Model Analysis:
With this method, we shift our focus from understanding the reason behind a single prediction to the overall mechanics of the model. It investigates how different inputs impact the model’s output and how different features interact with each other. This can be used for auditing the model’s overall behavior, for example ranking features by their global importance.
Technique used- SHapley Additive exPlanations (SHAP).
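Here is a minimal, hedged sketch of SHAP in that global role, with an illustrative regression model (not from the article). The summary plot ranks features by their impact across the entire dataset, not just one prediction:

```python
# SHAP sketch: compute per-feature Shapley values for every row,
# then summarize them globally. Model and data are illustrative.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# A global view: features ranked by overall impact across all rows.
shap.summary_plot(shap_values, X)
```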
3. Surrogate Model Development:
It involves creating a simpler, easily interpretable model that mimics the behavior of a more complex one. Imagine it as a scaled-down stand-in for the original model. This is also useful for explaining complex models to non-technical stakeholders.
Techniques used- generalized additive models, short decision trees, linear models.
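A minimal sketch of a global surrogate follows; the models and data are illustrative assumptions. The key move is to train a shallow decision tree on the black box’s predictions, then read the tree:

```python
# Surrogate sketch: fit a shallow, readable tree to the *predictions*
# of a complex model, so it mimics the model rather than the raw labels.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()

# The "black box" we want to approximate.
black_box = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(data.data, black_box.predict(data.data))

# Fidelity: how often the surrogate agrees with the black box.
print("fidelity:", surrogate.score(data.data, black_box.predict(data.data)))
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

Note that a surrogate is judged by fidelity (agreement with the black box), not by accuracy against the true labels.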
4. Contrastive Analysis:
It is a “what-if” scenario analysis that looks at how changing or removing specific input values affects the model’s output. This is useful for offering actionable recourse, for example showing what would need to change for a rejected application to be approved.
Technique used- Counterfactuals
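A hand-rolled, hedged counterfactual sketch follows; dedicated libraries such as DiCE do this more carefully, and every model and dataset choice here is an illustrative assumption. The idea: perturb one feature at a time until the decision flips.

```python
# Counterfactual sketch: nudge one feature of an instance until the
# model's decision flips. All names here are illustrative only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

x = data.data[0].copy()            # pick one instance to explain
original = model.predict([x])[0]

# Scale each feature up/down in small steps and report the first
# single-feature change that flips the prediction: a "what-if".
for i, name in enumerate(data.feature_names):
    for step in np.linspace(-0.5, 0.5, 21):
        candidate = x.copy()
        candidate[i] = x[i] * (1 + step)
        if model.predict([candidate])[0] != original:
            print(f"Changing '{name}' by {step:+.0%} flips the decision")
            break
    else:
        continue
    break
```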
Explainable AI is a growing trend. As AI grows in complexity and reach, so does the need to understand how it works. Explainable Artificial Intelligence has already found uses in many industries.
Let’s explore the case of Wells Fargo, which improved loan rejection transparency with an XAI implementation.
Wells Fargo used AI to automate its loan application process. To get a clear understanding of why the AI approved or rejected an application, Wells Fargo implemented an explainable AI model called LIFE. It generates codes that represent different reasons for loan rejection: one code might indicate a high debt-to-income ratio, while another indicates a FICO score below the minimum requirement. Their model considered anywhere from 40 to 80 such variables when explaining a rejection. This improved the transparency of their system and helped applicants understand the factors behind rejections, potentially improving their chances of securing future loans.
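The internals of LIFE are not public, so the following is only a generic, hypothetical sketch of the reason-code idea: map each feature to a human-readable code and report the features that counted most against an application.

```python
# Hypothetical reason-code sketch; the feature names, codes, and
# attribution numbers below are invented purely for illustration.
REASON_CODES = {
    "debt_to_income": "R01: debt-to-income ratio too high",
    "fico_score": "R02: FICO score below minimum requirement",
    "credit_history": "R03: insufficient credit history",
}

# Pretend per-feature attributions for one rejected application
# (negative values pushed the decision toward rejection).
attributions = {"debt_to_income": -0.42, "fico_score": -0.31, "credit_history": 0.05}

# Emit codes for the strongest negative contributors, worst first.
for value, feature in sorted((v, k) for k, v in attributions.items() if v < 0):
    print(REASON_CODES[feature])
```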
Let’s look at some of its uses in the following industries-
Financial services-
- Transparent and open loan and credit approval processes improve customer trust and experience.
- Decision-making that requires assessing credit risk, wealth management, and financial crime risk is accelerated.
- With an appropriate explanation for each decision, the resolution of issues and complaints is sped up.
- It boosts confidence in AI-generated pricing, investment services, and product recommendations.
Healthcare-
- Slower processes like diagnostics and image analysis can be completed more quickly, while the pharmaceutical approval process is streamlined.
- It helps to improve traceability and transparency in decision-making for patient care.
Criminal Justice-
- DNA analysis, prison population analysis, and crime forecasting can be optimized, speeding up decision-making.
- Potential biases in training data and algorithms, which can be dangerous in critical decisions, can be detected.
With its capability to explain an AI model’s output, explainable AI bridges the gap between human understanding and AI decision-making. As AI models become more complex and capable, the principles of XAI will be crucial to the development of responsible AI, helping ensure that models are not only capable but also understandable and ethical. Implementing XAI is not just a technical necessity; it is also a moral imperative.
About Author
Data Whisperer
Listening to the silent stories that data has to tell.