Pooja Vijaykumar

We need to talk about Explainable-AI.

So, you’ve built this really complex ML model or neural network to solve a business problem for your organization. The code runs with zero errors, every semicolon and parenthesis in place, your evaluation metrics are off the charts, and you're patting yourself on the back with a smirk at the stakeholders' meeting. That is, until you’re hit with one question: “But why is it doing that?”


You’re stumped. How do you go about explaining mathematical equations to a bunch of business executives? They don’t necessarily care about y = mx + c or backpropagation. They don’t want to hear about weights, biases, and residuals. All they want to know is how the model arrived at its results, so that they can explain those decisions to customers in turn.


How do you explain decisions made by a MODEL?


We have cognitive psychology to explain human behavior and decision-making (“explain” is quite the understatement, because humans are, after all, walking, talking beings of utter randomness). But what about machine learning and deep learning algorithms? How do you capture the decision-making process of a black box in a way that convinces business executives?


Explainable-AI.


One of the biggest challenges of using AI solutions in enterprises is the lack of transparency, since the technology involved is a “black box” of sorts. We need to understand that data scientists aren’t the only ones trying to rule the world (how fun would that be?). It is of utmost importance to gain the trust of non-tech folks, customers, government officials, executives, and others in order to succeed as an organization while making an impact.


People need answers, not just predictions.


Explainable-AI, or explainable ML, is a suite of procedures and techniques, collectively known as explainable artificial intelligence (XAI), that help human users understand and trust the inner workings and outputs of a machine learning model. Explainable AI describes an AI model, its anticipated effects, and its potential biases, and it helps characterize model correctness, fairness, and transparency in AI-supported decision-making. When putting AI models into production, a business must first establish trust and confidence. Explainability helps a company take a responsible approach to AI development, and with XAI gaining popularity, it is high time organizations incorporated these techniques into their analytical problems and solutions.


Here are some of the popular X-AI libraries in use, illustrated with a dataset containing the loan acceptance/rejection status of customers along with various features describing them:


LIME: Local Interpretable Model-Agnostic Explanations

By building an interpretable model locally around a prediction, LIME can "explain the predictions of any classifier in an interpretable and faithful manner," according to its authors. In practice, LIME perturbs the input around the instance being explained, observes how the black-box model's predictions change, and fits a simple, interpretable surrogate (such as a sparse linear model) that approximates the model's behavior in that local neighborhood.


LIME in action: Explanations for why a customer gets their loan rejected
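
Below is a minimal sketch of how such an explanation might be produced with the lime Python package. The toy loan dataset, the feature names (Risk_Score, Debt.To.Income.Ratio, Annual_Income), and the random-forest model are illustrative assumptions, not the exact data behind the figures in this post.

```python
# Minimal sketch: LIME on a hypothetical loan-approval classifier.
# The dataset, feature names, and model below are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Toy stand-in for the loan dataset
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "Risk_Score": rng.normal(650, 50, 1000),
    "Debt.To.Income.Ratio": rng.uniform(0, 0.6, 1000),
    "Annual_Income": rng.normal(60000, 15000, 1000),
})
y = ((X["Risk_Score"] > 640) & (X["Debt.To.Income.Ratio"] < 0.35)).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Build a local surrogate explanation for one customer's prediction
explainer = LimeTabularExplainer(
    training_data=X.values,
    feature_names=X.columns.tolist(),
    class_names=["Rejected", "Accepted"],
    mode="classification",
)
exp = explainer.explain_instance(X.values[0], model.predict_proba, num_features=3)
print(exp.as_list())  # (feature condition, weight) pairs for this one prediction
```

The list of (feature condition, weight) pairs is what a plot like the one above visualizes: which feature values pushed this particular customer toward rejection and which pushed toward acceptance.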


SHAP: SHapley Additive exPlanations

The Shapley value of a feature is its average marginal contribution to the prediction across all potential coalitions. Coalitions are essentially just subsets of the other features to which the feature of interest can be added when calculating its Shapley value.
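
For reference, this is the standard game-theoretic definition of the Shapley value of feature i, where N is the set of all features, S ranges over coalitions that exclude i, and v(S) is the model's prediction using only the features in S (this is the textbook formulation, not anything specific to the SHAP library):

```latex
\phi_i \;=\; \sum_{S \subseteq N \setminus \{i\}}
\frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,
\bigl[\, v(S \cup \{i\}) - v(S) \,\bigr]
```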

SHAP is a unifying method that may be used to explain the output of any machine learning model. It links game theory with local explanations by combining multiple earlier approaches, and it is the only consistent and locally accurate additive feature attribution method based on expectations.

SHAP in action: Explanations for the predicted loan acceptance probabilities of two customers. The plot above shows Customer 1, whose loan is rejected, while the plot below shows Customer 2, whose loan is accepted.
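
Here is a hedged sketch of how per-customer explanations like these might be produced with the shap package, reusing the same hypothetical toy loan model as in the LIME sketch above. Note that the exact output shape of TreeExplainer can vary between shap versions.

```python
# Sketch: per-customer SHAP explanations for a toy loan model.
# The dataset and model are illustrative assumptions, not the post's real data.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "Risk_Score": rng.normal(650, 50, 1000),
    "Debt.To.Income.Ratio": rng.uniform(0, 0.6, 1000),
    "Annual_Income": rng.normal(60000, 15000, 1000),
})
y = ((X["Risk_Score"] > 640) & (X["Debt.To.Income.Ratio"] < 0.35)).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree-based models
explainer = shap.TreeExplainer(model)
sv = explainer(X.iloc[:2])  # explanations for two customers

# Waterfall plot of each customer's contributions toward the "accepted" class
# (in recent shap versions sv has shape [samples, features, classes])
shap.plots.waterfall(sv[0, :, 1])  # Customer 1
shap.plots.waterfall(sv[1, :, 1])  # Customer 2
```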


Partial Dependence Plots

The partial dependence plot (short PDP or PD plot) shows the marginal effect that one or two features have on a machine learning model's predicted outcome. A partial dependence plot can demonstrate whether the relationship between the target and a feature is linear, monotonic, or more complex.

PDP in action: the interaction between two features, ‘Risk_Score’ and ‘Debt.To.Income.Ratio’, which are the two main features to keep an eye on in this business problem
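
A short sketch of how such a plot could be generated with scikit-learn's PartialDependenceDisplay, again using the hypothetical toy loan model from the earlier sketches:

```python
# Sketch: partial dependence plots for a toy loan model (illustrative data/model).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "Risk_Score": rng.normal(650, 50, 1000),
    "Debt.To.Income.Ratio": rng.uniform(0, 0.6, 1000),
    "Annual_Income": rng.normal(60000, 15000, 1000),
})
y = ((X["Risk_Score"] > 640) & (X["Debt.To.Income.Ratio"] < 0.35)).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

# One-way PDPs for each feature plus a two-way PDP for their interaction
PartialDependenceDisplay.from_estimator(
    model,
    X,
    features=["Risk_Score", "Debt.To.Income.Ratio",
              ("Risk_Score", "Debt.To.Income.Ratio")],
)
plt.tight_layout()
plt.show()
```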


DeepLIFT: Deep Learning Important Features

DeepLIFT is a helpful explanation method in the notoriously challenging field of deep learning.

It operates via a form of backpropagation: starting from the output, it works backwards through the network and dissects the result by "reading" how much the different neurons contributed to producing it, relative to a reference input.

In essence, it is a way of tracing which input features the network actually relied on (as the name indicates).

DeepLIFT provides something called “contribution scores”: values that quantify the positive or negative impact of each input on the value/likelihood of an event.

DeepLIFT in action: contribution scores of the features that drive the likelihood of a customer getting their loan accepted/rejected
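
Below is a minimal sketch of DeepLIFT-style contribution scores using the Captum library for PyTorch. The tiny network, the (already scaled) customer record, and the all-zeros reference input are illustrative assumptions, and the network is left untrained here purely to show the API.

```python
# Sketch: DeepLIFT contribution scores via Captum (illustrative network and inputs).
import torch
import torch.nn as nn
from captum.attr import DeepLift

# Toy loan-approval network: 3 scaled inputs -> probability of acceptance
model = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
model.eval()

# One customer's (standardized) feature vector and an all-zeros reference input
customer = torch.tensor([[0.7, -1.2, 0.3]])
reference = torch.zeros_like(customer)

# DeepLIFT backpropagates the difference from the reference through the network
dl = DeepLift(model)
scores = dl.attribute(customer, baselines=reference)

# Positive scores push the prediction toward acceptance, negative toward rejection
print(scores)
```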



All the methods mentioned above have Python libraries available. There are others such as ELI5, Yellowbrick, Alibi, and Lucid, to name a few.


In conclusion, X-AI presents a major competitive advantage in a market where only a few competitors use its capabilities. Investment in XAI will likely continue to increase in the foreseeable future, and investing early might help companies that already use AI technologies to maintain the availability of their services.

